
CalcProgrammer1

@CalcProgrammer1@lemmy.ml

Software Engineer, Linux Enthusiast, OpenRGB Developer, and Gamer

Lemmy.world Profile: lemmy.world/u/CalcProgrammer1


CalcProgrammer1 ,

GitLab has gone downhill over the past several years to the point where I can't recommend it anymore. Requiring a credit card is a kick in the face to younger devs wanting to get their feet wet in open source. The CI minutes that free accounts and FOSS projects get are insultingly pathetic. Their open source program, which you have to apply for, is intentionally annoying: you have to manually get re-approved every year, and the benefits only apply to FOSS projects under a group, not a personal account. It's tolerable if you self-host your own runners and forget their shit excuse for a managed CI exists, but I'm also running into a super annoying issue where I get signed out of GitLab almost daily and have to log back in and enter a verification code from my email. I have my project mirrored to Codeberg, and if Codeberg had better CI I'd move completely, even if the CI were self-hosted. GitLab has gone way downhill since I moved to them after MS bought GitHub.

CalcProgrammer1 ,

I don't want to move my project to a group, which is the only way to use those minutes. It used to be that any public project with a FOSS license got access to the FOSS minutes, but now only the projects they approve do, and as I said, there are restrictions like having to put the project under a group. At least gitlab-runner is self-hostable, but the whole thing is a depressing mess compared to what it used to be.
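For anyone going the self-hosted route like me, wiring a project to your own runner is at least simple. A minimal sketch (the tag name here is just an example; register the runner with gitlab-runner register and give it the same tag):

```yaml
# .gitlab-ci.yml - route jobs to a self-hosted runner via tags
build:
  stage: build
  tags:
    - self-hosted   # must match a tag you gave your runner at registration
  script:
    - make
```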

CalcProgrammer1 ,

Hopefully this knocks down Tesla's dominance in the charger ecosystem, honestly; we need competitors that aren't tied to a single vehicle manufacturer to take over. Yes, Tesla was going to open their network up to third-party cars, but they're taking their sweet time doing so. I hope competitors are able to swoop in, hire the talent, and take over the broken contracts on abandoned charging station projects.

CalcProgrammer1 ,

I would love to see gas stations putting in EV chargers, especially gas stations known for their food and snacks, or travel stops with restaurants, because of the additional time it takes to charge an EV vs. filling a gas car. It would also be nice to see established companies run EV chargers that just let you pay by card at the "pump" like you do for gas, rather than the app-and-account bullshit that all the mainstream networks have.

CalcProgrammer1 ,

I find 1080p to be too small these days. For desktop use I like 1440p or 2160p (4K). For video I don't notice the difference between 1080p and 4K too much, but for productivity it is a massive step up. My laptop has a 14" 1440p screen and I have an older laptop with a 13" 1440p screen. I use both at 100% scaling (no enlargement) and it's fine. I don't find it hard to read, and I love having the extra screen real estate for coding and multitasking. Being able to have two windows side by side and still have enough room on each for a decent-length line of code is great.

For my desktop, I used a 28" 4K for a long time, and being able to have four 1080p windows open is amazing. 28" 4K is the same PPI as 14" 1080p, and I'm already comfortable with 14" 1440p, so from a reasonable distance it's no problem. After that I went to a 27" 1440p for a while because I upgraded to a 144Hz VRR display, but just last fall I upgraded again to a 32" 4K 144Hz VRR display and it's great. No problem reading text at 100% scaling from a normal distance, and it's amazing for games. I do notice games being clearer at 4K, but I mainly got the 4K monitor for productivity; I missed it, and now that 144Hz 4K was available I wanted it back.
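Quick sanity check on the PPI claim, since 4K at 28" is exactly double 1080p at 14" in both resolution and diagonal:

```latex
\mathrm{PPI}_{28''\ \mathrm{4K}} = \frac{\sqrt{3840^2 + 2160^2}}{28} \approx 157
\qquad
\mathrm{PPI}_{14''\ \mathrm{1080p}} = \frac{\sqrt{1920^2 + 1080^2}}{14} \approx 157
```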

CalcProgrammer1 ,

Recommendations and App Promotions sound an awful lot like ads to me. Showing me things I didn't ask for that you want to sell me... that's called advertising, and I don't care what dumb name you give it, they're still ads. Show me only what I actually want to see: the stuff I explicitly choose to pin to my personalized Start menu.

CalcProgrammer1 ,

Same. I started really using Linux with Ubuntu 6.06 and was drawn in by its "Linux for human beings" goals - the Ubuntu homepage of the era really pushed the ideals of community and openness. Canonical sat in the background paying to send you free CDs in the mail. It was such an idealistic thing back then.

And then it all changed around 2010. The color scheme shifted to a shitty MacOS lookalike, the human elements were dropped, the logo was reworked, it got bundled with a paid music store, then came Amazon ads in the search, and it's been a downward spiral ever since. I switched to Debian not long after the initial enshittification in the early 2010s and have not looked back, though I moved most of my systems to Arch a few years back because I like life in the fast rolling-release lane and Debian wouldn't support my new GPUs.

CalcProgrammer1 ,

CentOS is good (after they betrayed open source) but Debian is bad (even though they remain one of the distros most independent of corporate influence and serve as the upstream for over half the list)? What even is this nonsense? I agree that Ubuntu and its official derivatives are maliciously bad and that Manjaro is completely pointless, but that's about all I agree with.

CalcProgrammer1 ,

I would say we're beyond the era of "PC" referencing the classic "x86 IBM Personal Computer compatible" definition. PC could reasonably be considered to include many ARM systems, considering there are now Windows laptops shipping with ARM processors that can run "PC" software. Besides, most new x86 PCs aren't IBM PC compatible anyways, as legacy BIOS support has been dropped by a lot of UEFI implementations. I would consider any device that runs a desktop-style OS (be it Windows, Linux, or even MacOS) a PC.

The distinction in my mind is specifically mobile vs. desktop. Android and iOS are not PCs: they're primarily touch-driven, and apps are restricted to a certain format with a centralized app store where you are expected to get all of your apps. Windows/Linux/MacOS are primarily keyboard-and-mouse driven, and you have a lot more flexibility in acquiring new apps; their equivalents of "sideloading" and "rooting/jailbreaking" are just normal, accepted usage rather than workarounds/hacks to break out of a walled garden. I would even go as far as saying a smartphone can be a PC if you put a PC-like OS on it, such as the mobile Linux OSes that let you run desktop applications.

CalcProgrammer1 ,

As a user and not a government agent, why should I care? If anything, having a foreign government hoard my data and spy on me is better than having the government that actually has jurisdiction over me do it. If I were posting things critical of my own government, I would rather a foreign government hoard that data than my own. There's a lot more of a chance that US data hoarding leads to action against US citizens than Chinese data hoarding does.

I don't see how this benefits average Americans in any way. This helps the government and corporations.

CalcProgrammer1 ,

The domestic social media companies are at the whims of the billionaire class which I would argue is just as bad for voter influence. Neither side wants you to vote in your best interest.

CalcProgrammer1 ,

Hopefully more cooperating with than competing against. If NVK is good, Linux users will buy more NVIDIA cards, and I don't see NVIDIA being too opposed to that. Also, if you look at the Mesa merge requests for NVK, there have been a few from @nvidia.com emails; at least a few NVIDIA people are following and contributing, even if only a little (one MR I saw was regarding an unknown bit that turned out to be an NVIDIA-internal test environment flag). NVIDIA also hired the former nouveau kernel-side maintainer, and he just published a large nouveau patch set. I really hope we're seeing NVIDIA move toward acceptance of the open driver stack even if they continue to develop and push their proprietary one. Given their focus on AI and compute, maybe they see letting Mesa handle graphics as less of a concern now. Maybe they want to get everything running on an upstreamable kernelspace driver. Who knows, but it's definitely looking better than it ever has for them.

CalcProgrammer1 ,

Yeah, the lack of proper discoverability on i2c truly sucks. You have to just poke random addresses and hope for the best to see if an i2c device exists on the bus. It's a great standard, but I wish it would get updated with some sort of plug-and-play autodetection feature. A standardized device PID/VID system like USB and PCI have would be acceptable, or a standardized register that returns a part string. Anything other than blindly poking registers and hoping you're not accidentally overvolting the CPU because a register on the device you expected happens to overlap with the "overvolt the CPU" register of some other device at the same address.
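"Discovery" on i2c today is literally just a brute-force scan, which is what i2cdetect from i2c-tools does. Something like this (bus number 1 is just an example, and note that even probing can upset write-sensitive devices, which is exactly the problem):

```sh
# list the i2c buses the kernel knows about
i2cdetect -l

# scan bus 1 for devices that ACK; -y skips the confirmation prompt,
# -r forces SMBus "receive byte" probing instead of quick-write
i2cdetect -r -y 1
```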

CalcProgrammer1 , (edited)

Except that in the case of VGA (and DVI, HDMI, and DisplayPort) the i2c interface is intended for use over the cable. All of those ports have a pair of i2c pins and corresponding wires in their cables. The i2c interface is used for DDC/EDID, which is how the computer identifies the capabilities and specifications of the attached display. DDC even provides some rarely-used control functionality, probably the most useful of which is being able to control the brightness of the display from software. I use the ddcci module on Linux and it lets me control my desktop monitor's brightness the same way a laptop would, which is great. I have no idea why this isn't widely used.
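As a sketch of what that looks like in practice (the ddcci* device name varies by i2c bus, so treat the path as an example; ddcutil's VCP feature 0x10 is the standard brightness control):

```sh
# with the ddcci kernel module loaded, the monitor shows up as a normal
# backlight device that desktops can control like a laptop panel
echo 60 | sudo tee /sys/class/backlight/ddcci*/brightness

# or talk DDC/CI directly with ddcutil; feature 10 (hex) is brightness
ddcutil getvcp 10
ddcutil setvcp 10 60
```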

Edit:

This i2c interface is also widely used to control the lighting on modern graphics cards that have RGB lighting. We've spent a lot of time reverse engineering these chips and their i2c protocols for OpenRGB. GPU chips usually have more i2c buses than the cards have display connectors, so the RGB controller gets wired to one of the unused buses. I think AMD GPUs tend to have 8 separate i2c buses, but most cards only use 4 or 5 of them for display connectors. There is also an i2c interface on RAM slots, normally used for reading the SPD chip that stores RAM module specifications, timings, etc. That same interface is used by RAM modules with controllable RGB lighting.

CalcProgrammer1 ,

Squeekboard is where it's at. By far my favorite onscreen keyboard for Linux and mainly because you can easily create your own layouts using .yaml files. I'm tired of virtual keyboards that omit keys needed for development and terminal use or shove them off to separate tabs. My custom Squeekboard layout fits my needs exactly and I'm pretty fast at typing on it (typing this on it now). I wish it were usable outside of Phosh, though tbf I haven't tried. Between GNOME Mobile, KDE Plasma Mobile, and Phosh (Squeekboard), I choose Phosh primarily because of how much I like Squeekboard.
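As a rough sketch of what a layout file looks like (from memory, so treat the schema as approximate and crib from the layouts bundled with Squeekboard for the real format; user layouts go under ~/.local/share/squeekboard/keyboards/ if I remember right):

```yaml
# sketch of a Squeekboard layout - schema from memory, check upstream examples
views:
  base:
    - "q w e r t y u i o p"
    - "a s d f g h j k l"
    - "Shift z x c v b n m BackSpace"
    - "show_numbers space . Return"
buttons:
  Shift:
    action:
      locking:
        lock_view: "upper"   # requires an "upper" view to be defined
        unlock_view: "base"
```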

CalcProgrammer1 ,

Watched this the other day, great documentary! I played Oregon Trail 2 in school in the '90s and we ended up getting it for our home PC. Nice to learn the history behind the game in such detail.

Recommendations for an authentic mousepad?

Hiya peeps, my mouse mat has served its time, and it is time for a new one. Until now, I've had one of these huge ones, that cover half the desk. But I think this time I'm gonna aim for a normal-sized one. I'd also like to avoid cheap Chinese wares or anything low-quality. Anyone got any nice recommendations for mousepads? Do...

CalcProgrammer1 ,

I use a HYTE CNVS deskpad and a Razer Firefly hard-surface mousemat. I've found I prefer hard mats over cloth ones. I also really like the Razer Mamba Hyperflux, but they don't make it anymore. It's a Firefly mouse mat that wirelessly powers the included mouse, which is a really neat design, though it doesn't work well if your desk has metal supports under where the mousemat goes. For that reason I use it at work, not at my gaming setup.

CalcProgrammer1 ,

I'm cautiously optimistic. While I could see NVIDIA hiring him to stifle nouveau development, it doesn't really seem worth it when he had already quit as maintainer and Red Hat is already working on nova, a replacement for nouveau. I got into Linux with Ubuntu 6.06 and remember the situation then: NVIDIA and ATI both had proprietary drivers and little open source support, at least for their most recent chipsets of the time. I was planning on building a new PC and going with an NVIDIA card because ATI's drivers were the hottest of garbage and I'd had a dreadful experience going from a GeForce 4 MX420 to a Radeon X1600Pro. However, when AMD acquired ATI they released a bunch of documentation. They didn't immediately start paying people to write FOSS Radeon drivers, but the community (including third-party commercial contributors) started writing drivers from those documents, and Radeon support quickly got way better. Only after there was a good foundation in place do I remember seeing news about official AMD-funded contributions to the Mesa drivers. I hope that's what we're now seeing with NVIDIA. They released "documentation" in the form of their open kernel modules for their proprietary userspace, as well as reworking features into GSP to make them easier to access, and now that the community-supported driver is maturing they see it as viable enough to directly contribute to.

I think the same may have happened with the Freedreno and Panfrost projects too.

This is my cautious optimism: I hope they follow this path like the others and don't use this to stifle the nouveau project. Besides, stifling one nouveau dev would mean no other nouveau/nova/Mesa devs would accept future offers from them. They can't shut down the open driver at this point, and the GSP changes seem like they purposely enabled this work to begin with. They could've just kept the firmware locked down and nouveau would've stayed essentially dead indefinitely.

CalcProgrammer1 ,

I don't really see why they would hire him to achieve this goal. He had already quit as maintainer; he was out of the picture unless he resigned specifically because he accepted an offer from NVIDIA. But if that were the case and they wanted nouveau stopped, then why is he now contributing a huge patch set? If they hired him and he quit nouveau, they could've had him work on the proprietary driver or their own open out-of-tree kernel driver, but they specifically had him (or at least allowed him to) keep working on nouveau.

Also, if they really wanted to EEE nouveau into oblivion, they would need to get every single prominent nouveau, nova, and NVK developer on payroll simultaneously before silencing them all, because once one gets silenced, why would any of the others even consider an NVIDIA offer? Especially those already employed at Red Hat. It doesn't really make sense to me as an EEE tactic.

What has been apparent over the past few years is that NVIDIA seems to be relaxing their iron grip on their hardware. They were the only ones who could enable reclocking in such a way that it would be available to a theoretical open source driver and they did exactly that. They moved the functionality they wanted to keep hidden into firmware. They had to have known that doing this would enable nouveau to use it too.

Also, they're hopping on this bandwagon now that NVK is showing promise as a truly viable gaming and general-purpose driver. Looking at the AMD side of things, they did the same thing back when they first started supporting Mesa directly: they released some documentation, let the community get a minimally viable driver working, and then poured official resources into making it better. I believe the same situation happened with the Freedreno driver, with Qualcomm eventually contributing patches officially. ARM also announced their support of the Panfrost driver for non-Android Linux use cases only after it had been functionally viable for some time. Maybe it's a case of "if you can't beat them, join them," but we've seen companies eventually start helping out on open drivers after dragging their feet for years several times before.

CalcProgrammer1 ,

I mean, the open source driver is already out. The nouveau driver has been in the kernel for like a decade now, and the userspace part has been in Mesa for just as long, though it went largely unused due to nouveau not being able to use high clock speeds. That isn't the case anymore, and since the beginning of the year you've been able to test drive the new NVK Vulkan driver on nouveau with GSP enabled to get actually reasonable performance in several select games. NVIDIA isn't creating a new driver; they're contributing to one that already exists. Since this particular patch set is so huge, I don't know if it will make it into the next kernel release right away, but this guy was the former nouveau maintainer, so I expect he knows the necessary standards to get his code accepted.

CalcProgrammer1 ,

Fuck Riot. Never playing their games again. If you're going to have a shitty anticheat, at least give people the option to play in anticheat-disabled lobbies. Besides, they should be doing anticheat at the server level, not spying on the boot sequence of client PCs. That shit is unnecessary for a fucking banking app, let alone a goddamn game. It's just a game; let us enjoy it rather than mounting such a ridiculously over-the-top response to cheating.

CalcProgrammer1 ,

Yeah, this headline is stupid ragebait. "RISC-V development board company chooses RISC-V chip for their latest RISC-V development board" doesn't have the same level of nonsensical anti-China rage to it.

CalcProgrammer1 ,

It's not just 32-bit on 64-bit: new Macs use ARM64 processors, so x86/x86_64 code is effectively obsolete on Mac. I would love to see Valve pour resources into a cross-platform x86-on-ARM64 emulation layer though; it would benefit Linux as well.

CalcProgrammer1 ,

Most gaming laptops these days don't do GPU switching anyways. They do render offloading, where the laptop display is permanently connected to the integrated GPU only. When you want to use the discrete GPU to play a game, it renders the game frames into a framebuffer on the discrete GPU and then copies each completed frame over PCIe into a framebuffer on the iGPU, which outputs it to the display. On Linux (Mesa), this feature is known as PRIME. If you have two GPUs and you run DRI_PRIME=1 <command>, it will run the command on the second GPU, at least for OpenGL applications. Vulkan seems to default to the discrete GPU no matter what. My laptop has an AMD iGPU and an NVIDIA dGPU and I've been testing the new NVK Mesa driver; render offloading seems to work as expected. I would assume the AMD Mesa driver works just as well for render offloading in a dual-AMD setup.
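An easy way to check which GPU an app lands on (glxinfo ships in the mesa-utils/mesa-demos package on most distros):

```sh
# default: the integrated GPU renders
glxinfo | grep "OpenGL renderer"

# offload this one invocation to the discrete GPU
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
```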

CalcProgrammer1 ,

I think it's the other way around. NVIDIA's marketing name for render offloading (muxless) GPU laptops is NVIDIA Optimus so when the Mesa people were creating the open source version they called it PRIME.

CalcProgrammer1 ,

Most gaming laptops these days don't support true GPU switching as it requires a hardware mux to switch the display between the GPUs. Every gaming laptop I've used from the past decade has been muxless and only used render offloading.

CalcProgrammer1 ,

On most of the laptops I've seen, the external ports are connected to the dGPU.

Explicit sync Wayland protocol has finally been merged! ( gitlab.freedesktop.org )

Since NVIDIA's drivers do not properly implement implicit sync, the absence of this protocol is the root cause of flickering with NVIDIA graphics on Wayland. This MR being merged means that Wayland might finally be usable with NVIDIA graphics with the next driver release....

CalcProgrammer1 ,

The AMD radv driver is best for gaming at the moment IMO. If you're stuck with NVIDIA hardware then yes, the proprietary driver is the best for gaming, as the open source driver is quite slow, but the good news is that this is rapidly changing after being stagnant for 5+ years. NVK is the new open source NVIDIA Vulkan driver in Mesa, and it just recently left experimental status to be included officially in the next Mesa release. Also, NVIDIA's GSP firmware changes mean that the open source nouveau kernel driver can finally reclock NVIDIA GPUs to high-performance clocks/power states, so it could achieve performance parity with the proprietary driver with enough optimization. On my RTX 3070 laptop it is still significantly slower and some games don't work yet, but there is none of the flickering or tearing that I experience with the proprietary driver. Unfortunately for GTX 10 series users, those cards do not use GSP firmware and still have no means of reclocking, so they will be stuck using only the proprietary drivers for the foreseeable future.
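If you want to try it on a Turing (RTX 20xx) or newer card, GSP was still opt-in when I tested; if I remember the switch correctly it's a nouveau config option on the kernel command line (check the nouveau docs for your kernel version to confirm):

```sh
# kernel command line option to opt in to GSP firmware on nouveau
nouveau.config=NvGspRm=1

# after rebooting, confirm the GSP firmware actually loaded
sudo dmesg | grep -i gsp
```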

CalcProgrammer1 ,

VRR has landed!!!!!!!!

Can't wait to try out the official version of GNOME VRR after using the patched mutter-vrr for several years now. It's a very solid VRR implementation and I feel it's better than KDE's. It's about time it made it into an actual GNOME release. I just wish they would've fully committed and added the VRR toggle in settings rather than hiding it behind an experimental flag. Hopefully GNOME 47 moves it out of experimental.

CalcProgrammer1 ,

I'm not sure. I don't know how or when DSC gets used. My new monitor is a 4K 144Hz display connected over DisplayPort and my GPU is a Radeon RX 7800XT. I don't think DSC is being used in this setup but I don't know for sure. I also used this display with an Arc A770 and GNOME VRR worked just fine there too, though I had to comment out a line in a udev rule that excluded VRR support on Intel GPUs for some reason.

CalcProgrammer1 ,

Not on my 1080 Ti. I have serious flickering in certain apps when using the latest NVIDIA proprietary drivers on Arch Linux with GNOME Wayland. Steam flickers and sometimes seems to fail to redraw properly. I've had some issues in Discord as well.

CalcProgrammer1 ,

Cloud gaming is a plague. More fuel for the "you will own nothing and be happy" camp. Let it die. GeForce Now was at least one of the better options since you just use their servers to play games from your owned library, but the whole concept is a plague nonetheless. Let streaming nonsense die. Streaming from your own PC is the only streaming solution that doesn't exist to weaken consumer ownership of their gaming experience.

CalcProgrammer1 ,

Nice review. I agree with others here that this phone is borderline a scam for the price, and with all the delays people had in receiving them. Performance seems on par with the $200 original PinePhone, which I had a similar experience with.

The one good thing that came out of Purism/the Librem 5 is Phosh. It's a pretty good phone shell/UI for other, more capable Linux phones to use. I particularly like Phosh for its on-screen keyboard, Squeekboard, which allows for custom keymaps.

CalcProgrammer1 ,

AOSP and even factory kernel source tend to be only mildly useful for proper Linux phone use. Android phones tend to ship with old kernel revisions that the chip maker forked a long time ago and developed their chip drivers on, without following accepted kernel conventions or submitting any code to the actual kernel maintainers for proper review and integration into the up-to-date "mainline" kernel. Because of this, and the fact that phone makers need to constantly ship new products out the door, the quality of the code added onto the old kernel is often garbage: poorly commented, with no documentation, and usually no git history either.

There are other teams of people trying to clean up and/or rewrite these drivers from scratch in a way that is reviewable and acceptable in mainline. Only a small handful of the vast number of phone chips have such support, so proper Linux phones are limited to a small selection of hardware. The designed-for-Linux Librem 5 and PinePhone models intentionally chose old chipsets because those chipsets had good mainline support and thus could receive actual kernel updates, rather than being stuck forever on an ancient kernel release from the manufacturer that has long since been abandoned.

Lately the Qualcomm Snapdragon SDM845 chip has seen growing mainline Linux support and is quickly becoming one of the most viable chips for mobile Linux that isn't a complete dinosaur in terms of performance and power draw. The OnePlus 6 and 6T, which both use the SDM845, have become quite popular as Linux phones despite not yet having VoLTE and thus being useless for calls. I carry a OnePlus 6T as a secondary non-phone pocket PC because the Linux experience is very good aside from the lack of phone and camera functionality. It's fast, handles all my terminal and coding work, and runs full-fledged web browsers well.

How performant are the Intel Arc GPUs in linux?

I saw the other day about the new video of Hardware Unboxed where they benchmarked the Intel GPUs with newer drivers on Windows. I'm also interested in buying one but I'd like to know how good they are on Linux. Since the GPUs will be using Vulkan renderer on Linux, I was hoping they would be better overall, or rather have a...

CalcProgrammer1 ,

I've had an A770 Limited Edition since its release in late 2022. Overall, I'm happy with it. The drivers were a mess at launch, but now everything works as expected. Performance is decent in the games I play, though I have a 144Hz 4K monitor and the card isn't really capable of that resolution and refresh rate except in the lightest esports games, so I use FSR in most games. My most played game is Overwatch, and it hits 144Hz with dynamic resolution scaling on and medium settings. I want to buy a higher-end GPU eventually to really push this monitor, but I'm waiting to see what happens with the next generation of Intel and AMD cards (NVIDIA isn't even in the running unless NVK suddenly reaches performance parity with the proprietary drivers).

CalcProgrammer1 ,

Steam Deck is an open platform because you can run any OS, launcher, etc. on it. It's just a handheld PC. Steam itself is a closed ecosystem but the Deck is very open.

CalcProgrammer1 ,

GitLab used to be awesome; it was the place to go after MS bought out GitHub. They had premium access for all public projects under a FOSS license and top-tier CI. Then, as time went on, they began pulling support for various functions in a very Microsoftian EEE sort of way: first requiring credit cards for new users to access the CI, then taking away the CI almost entirely except for a practically useless monthly allotment, then taking away the premium access for public FOSS-licensed projects. If I were migrating today I would not choose GitLab, but it is where I settled after leaving GitHub, and my projects have grown to depend on GitLab CI, even if I'm now forced to run my own runners due to the extreme nerfs they've applied to the hosted CI. I mirrored OpenRGB to Codeberg, but since the CI pipelines depend on GitLab, I don't see Codeberg becoming the main hub anytime soon unless they can execute GitLab CI configs. Sad to see how far GitLab has fallen, though; it is unrecognizable from what it used to be as far as support for FOSS projects goes, especially given how GitLab itself started as a FOSS project.

CalcProgrammer1 ,

I left my old and unmaintained projects on GitHub, but I moved all my active projects to GitLab and any new projects go there too. I have them auto-mirrored back to GitHub though, as the more mirrors the better. I also recently set up a Codeberg mirror for some of my projects, though GitLab's CI is what is keeping me on GitLab, even though they nerfed the shit out of it a year or two back and made it basically a requirement to host your own runners, even for FOSS projects. I still hate them for that, and if Codeberg gets a solid CI option, leaving GitLab would make me happy. They too have seen quite a lot of enshittification in the years since Microsoft bought GitHub.

CalcProgrammer1 ,

Drastically nerfed the quotas. FOSS projects with a valid license used to have GitLab Premium access to shared runners; now even FOSS projects with a valid license get a rather useless 400 minutes. They also require new accounts to add credit card info just to use that paltry sum, which means FOSS projects can't rely on CI passing on forks to ensure a merge request passes the checks before merging: even if you have project-specific runners set up, forks don't use them, and neither do MRs.

I wish companies wouldn't offer what they can't support in the first place rather than pulling this embrace, extend, extinguish shit. I guess in GitLab's case there was no extend; it was just embrace FOSS projects, let them set up CI pipelines and come to depend on the shared CI runners as part of their merge request workflow for a few years, and then extinguish by yoinking that access away and fucking over everyone's workflow, leaving us scrambling to set up project-side runners and ruining checks on MRs.

CalcProgrammer1 ,

The stupid thing is that mutter-vrr works far better than Plasma's implementation in my experience. Plasma locks the refresh rate to max whenever your cursor is moving, causing games that use the cursor to stutter badly, while the mutter implementation refreshes the cursor at the game's rate as expected.

CalcProgrammer1 ,

Interesting, though I question why a battery-backed RTC is seen as so critically important. Of all the features I can think of wanting in a router, a battery-backed RTC doesn't even begin to make the cut. A device that is powered up 24/7 and connected to the Internet can just get NTP time whenever it boots and keep time using the OS. What is so necessary about an RTC here? I get that time is used for certificate verification and other security stuff, but again: NTP, and always powered. Are they concerned that NTP could be an attack vector?
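For what it's worth, verifying NTP sync is trivial; on a systemd distro it's one command (OpenWRT itself uses busybox ntpd, configured in /etc/config/system):

```sh
# is the system clock NTP-synchronized?
timedatectl status | grep "System clock synchronized"
```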

I'm interested in a new OpenWRT router, as my WRT1900ACS is getting older and its WiFi driver never had amazing support. Right now the Banana Pi R4 looks promising as an OpenWRT-supported WiFi 7 router, since most off-the-shelf WiFi 7 routers don't have OpenWRT support.

CalcProgrammer1 ,

I've used Raspberry Pis since the first model came out, plus other SBCs, and the lack of an RTC has never really been an issue. The Pi has synced its time by the time it makes it to the desktop. I can see an RTC being useful for early-boot timestamps, but the most useful such log (dmesg) just uses elapsed time since power-on anyways. I can also see it being useful for devices doing data logging without Internet or a regular power supply, like a remote sensor logger. I guess I just don't see it as a crucial component of a home router. I agree it's a cheap and useful addition, just maybe not the most essential one.

CalcProgrammer1 ,

Does this change run the 32-bit .exe using x86_64 instructions? From the description it just sounds like it allows 64-bit Linux libraries to be used in place of 32-bit ones, but the Windows layer still operates in native 32-bit mode. That means there is still a need to emulate 32-bit x86 instructions, which I don't think box64 can do at this time (box86 translates x86_32 to arm32, box64 translates x86_64 to arm64). If box86 could translate x86_32 to arm64 then this might work, as Wine would handle the conversion between 32-bit and 64-bit addressing and argument passing into the libraries, but I'm not familiar with the inner workings there.

The 6.7 kernel has been released ( lwn.net )

Some of the headline features in this release are: the removal of support for the Itanium architecture, the first part of the futex2 API, futex support in io_uring, the BPF exceptions mechanism, the bcachefs filesystem, the TCP authentication option, the kernel samepage merging smart scan mode, and networking support for the...

CalcProgrammer1 ,

This should allow nouveau to reclock NVIDIA RTX 2xxx and newer GPUs. Huge step forward for the open source NVIDIA drivers. I've been testing this on my laptop for a few weeks now with the rc kernels and the NVK driver, and it's pretty impressive so far.

CalcProgrammer1 ,

Performance is pretty good with my Arc A770 on Arch Linux. It’s had some growing pains but they’re pretty much all resolved now.

CalcProgrammer1 ,

Also, Brodie's podcast Tech Over Tea. I was on the podcast so I'm a bit biased, but he has a lot of open source developers from different projects on, and they are always interesting.

CalcProgrammer1 ,

I got to be on the Tech Over Tea podcast! I really enjoyed talking with Brodie and would definitely recommend his main channel as well as Tech Over Tea. There is another podcast I sometimes watch called Linux Game Cast too.

CalcProgrammer1 ,

I'm not sure you can go above 320kb/s on mp3; as far as I know that's the maximum bitrate the MP3 format supports. I have my music collection on my home server in FLAC, but I transcode to 320kbps constant-bitrate mp3 for my car and phone. I chose 320 because it's the highest I've seen mp3 converters able to go.
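For anyone wanting to do the same, the transcode boils down to one ffmpeg call with the LAME encoder pinned at constant bitrate (filenames are placeholders):

```sh
# FLAC -> 320 kbps constant-bitrate MP3 via the LAME encoder
ffmpeg -i input.flac -codec:a libmp3lame -b:a 320k output.mp3
```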
