
Max_P

@Max_P@lemmy.max-p.me

Just some Internet guy

He/him/them 🏳️‍🌈


Max_P ,

Easiest for this might be NextCloud. Import all the files into it, then you can get the NextCloud client to download or cache the files you plan on needing with you.

Max_P ,

I'd say mostly because the client is fairly good and works about the way people expect it to work.

It sounds very much like a DropBox/Google Drive kind of use case and from a user perspective it does exactly that, and it's not Linux-specific either. I use mine to share my KeePass database among other things. The app is available on just about any platform as well.

Yeah, NextCloud is a joke in how complex it is, but you can hide it all away using their all-in-one Docker/Podman container. Still much easier than getting into bcachefs over usbip and other things I've seen in this thread.

Ultimately I don't think there are many tools that can handle caching, downloads, going offline, and reconciling differences when back online, all in a friendly package. I looked and there's a page on Oracle's website about a CacheFS, but that might be enterprise-only; there's catfs in Rust, but it's alpha and can't work without the backing filesystem for metadata.

Did I just solve the packaging problem? (please feel free to tell me why I'm wrong)

You know what I just realised? These "universal formats" were created to make it easier for developers to package software for Linux, and there just so happens to be this thing called the Open Build Service by OpenSUSE, which allows you to package for Debian and Ubuntu (deb), Fedora and RHEL (rpm) and SUSE and OpenSUSE (also...

Max_P ,

The problem is that you can't just convert a deb to an rpm or whatever. Well, you can, and it usually does work, but not always. Tools for that have existed for a long time, and there are plenty of packages in the AUR that just repack a deb, usually proprietary software, sometimes with bundled hacks to make it run.

There's no guarantee that the libraries of a given distro are at all compatible with the ones of another. For example, Alpine and Void use musl while most others use glibc. These are not binary compatible at all. That deb will never run on Alpine, you need to recompile the whole thing against musl.

What makes a distro a distro is its choice of package manager, its way of handling dependencies, compile flags, package splitting, enabled feature sets, and so on. If everyone used the same binaries for compatibility we wouldn't have distros, we'd have a single distro, like Windows but open-source, where heaven forbid anyone dares switch the compiler flags so it runs 0.5% faster on their brand new CPU.

The Flatpak approach is really more like "fine, we'll just ship a whole Fedora-lite base system with the apps". Snaps are similar but use Ubuntu bases instead (obviously). It's solving a UX problem, using a particular solution, but it's not the solution. It's a nice tool to have so developers can ship a reference environment in which the software is known to run well, and users that just want it to work can use those. But the demand for native packages will never go away, and people will still do it for fun. That's the nature of open-source. It's what makes distros like NixOS, Void, Alpine and Gentoo possible: everyone can try a different way of doing things, for different use cases.

If we can even call it a "problem". It's my distro's job to package the software, not the developer's. That's how distros work, that's what they signed up for by making a distro. To take Alpine again for example, they compile all their packages against musl instead of glibc, and it works great for them. That shouldn't become the developer's problem to care what kind of libc their software is compiled against. Using a Flatpak in this case just bypasses Alpine and musl entirely because it's gonna use glibc from the Fedora base system layer. Are you really running Alpine and musl at that point?

And this is without even touching the different architectures. Some distros were faster to adopt ARM than others for example. Some people run desktop apps on PowerPC like old Macs. Fine you add those to the builds and now someone wants a RISC-V build, and a MIPS build.

There are just way too many possibilities to ever end up with a universal platform that fits everyone's needs. And that's fine; that's precisely why developers ship source code, not binaries.

Max_P ,

If you want FRP, why not just install FRP? From what it looks like, it even has a LuCI app to control it.

OpenWRT page showing the availability of FRP as an app

NGINX is also available, at a mere 1kb in size for the slim version; the full version is available too, as is HAProxy. Those will have you more than covered, and they support SSL.

Looks like there's also acme.sh support, with a matching LuCI app that can handle your SSL certificate situation as well.
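Pulling that whole stack in from the router's shell would look roughly like this; the exact package names are my assumption from the OpenWRT feeds, so check `opkg list` before trusting them:

```shell
# on the OpenWRT router itself; package names are assumptions,
# verify with e.g. `opkg list '*frp*'` first
opkg update
opkg install frpc luci-app-frpc     # FRP client plus its LuCI page
opkg install nginx-ssl              # or: haproxy
opkg install acme luci-app-acme     # acme.sh wrapper + LuCI app for certificates
```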

Max_P ,

No, but it does solve people not wanting to bother making an account for your effectively single-user self-hosted instance just to open a PR. I could have Forgejo or Gitea up and running in like 10 minutes, but who wants to make an account on my server? GitHub, though? Practically everyone has an account.

Max_P ,

There's been a general trend towards self-hosted GitLab instances in some projects:

Small projects tend to not want to spin up infrastructure, but on GitHub you know your code will still be there 10 years after you disappear. The same cannot be said of my Cogs instance and whatever was on it.

And overall, GitHub has been pretty good to users. No ads, free, pretty speedy, and a huge community of users that already have an account where they can just PR your repo. Nobody wants to make an account on some random dude's instance just to open a PR.

Max_P ,

The whole point is you can take the setup and maintenance time out of the equation, it's still not very appealing for the reasons outlined.

Max_P ,

Most VoIP providers have either an HTTP API you can hit and/or email to/from text.

Additionally, some carriers do offer an email address that can be used to send a text to one of their users but due to spam it's usually pretty restricted.

Max_P ,

Example of what?

VoIP provider: voip.ms

They support like 5 different ways to deal with SMS and MMS, there's options. https://wiki.voip.ms/article/SMS-MMS
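The HTTP API side of it looks roughly like this; the method and parameter names are from their wiki, so treat this as an unverified sketch with placeholder credentials and numbers:

```shell
# hypothetical values throughout; voip.ms also requires enabling the API
# and allow-listing your server's IP in their portal first
curl -s "https://voip.ms/api/v1/rest.php" \
  --data-urlencode "api_username=you@example.com" \
  --data-urlencode "api_password=secret" \
  --data-urlencode "method=sendSMS" \
  --data-urlencode "did=5551230000" \
  --data-urlencode "dst=5559870000" \
  --data-urlencode "message=hello from my server"
```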

Carrier that accepts texts by email: Bell Canada accepts emails at NUMBER@txt.bell.ca and delivers them as SMS or MMS to the number. Or at least they used to; I can't find current documentation about it, and that feels like something that would be way too exploitable for spam.

How much does it matter what type of harddisk i buy for my server?

Hello, I'm relatively new to self-hosting and recently started using Unraid, which I find fantastic! I'm now considering upgrading my storage capacity by purchasing either an 8TB or 10TB hard drive. I'm exploring both new and used options to find the best deal. However, I've noticed that prices vary based on the specific...

Max_P ,

The concern for the specific disk technology is usually around the use case. For example, surveillance drives you expect to be able to continuously write to 24/7 but not at crazy high speeds, maybe you can expect slow seek times or whatever. Gaming drives I would assume are disposable and just good value for storage size as you can just redownload your steam games. A NAS drive will be a little bit more expensive because it's assumed to be for backups and data storage.

That said in all cases if you use them with proper redundancy like RAIDZ or RAID1 (bleh) it's kind of whatever, you just replace them as they die. They'll all do the same, just not with quite the same performance profile.

Things you can check are seek times / latency, throughput both on sequential and random access, and estimated lifespan.

I keep hearing good things about decomissioned HGST enterprise drives on eBay, they're really cheap.

Max_P ,

I mean, OPs distro choice didn't help here:

EndeavourOS is an Arch-based distro that provides an Arch experience without the hassle of installing it manually for x86_64 machines. After installation, you’re provided with a lightweight and almost bare-bones environment ready to be explored with your terminal, along with our home-built Welcome App as a powerful guide to help you along.

If you want Arch with actual training wheels you probably want Manjaro or at least a SteamOS fork like Chimera/HoloISO.

It probably would have been much smoother with an actual beginner friendly distro like Nobara and Bazzite, or possibly Mint/Pop for a more classic desktop experience.

It's not perfect and still has its woes, but OP fell for Arch with a fancy graphical installer; it still comes with the expectation that the user can maintain an Arch install.

Max_P ,

EndeavourOS isn't a gaming distro, it's just an Arch installer with some defaults. It's still Arch and comes with Arch's woes. It's not a beginner-friendly, just-works kind of distro.

Coming from Kinoite, you'd probably want Bazzite if you want a gaming distro: it's also Fedora Atomic with all the gaming stuff added.

Predatory forcing of circular dependency?

I think ---DOCKER--- is doing this. I installed based, and userspace(7)-pilled liblxc and libvirt and then this asshole inserted a dependency when I tried to install from their Debian package with sudo dpkg -i. One of them was qemu-system, the other was docker-cli because they were forcing me to use Docker-Desktop, which I would...

Max_P ,

I don't have an answer as to what happened, I checked the script and it looks sane to me, it installs the docker-ce package which should be the open-source community version as one would expect.

Maybe check what the package depends on and see if it pulls in all of that. Even qemu is a bit weird, it makes sense for docker-machine but I expect that to be a different package anyway. I guess Docker Desktop probably does use it, that way they can make it work the same on all platforms which is kind of dumb to do on Linux.

But,

Why don't we all use LXC and ditch this piece of shit?

Try out Podman. It's mostly a drop-in replacement for Docker, daemonless, rootless and less magical.
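Since the CLI is compatible, switching is mostly a rename. A tiny sketch of the usual migration trick:

```shell
# podman accepts the same verbs and flags as docker, so scripts can just
# pick whichever engine is installed and keep their arguments unchanged
engine=$(command -v podman || command -v docker || echo none)
echo "container engine: $engine"
# the classic migration shortcut; every `docker run ...` keeps working:
#   alias docker=podman
```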

Max_P ,

I must be lucky, works just fine for me with SDDM configured for Wayland only, autologin to a Wayland session.

max-p@media ~ % cat /etc/sddm.conf
[Autologin]
User=max-p
Session=plasma
#Session=plasma-bigscreen
Relogin=true

[General]
DisplayServer=wayland
Max_P ,

Arch. That leads me to believe it's possibly a configuration issue. Mine is pretty barebones, it's literally just that one file.

AFAIK the ones in sddm.conf.d are useful because the GUI can focus on just one file without nuking other users' configurations. But they all get loaded, so it shouldn't matter.

The linked bug report seems to blame PAM modules, kwallet in particular which I don't think I've got configured for unlock at login since there's no password to that account in the first place.

[Thread, post or comment was deleted by the author]

  • Max_P ,

    ActivityPub makes this impossible. Everything on the fediverse is completely public, including votes, subscriptions and usernames. Even if Lemmy did offer the option, other servers wouldn't necessarily.

    And honestly this is a system that would be mainly used for spam and hate speech anyway. Just make a throwaway like everywhere else.

    Max_P ,

    Kbin is an example. Due to the nature of the protocol it has to be stored somewhere anyway, but Lemmy also just lets admins view all the individual votes directly in the UI.

    Max_P ,

    Still report as well, it sends emails to the mods and the admins. Just make sure it's identifiable at a glance, like just type "CSAM" or whatever 1-2 words makes sense. You can add details after to explain but it needs to be obvious at a glance, and also mods/admins can send those to a special priority inbox to address it as fast as possible. Having those reports show up directly in Lemmy makes it quicker to action or do bulk actions when there's a lot of spam.

    It's also good to report it directly into the Lemmy admin chat on Matrix as well afterwards, because in case of CSAM, everyone wants to delete it from their instance ASAP in case it takes time for the originating instance to delete it.

    Max_P ,

    That's fine to do once you've reported it: you've done your part, and there's no value in still seeing the post; it's gonna get removed anyway.

    Self-hosted website for posting web novel/fiction

    Hey hello, self-hosting noob here. I just want to know if anyone would know a good way to host my writing. Something akin to those webcomic sites, except for writing. Multiple stories with their own "sections" (?) and a chapter selection for each. Maybe a home page or profile page to just briefly detail myself or whatever, I...

    Max_P ,

    Wordpress or some of its alternatives would probably work well for this. Another alternative would be static site generators, where you pretty much just write the content in Markdown.

    It's also a pretty simple project, and a great way to learn basic web development as well.

    Max_P ,

    I route through my server or my home router when using public WiFi and stuff. I don't care too much about the privacy aspect, my real identity is attached to my server and domain anyway. I even have rDNS configured, there's no hiding who the IP belongs to.

    That said, server providers are much less likely to analyze your traffic because that'd be a big no-no for a lot of companies using those servers. And of course any given request may actually be from any of Lemmy, Mastodon, IRC bots or Matrix, so pings to weird sites can result entirely from someone posting that link somewhere.

    And it does have the advantage that if you try to DDoS that IP you'll be very unsuccessful.

    Max_P ,

    I can definitely see the improvement, even just between my desktop monitor (27in 1440p) and the same resolution at 16 inches on my laptop. Text is very nice and sharp. I'm definitely looking at 4K or even 5K next monitor upgrade cycle.

    But the improvement is nowhere near how much of an upgrade 480p to 1080p was, or moving away from CRTs to flat screens. 1080p was a huge thing when I was in high school, as CRT TVs were being phased out in favor of those new flat TVs.

    For media I think 1080p is good enough. I've never gone "shit, I only downloaded the 1080p version". I like 4K when I can have it like on YouTube and Netflix, but 1080p is still a quite respectable resolution otherwise. The main reason to go higher resolutions for me is text. I'm happy with FSR to upscale the games from 1080p to 1440p for slightly better FPS.

    HDR is interesting and might be what convinces people to upgrade from 1080p. On a good TV it feels like more of an upgrade than 4K does.

    Max_P ,

    I've actually run into some of those problems. If you run sudo su --login someuser, it's still part of your user's process group and session. With run0, that would actually give you a shell equivalent to logging in locally: user units get managed, and all the PAM modules run.

    systemd-run can do a lot of stuff, basically anything you can possibly do in a systemd unit, which is basically every property you can set on a process. Processor affinity, memory limits, cgroups, capabilities, NUMA node binding, namespaces, everything.
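As a sketch of what that looks like in practice (the backup script and the User= value are hypothetical, but the -p properties are regular systemd unit settings):

```shell
# run a one-off command with resource limits attached, no unit file needed;
# requires a live systemd, so this is illustrative rather than copy-paste
systemd-run --pty \
  -p MemoryMax=512M \
  -p CPUQuota=50% \
  -p CPUAffinity=0-3 \
  -p User=backup \
  /usr/local/bin/nightly-backup.sh
```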

    I'm not sure I would adopt run0 as my go-to, since if D-Bus is hosed you're really locked out and stuck. But it's got its uses, and it's just a symlink, so it's basically free; its existence is kBs of bloat at most. There's always good ol' su when you're really stuck.

    Max_P ,

    Basically, the SUID bit makes a program get the permissions of the owner when executed. If you set /bin/bash as SUID, suddenly every bash shell would be a root shell, kind of. Processes on Linux have a real user ID, an effective user ID, and also a saved user ID that can be used to temporarily drop privileges and gain them back again later.
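The bit itself is easy to see on any file; a minimal demo with a scratch file standing in for a binary like sudo:

```shell
# the SUID bit in action: with mode 4755 the owner-execute 'x' becomes 's',
# meaning the program would run with the file owner's effective UID
f=$(mktemp)
chmod 755 "$f";  stat -c '%A' "$f"   # -rwxr-xr-x  (normal executable)
chmod 4755 "$f"; stat -c '%A' "$f"   # -rwsr-xr-x  (SUID set)
rm -f "$f"
```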

    So tools like sudo and doas use this mechanism to temporarily become root, then run checks to make sure you're allowed to use sudo, then run your command. But that process is still in your user's session and process group, and your user is still its real user ID. If anything goes wrong between sudo becoming root and checking permissions, that can lead to a root shell you weren't supposed to get, and you have a root exploit. Sudo is entirely responsible for cleaning the environment before launching the child process so that it's safe.

    Run0/systemd-run acts more like an API client. The client, running as your user, asks systemd to create a process and give you its inputs and outputs; systemd then creates it on your behalf in a clean process tree completely separate from your user session's process tree and group. The client never gets permissions and never has to check for them; it's systemd that does, over D-Bus through PolKit, which are both isolated and unprivileged services. So there's no dangerous code running anywhere to exploit to gain privileges. And it makes run0 very non-special and boring in the process; it really does practically nothing. Want to make your own in Python? You can, safely and quite easily. Any app can integrate sudo functionality fairly safely, and it'll even trigger the DE's elevated permission prompt, which is a separate process, so you can grant sudo access to an app without it being able to know your password.

    Run0 takes care of interpreting what you want to do, D-Bus passes the message around, PolKit adds its stamp of approval to it, systemd takes care of spawning of the process and only the spawning of the process. Every bit does its job in isolation from the others so it's hard to exploit.

    Max_P ,

    I haven't had D-Bus problems in quite a while but actually run0 should help with some of those issues. Like, systemctl --user will actually work when used with run0, or at least systemd-run can.

    Haven't used it yet so it's all theoretical, but it makes sense to me especially at work. I've used systemd-run to run processes in very precise contexts, it's worth using even if just to smush together schedtool, numactl, nice, taskset and sudo in one command and one syntax. Anything a systemd unit can do, systemd-run and run0 can do as well.

    I'm definitely going to keep su around just in case, because I will break it the same way I've broken sudo a few times, but I might give it a shot and see if it's any good, just for funsies.

    Just trying to explain what it does and what it can do as accurately as possible, because out of context "systemd adds sudo clone" people immediately jump to conclusions. It might not be the best idea in the end but it's also worth exploring.

    Max_P ,

    Some executables are special. When you run them, they automagically run as root instead! But if sudo isn't very, very careful, you can trick it into letting you run things as root that you shouldn't be able to.

    Run0 DM's systemd asking it to go fork a process as root for you, and serves as the middleman between you and the other process.

    Max_P , (edited )

    If you dig deeper into systemd, it's not all that far off the Unix philosophy either. Some people seem to think the entirety of systemd runs as PID1, but it really only spawns and tracks processes. Most systemd components are separate processes that focus on their own thing, like journald and log management. It's kinda nice that they all work very similarly, it makes for a nice clean integrated experience.

    Just because it all lives in one repo doesn't mean it makes one big fat binary that runs as PID1 and does everything.

    Max_P ,

    Yeah, even Asahi has better OpenGL support than real macOS. They make damn sure you have to use Metal to get the most out of it, just like eventually you get caught up in DirectX on Windows whether you want it or not. You can use Vulkan and OpenGL, but the OS really wants to work with Metal/DirectX buffers in the end.

    I appreciate that the devs care enough to make it really good from the start, because that sets the benchmark. Now the Linux version has to have a similar enough polish to it.

    In comparison, Atom and VSCode both worked fine on Linux just about day one thanks to Electron, but it was also widely disliked for the poor performance. It's a part of what Zed competes on, performance compared to VSCode.

    Max_P ,

    The guy that manages Kbin has been having personal issues and stepped away from the fediverse, so yeah, Kbin is kind of in limbo at the moment and indeed not well moderated. There are mods, but there's only so much they can do. The software doesn't federate the deletions, so even if they're gone on Kbin, they remain everywhere else.

    Max_P , (edited )

    On my computer that'd unmount my home directory, my external storage, my scratch space and my backup storage, and my NAS.

    It would also unmount /sys and /proc and /tmp and /run. Things can get weird fast without those, for example that's where the Xorg/Wayland socket is located.

    If all you have is home and root on the same partition I guess it's not too bad because it's guaranteed to be in use so it won't let you, but still, I wouldn't do that to save like 5 keystrokes in a terminal.

    Max_P ,

    Fair enough, TIL. I've used mount -a a fair bit, but unmounting the world is not something that crossed my mind to even attempt. It would still unmount a good dozen ZFS datasets for me.

    Good example with the Snaps! Corrected my post.

    Max_P ,

    And using loads of sensitive permissions to pull it off, like accessibility to read the screen. It's not stealing the auth cookies from the app nor throwing exploits at Android to escape the sandbox.

    Headline definitely makes it sound like it's a drive-by exploit, but no it's just the usual social engineering everyone is familiar with.

    Max_P ,

    It's kernel level anticheat, it can do whatever it wants. It's on the same level as the operating system.

    Realistically? Nobody's gonna bundle Linux filesystem drivers in malware just in case. If someone is to exploit Vanguard for malware I'd expect a credentials stealer to take your Steam and Discord accounts. Ransomware would likely spread to the NAS but that can be mitigated with readonly permissions where appropriate, and backups/shadow copies.

    Max_P ,

    Docker would still go through the kernel for the mount, that's one of the few things Docker can't do because it's the same kernel as the host.

    That said I doubt it's been removed from the kernel, only the Samba server. OP is a client.

    Max_P ,

    Definitely very subjective. People keep saying macOS has amazing font rendering but for me it just looks like a blurry mess, especially on non-retina displays. My fonts are set to be as sharp as possible on Linux because when coding and in the terminal I want very sharp fonts so they're easier to read for me.

    Seconding the dependence on the particular font as well. Cantarell, Ubuntu and OpenSans are all fairly blurry regardless, unless seen on HiDPI screens in which case they do look more like macOS. DejaVu Sans can be very sharp in contrast at very low resolutions because it's been made in the 800x600 and 1024x768 days and optimized to look sharp when small.

    Max_P ,

    There's a problem with federated previews: tricking one instance into generating the wrong preview would spread to every instance. It's been exploited for malware and scam campaigns in messaging apps.

    Max_P ,

    Even without Cloudflare, simple NGINX microcaching would help a ton there.

    It's a blog, it doesn't need to regenerate a new page every single time for anonymous users. There's no reason it shouldn't be able to sustain 20k requests per second on a single server. Even a one second cache on the backend for anonymous users would help a ton there.
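A minimal sketch of what that microcache could look like in NGINX (paths, zone name and backend port are made up; the directives are the standard proxy-cache ones):

```nginx
# 1-second microcache: caps backend work at roughly one render per URL per second
proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m max_size=100m;

server {
    listen 80;
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;         # even one second absorbs a frontpage spike
        proxy_cache_use_stale updating;   # serve stale while a single refresh runs
        proxy_cache_lock on;              # collapse concurrent misses into one upstream hit
        proxy_pass http://127.0.0.1:8080; # hypothetical blog backend
    }
}
```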

    They have Cloudflare in front; the site should stay up even with the server turned off entirely.

    Max_P ,

    Masquerading a normal looking link for another one, usually phishing, malware, clones loaded with ads.

    Like, lets say I post something like

    https://www.google.com

    And also have my instance intercept it to provide Google's embed preview image, and it federates that with other instances.

    Now, for everyone it would look like a Google link, but you get Microsoft Google instead.

    I could also actually post a genuine Google link but make the preview go somewhere else completely, so people may see that the link goes where they expect even when hovering over it, but then they end up clicking the preview for whatever reason. Bam, wrong site. It could also be a YouTube link and embed where the embed shows a completely different preview image; you click on it and get some gore or porn instead. Fake headlines, the Cyrillic alphabet, whatever way you can think of to abuse this.

    People trust those previews in a way, so if you post a shortened link but it previews like a news article you want to go to, you might click the image or headline but end up on a phony clone of the site loaded with malware. Currently, if you trust your instance you can actually trust the embed because it's generated by your instance.

    On iMessage, it used to be that the sender would send the embed metadata, so it was used for a zero-click exploit: sending an embed of a real site but with an attachment that exploited the codec it would be rendered with.

    Max_P ,

    That's a good feature, politics on social media is just a cesspool these days. There's nothing worth seeing because there's no political discussion, just name calling, climate change denialism, racism and transphobia.

    Meta's well aware that everyone's tired of that uncle that just won't shut up.

    Max_P ,

    Some of those keys are public knowledge and only serve to identify which client it is (Chromium, Firefox, Safari probably), or were otherwise stolen from one of those. This is a Safe Browsing API key; it's used to check if sites have been marked as phishing/scam/etc. and to warn users that the site is known to be malicious. Others are used to tie analytics or ads to the app, so it goes into the right developer's account metrics.

    I wouldn't call those leaked, they're meant to be embedded into apps and aren't considered as secret keys.

    It's common practice to use API keys like that even if they're not so secret, just for the sake of tracking which app is making what requests and so people can't just openly use the API. You can easily shut down unapproved clients by just rolling out a new key, and it causes an annoying whack-a-mole game to constantly have to extract them from an APK.

    Max_P ,

    https://github.com/googleads/googleads-mobile-flutter/issues/622

    It looks like it used to be bundled as part of binaries shipped by Google with the Google Ads SDK so that'd be why it's not exactly documented. Developers just bundle it in their app and presto, ads are displayed.

    I'd be skeptical of scanners just spewing "security vulnerabilities". That malware report is of very poor quality, and they incorrectly identified this key as an API key leak with no idea what it is or what it does, because it's not relevant. It's also claiming it's downloading files... using private IP addresses in the 10.0.0.0/8 range? Nonsense. That report is a lame one, made to pad their portfolio as "security researchers".

    Max_P ,

    I'll never understand the people that fake these kinds of things. Fake watches, fake followers, fake views, fake likes, fake jobs. Why?

    What's attractive about likes and views anyway? Why would I care that my date has 0 followers or a million followers? If anything it means they'll constantly be busy streaming.

    Max_P ,

    There's always the command escape hatch. Ultimately the roles you'll use will probably do the same. Even a plugin would do the same: all the ZFS tooling eventually shells out to the zfs/zpool binaries, and it's probably the same with btrfs. Those are just very complex filesystems; it would be unreliable to reimplement them in Python.

    We use tools to solve problems, not make it harder for no reason. That's why command/shell actions exist: sometimes it's just better to go that way.

    You can always make your own plugin for it, but you're still just writing extra code to eventually still shell out into the commands and parse their output.

    Max_P ,

    It could be a disk slowly failing but not throwing errors yet. Some drives really do their best to hide that they're failing, so I would take even a passing SMART test with a grain of salt.

    I would start by making sure you have good recent backups ASAP.

    You can test the drive performance by shutting down all VMs and using tools like fio to do some disk benchmarking. It could be a VM causing it. If it's an HDD in particular, the random reads and writes from VMs can really cause seek latency to shoot way up. Could be as simple as a service logging some warnings due to junk incoming traffic, or an update that added some more info logs, etc.
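If fio isn't handy, even plain dd gives a rough sequential number to compare against a healthy drive (fio is still the better tool for random I/O and latency percentiles):

```shell
# quick-and-dirty sequential write check; the dd summary line includes MB/s
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$f"
```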

    Why is replacement for home device controls so complicated?

    I recently learned about Home Assistant here on Lemmy. It looks like a replacement for Google Home, etc. However, it requires an entire hardware installation. Proprietary products just use a simple app to manage and control devices, so can someone explain why a pretty robust dedicated device is necessary as a replacement? The...

    Max_P ,

    Even then, those requirements are easily satisfied by a Raspberry Pi and most other SBCs out there. Seems rather reasonable to dedicate one to HA. It's not too crazy when you take into consideration how powerful cheapo hardware can be these days.

    Creating a self-contained binary ( www.github.com )

    I have a program (fldigi, pointed to by the github link) that uses dozens of shared libraries. I would like to be able to distribute a pre-compiled version of the program for testers. I could require each tester to install the shared libraries and compile the program for themselves, however, this would be extremely difficult...

    Max_P ,

    Seems like one of the few good use cases for AppImages.

    Max_P ,

    At this point every new Android feature people get excited about is stuff we had even in the CyanogenMod days. But gotta generate hype so people get FOMO and buy new devices, maximizing ewaste.

    Max_P ,

    A bigger desk so I can just roll the chair a few inches to switch to the work laptop.

    My original plan was a keyboard/mouse only KVM, probably a Teensy or a RPi or something of the sorts. But I got lazy as the extra desk space has just made it a non-issue for me. I also have a Logitech mouse that can switch between devices, so if I was going to really need that setup I'd probably just get the matching keyboard.
