conorab

@conorab@lemmy.conorab.com

conorab ,

Steam hardware survey, but that will skew towards gamers. That said, it would be a good indicator of how compatible Wayland is.

conorab ,

Absolutely!

conorab ,

I wish we had something like temporary/alias e-mail addresses for physical addresses. So when you go to ship something, you provide a shipping alias; the shipping company derives the true address from it and ships the item. The moment the true address is revealed, the alias expires and can no longer be used. This way only the shipping company gets to know your real address, and that is ideally discarded once the order has been completed. So forward shipping without the extra step.
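A toy sketch of how that could work (every name here is made up for illustration): the carrier keeps an alias table, and resolving an alias to the real address also expires it:

```python
# Toy sketch of a carrier-side alias registry. Resolving an alias
# reveals the real address exactly once, then the alias expires.
import secrets


class AliasRegistry:
    def __init__(self):
        self._aliases = {}  # alias token -> real address

    def create_alias(self, real_address):
        """Customer registers their address; the token goes to the seller."""
        token = secrets.token_urlsafe(8)
        self._aliases[token] = real_address
        return token

    def resolve(self, token):
        """Carrier resolves the alias at shipping time; it then expires."""
        return self._aliases.pop(token)  # KeyError if expired/unknown


registry = AliasRegistry()
alias = registry.create_alias("1 Example St, Springfield")
print(registry.resolve(alias))  # only the carrier ever sees this
try:
    registry.resolve(alias)  # second use fails: the alias has expired
except KeyError:
    print("alias expired")
```

In practice you would also want the registry to expire unused aliases after a deadline, but the one-shot `pop` is the core of the idea.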

conorab ,

My preference would be for WHOIS data to be private unless the owner wants to reveal who they are. I do think it makes sense to require the owner to provide that information to the registrar so it can be obtained by the courts if needed.

conorab ,

If sellers can prove that they never touch a customer’s home address, they’re less exposed to data breaches, which might look good to insurance companies.

Honestly, this sounds like something a shipping company could provide. When you go to use PayPal, for example, you get redirected to their site, put in your details, and they complete the transaction without the seller knowing your financial data. The same could be done with shipping.

conorab ,

This is why people say not to use USB for permanent storage. But, to answer the question:

  • From memory, “nofail” means the machine continues to boot if the drive does not show up, which explains why it’s showing up as 100 GB: you’re seeing the size of the disk mounted to /.
  • If the only purpose of these drives is to be passed through to Open Media Vault, why not pass them through as USB devices? At least that way only OMV will fail and not the whole host.
  • Why USB? Can the drives be shucked and connected directly to the host, or do they use a proprietary connector on the drive itself that prevents that?
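For reference, a hypothetical /etc/fstab entry with that behaviour (the UUID and mount point are placeholders):

```
# Boot continues even if this drive is absent; give up waiting after 10s.
UUID=0000-0000  /mnt/usb-disk  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```

Without nofail, a missing drive can drop the boot into an emergency shell instead of continuing.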
conorab ,

It does have to do with being a walled platform though. You as the Discord server owner have zero control over whether or not you are taken down. If this was a Lemmy or Discourse server (to go with something a little closer to a walled garden) that they ran, the hosting provider or a court would have to take them down. Even then, the hosting provider wouldn’t be a huge deal, since you could just restore a backup to a new one, Pirate Bay style. Hell, depending on whether or not the devs are anonymous (probably not if they used Discord), they could just move the server to a new jurisdiction that doesn’t care. The IW4 mod for MW2 2009 was forked and then moved to Tor when Activision came running for them, so this isn’t even unprecedented.

conorab ,

That’s not really fair on Discord. The article mentions they received an injunction to remove the content so they were forced to do this. Anybody in the same jurisdiction would have to do the same:

“Discord responds to and complies with all legal and valid Digital Millennium Copyright Act requests. In this instance, there was also a court ordered injunction for the takedown of these materials, and we took action in a manner consistent with the court order,” reads part of a statement from Discord director of product communications Kellyn Slone to The Verge.

conorab ,

UEFI or legacy BIOS? I recently installed Windows 11 on a machine with Proxmox on NVME but installed Windows on a SATA SSD. Windows added its boot entry to the NVME SSD but did not get rid of the Proxmox boot entry.

I’ve definitely had the same issue as you in the past on legacy BIOS, and when I worked in a computer shop in 2014–2015 we always removed any extra drives before installing Windows to avoid this issue (not like the other drives had an OS anyway).

conorab ,

It’s a gaming machine. I mainly use a gaming VM with GPU passthrough under Proxmox, but the anti-cheat in some games (Fortnite and The Finals) doesn’t allow you to run them in VMs. So I run those games in Windows directly under a standard user account as a compromise.

conorab ,

I kinda get it. The host has complete access to VM memory and can manipulate it without detection. Both of those games are free to play as well so cheating is more of an issue. I have no idea what Back4Blood’s justification would be though.

That said, it’s a PITA, and given the massive attack surface of Easy Anti-Cheat, it becomes easier to justify running games in VMs, where you can isolate things and use snapshots if there is ever a breach.

conorab ,

Wait… so the author displayed in “by <author>” is the supposed author of the software, not the one that put it on the store? That’s insane! It also sounds like the store would be open to massive liability, since the reputation of the software author will be damaged if somebody publishes malware under their name.

It should be:

  • Developed by: <author of software>
  • Uploaded by: <entity who uploaded to store>

Is anyone else having trouble with DuckDNS today?

I self-host some tools and use DuckDNS for name resolution, but it does not seem to be working today. My ISP updated some things on their end last night and I updated my local Unbound server, so I’m not sure which, if either, of these is also playing a role, but I can’t seem to access DuckDNS. I have tried to connect to it on my phone...

conorab ,

Out of curiosity, why use a forwarder if you run your own DNS? Why not handle resolutions yourself?

conorab ,

Oh I completely misunderstood! I thought it was a forwarder, not dynamic DNS. My bad! Makes total sense!

conorab ,

Soooo…. the work of self-hosting with none of the benefits? It sounds like this has all the core problems of Twitter.

conorab ,

The APs know who the Wi-Fi clients are and just drop traffic between them. This is called client/station isolation. It’s often used in corporate environments to 1) prevent wireless clients from attacking each other (students, guests) and 2) prevent broadcast and multicast packets from wasting all your airtime. This has the downside of breaking AirPlay, AirPrint and any other services where devices are expected to talk to each other directly.
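If the AP happens to run hostapd, this is a single setting (a sketch; vendor UIs usually call it something like “client isolation” or “AP isolation”):

```
# hostapd.conf fragment: drop traffic between associated stations
ap_isolate=1
```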

conorab ,

The more SSIDs being broadcast, the more airtime is wasted broadcasting them. SSIDs are also broadcast at a much lower speed, so even though it’s a trivial amount of data, it takes longer to send. You ideally want as few SSIDs as possible, but sometimes it’s unavoidable, like if you have an open guest network, or multiple authentication types used for different SSIDs.

conorab ,

When buying disks, do some research on the exact model to ensure they are not SMR drives if you plan on using them in RAID. Some manufacturers will not tell you if they are SMR drives, and this can do anything from tanking write performance to making the RAID reject the drive entirely.

See: https://arstechnica.com/gadgets/2020/04/caveat-emptor-smr-disks-are-being-submarined-into-unexpected-channels/

vSphere+Debian+KDE Plasma=Crash?

I needed a test VM at work the other day, so I just went with Debian because why not; during the install I chose KDE Plasma as the DE. I did nothing else with it after installing it, and after leaving it alone for a while (somewhere between 20–60 minutes) the CPU usage shot up to the point vSphere sent out an alert and the VM was...

conorab ,

Seconding the RAM issue possibility. If you can, shut down the host and run a memory test over a few days to see if it trips. Memory tests can take days to trip in some circumstances; in others, it’s immediate.

conorab ,

Also holy shit: Club Penguin! This kind of thing has been around forever!

conorab ,

Separate DB container for each service. Three main reasons:

  1. If one service requires special configuration that affects the whole DB container, it won’t cross over to the other service which uses that DB container and potentially cause issues.
  2. You can keep the version of one of the DB containers back if there is an incompatibility between a newer version of the DB and one of the services that rely on it.
  3. You can roll back the dataset for the DB container in the event of a screwup or bad service update (e.g. Lemmy) without affecting other services.

In general, I’d recommend only sharing a DB container if you have special DB tuning in place or if the services which use that DB container are interdependent.
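A rough docker-compose sketch of that pattern (service and volume names are made up), where each service gets its own Postgres container that can be pinned and rolled back independently:

```yaml
services:
  lemmy-db:
    image: postgres:16
    volumes: ["lemmy-db:/var/lib/postgresql/data"]
    environment:
      POSTGRES_PASSWORD: change-me
  nextcloud-db:
    image: postgres:15   # held back until the service supports 16
    volumes: ["nextcloud-db:/var/lib/postgresql/data"]
    environment:
      POSTGRES_PASSWORD: change-me
volumes:
  lemmy-db:
  nextcloud-db:
```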

conorab ,

Good! You wanna automate away a human task, sure! But if your automation screws up you don’t get to hide behind it. You still chose to use the automation in the first place.

Hell, I’ve heard ISPs here work around the rep on the phone overpromising by literally having the rep transfer the customer to an automated system that reads the agreement and has them agree to it, with an explicit note saying that everything said before is irrelevant; then, once done, it transfers back to the rep.

conorab ,

Run one at home/in the lab to learn AD; it also gives you a place to test out ideas before pushing to production. You may be able to run a legit AD server with licensing on AWS or similar if they have a free tier.

conorab ,

Damn! Using .af for an LGBT+ site is insane! The country could have redirected the domain to its own servers and started learning the personal details of those on the site, who I imagine wouldn’t be terribly thrilled having an anti-LGBT+ government learn their personal information (namely information not displayed publicly). Specifically, they could put their own servers in front of the domain so they can decrypt the traffic, then forward it on to the legitimate servers, allowing them to collect login information and any other data which users send or receive.

conorab ,

The term you may be looking for is “woozle”. :)

conorab ,

I used to have all VMs on my QEMU/KVM server on their own /30 routed networks to prevent spoofing. It essentially guaranteed that a compromised VM couldn’t give itself the IP of, say, my web server and start collecting login creds. Managing the IP space got painful quickly.

conorab ,

People who do not wish to buy a gTLD domain can use home.arpa, as it is already reserved. If you are at the point of setting up your own DNS but cannot afford $15 a year AND cannot use home.arpa, I’d be questioning your purchasing decisions. Hell, you can always use sub-domains of home.arpa if you need multiple unique namespaces on a single private network.
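As an example, an Unbound sketch serving home.arpa locally (the hostnames and RFC 1918 addresses are placeholders):

```
# unbound.conf fragment
server:
    local-zone: "home.arpa." static
    local-data: "nas.home.arpa. IN A 192.168.1.10"
    local-data: "printer.lab.home.arpa. IN A 192.168.2.20"
```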

Basically, if you’re a business in a developed (or maybe even developing) country, you can afford the domain, and would probably spend more on IT hours working around non-gTLD names than the $15 a year.

conorab ,

A good move!

I’m surprised they didn’t codify “.lan” though since that one is so prevalent.

conorab ,

Buying your own domain often includes DNS hosting, but that’s not really the point unless all you’re doing is exclusively running an externally-facing website or e-mail. The main reason for buying a domain is so everybody else recognises that you control that namespace. As a bonus, it means you can get globally-recognised SSL certificates, which means you no longer have to manage your own CA and add its root to all the devices which wish to access your services securely. It’s also worth noting that you cannot rely on external DNS servers for entries that point to private IPs, because some DNS servers block that.

conorab ,

Some servers blacklist you no matter what you do because you’re not a big player in the e-mail space… Outlook. Fuck Outlook. M365 doesn’t do that though.

Also, the idea that reverse DNS (PTR) records are needed in practice when SPF, DKIM and DMARC are in use is insane. I have literally told you my public key and signed the e-mail. It’s me. You don’t need to check the damn PTR!

conorab ,

If your domain will NEVER send e-mail, you only really need an SPF record to tell other servers to drop e-mail claiming to be FROM your domain. Even that’s somewhat optional. If you ever plan on sending ANY outbound mail (you should, at the very least for the occasional ticket), then do DKIM, DMARC and SPF. The more of these you do, the less likely e-mails FROM your domain are to be flagged as spam.
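As a sketch, the records for a never-sends domain look like this (example.com as a placeholder; the null MX is RFC 7505’s way of saying “no inbound mail either”):

```
example.com.         IN TXT  "v=spf1 -all"
example.com.         IN MX   0 .
_dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; sp=reject"
```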

conorab ,

I feel like there’s more to your question but here goes with the starter answer: install https://github.com/LizardByte/Sunshine on the computer which is running the game and https://github.com/moonlight-stream/moonlight-qt on the machine which will receive the game stream. I have Sunshine installed in a VMware Fusion VM running Windows which I stream to the host Mac since Discord doesn’t let you screenshare VMs with sound otherwise. I have also used Moonlight on my Mac to stream games from a cloud machine on https://airgpu.com but only played with it a tiny bit as a substitute for running my own game streaming machine in AWS or for some games that aren’t on GeForce NOW.

conorab ,

Ooh GoW looks quite neat!

conorab ,

I don’t have a problem with training on copyrighted content provided 1) a person could access that content and use it as the basis of their own art and 2) the derived work would also not infringe on copyright. In other words, if the training data is available for a person to learn from, and a person could make the same content an AI would and have it be allowed, then the AI should be allowed to do the same. An AI should not (as an example) be allowed to simply reproduce a bit-for-bit copy of its training data (provided it wasn’t something trivial that would not be protected under copyright anyway). The same is true for a person.

This leaves some protections in place. If a person made content and released it to a private audience which is not permitted to redistribute it, then an AI would only be allowed to train on it if it obtained that content with permission in the first place, just like a person. Obtaining it through a third party would not be allowed, as that third party did not have permission to redistribute it. This means that an AI should not be allowed to use a work unless it at minimum had licence to view the work. I don’t think you should be able to restrict your work from being used as training data beyond disallowing viewing entirely, though.

I’m open to arguments against this though. My general concern is copyright already allows for substantial restrictions on how you use a work that seem unfair, such as Microsoft disallowing the use of Windows Home and Pro on headless machines/as servers.

With all this said, I think we need to be ready to support those who lose their jobs from this. Losing your job should never be a game over scenario (loss of housing, medical, housing loans, potentially car loans provided you didn’t buy something like a mansion or luxury car).

conorab ,

Other comments have hit this, but one reason is simply to be an extra layer. You won’t always know what software is listening for connections. There are obvious ones like web servers, but less obvious ones like Skype. By rejecting all incoming traffic by default and only allowing things explicitly, you avoid the scenario where you leave something listening by accident.
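As a sketch, a default-deny nftables ruleset along those lines (the allowed ports are just examples):

```
# /etc/nftables.conf: drop everything inbound unless explicitly allowed
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept  # replies to traffic we started
        iif "lo" accept                      # loopback
        tcp dport { 22, 443 } accept         # only what you chose to expose
    }
}
```

Anything you forgot you left listening is simply unreachable until you add a rule for it.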

conorab ,

Can’t even get the ISO anymore. 😭

conorab ,

To be fair, you can probably find it on Archive.org. It would be kinda neat if somebody made MaymayOS that just had theme packs for all the other meme distros to keep them alive.

ajayiyer , to Linux
@ajayiyer@mastodon.social

Gentle reminder to everyone that support for Windows 10 ends in about 90 weeks. Many computers can't upgrade to Win 11, so here are your options:

  1. Continue on Win 10 but with higher security risks.
  2. Buy new and expensive hardware that supports Win 11.
  3. Try a beginner-friendly distro like . It only takes about two months to acclimate.

@nixCraft @linux @windowscentralbot

conorab ,

Another option may be to use the Windows Server 2022 Eval. You may run into problems with software refusing to run on a server though. The initial eval lasts 180 days, but you can run a command to extend that 5 times (don’t quote me on the exact number), which will give you an updated system for years to come.
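If memory serves, the commands involved are slmgr’s rearm options (run from an elevated prompt; again, don’t quote me on the exact remaining count):

```
:: Check how many rearms remain (look for "Remaining Windows rearm count"):
slmgr /dlv
:: Reset the evaluation timer, then reboot:
slmgr /rearm
```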

Work inside the machine of the music industry: How pre-saves and algorithmic marketing turn musicians into influencers ( algorithmwatch.org )

Streaming platforms allow users to add upcoming tracks to their playlists, in order to listen to them as soon as they are released. While this sounds harmless, it changed the habits of independent musicians, who feel they have to adapt to yet another algorithm.

conorab ,

We’re going to hold this song back from you and ask for a bunch of your details so you can listen to it once we’ve generated some extra hype. Pretty cool huh?!

conorab ,

Sounds like you just volunteered to post Linux news and related content!

conorab ,

The article seems to indicate they are using AI to reduce the amount of work that translators have to do in writing prompts, but still have translators review what the AI spits out. I think that’s different to SuperDuo, which I believe is meant to use AI to be more conversational.

How often do you back up?

I was wondering how often does one choose to make and keep back ups. I know that “It depends on your business needs”, but that is rather vague and unsatisfying, so I was hoping to hear some heuristics from the community. Like say I had a workstation/desktop that is acting as a server at a shop (taking inventory / sales...

conorab ,
  • Personal and business are extremely different. In personal, you back up to defend against your own screwups, ransomware and hardware failure. You are much more likely to predict what is changing most and what is most important, so it’s easier to know exactly what needs hourly backups and what needs monthly backups. In business you protect against everything in personal + other people’s screwups and malicious users.
  • If you had to set up backups for business without any further details: 7 daily, 4 weekly, 12 monthly (or as many as you can). You really should discuss this with the affected people though.
  • If you had to set up backups for personal (and not more than a few users): 7 daily, 1 monthly, 1 yearly.
  • Keep as much as you can handle if you have already paid for the backup capacity (on-site hardware and fixed-cost remote backups). There’s no point having several terabytes of free backup space, though keeping more does mean more wear on the hardware.
  • How much time are you willing to lose? If you lost 1 hour of game saves, or 1 hour of the office’s work and therefore 1 hour of labour for you or the whole office, would it be OK? The “whole office” part is quite unlikely, especially if you set up permissions to reduce the amount of damage people can do. It’s most likely to be 1 file or folder.
  • You generally don’t need to keep hourly snapshots for more than a couple of days, since if it’s important enough to need the last hour’s copy, it will probably be noticed within 2 days. Hourly snapshots can also be very expensive.
  • You almost always want daily snapshots for a week. If you can hold them for longer, then do it, since they are useful for restoring screwups that went unnoticed for a while and are very useful for auditing. However, keeping a lot of daily snapshots in a high-churn environment gets expensive quickly, especially when backing up Windows VMs.
  • Weekly and monthly snapshots largely cover auditing and malicious users, where something was deleted or changed and nobody noticed for a long time. Prioritise keeping daily snapshots over weekly snapshots, and weekly snapshots over monthly snapshots.
  • Yearly snapshots are more for archival and for restoring that folder which nobody touched forever and which was deleted to save space.
  • The numbers above assume a backup system which keeps anything older than 1 month in full (a total duplicate), and maybe even anything older than a week. This is generally done in case of corruption. Keeping daily snapshots for 1 year as increments is very cheap, but you risk losing everything to bitrot. If you are depending on incrementals for long periods of time, you need regular scrubs and redundancy.
  • When referring to snapshots I am referring to snapshots stored on the backup storage, not production. Snapshots on the same storage as your production are only useful for non-hardware issues and some ransomware issues. Your snapshots must exist on a separate server and storage. Your snapshots must also be replicated off-site, minus hourly snapshots, unless you absolutely cannot afford to lose the last hour (billing/transaction details).
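The daily/weekly/monthly rotation above can be sketched as a small script (using the business-style 7 daily / 4 weekly / 12 monthly counts; the Sunday and first-of-month boundaries are my own assumption):

```python
from datetime import date, timedelta


def snapshots_to_keep(today, daily=7, weekly=4, monthly=12):
    """Dates a 7-daily/4-weekly/12-monthly rotation retains, counting back."""
    keep = set()
    # Daily: the last `daily` days, including today.
    for i in range(daily):
        keep.add(today - timedelta(days=i))
    # Weekly: the most recent `weekly` Sundays.
    sunday = today - timedelta(days=(today.weekday() + 1) % 7)
    for i in range(weekly):
        keep.add(sunday - timedelta(weeks=i))
    # Monthly: the first of each of the last `monthly` months.
    year, month = today.year, today.month
    for _ in range(monthly):
        keep.add(date(year, month, 1))
        year, month = (year, month - 1) if month > 1 else (year - 1, 12)
    return keep


print(len(snapshots_to_keep(date(2024, 6, 15))))  # overlaps collapse in the set
```

Note that a daily snapshot that is also a Sunday only counts once, which is why a set is the natural structure here.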
conorab OP ,

I wish XMPP had stuck around. I used to run a Prosody server and it worked well enough, but I think the E2E keys would occasionally need to be fixed. I used Conversations on Android as a client at the time. The things that make me hesitate to dedicate too much effort to Matrix are:

  1. the supposed funding issues they’re having (which is part of why I paid for hosting)
  2. the FOSS community’s seeming tendency to keep jumping between messaging platforms, so there’s never a chance for one to gain critical mass
  3. how buggy the web client and Element iOS client have been.

When I stopped running an XMPP server I switched the only other user over to Signal and we’ve stuck there since. With how buggy the Element iOS client, FluffyChat and the web client have been for me (app crashes when joining rooms, rooms “don’t exist” when they in fact do), I don’t want to risk an upset by trying to push people there, since Signal is good enough. And these are all issues that exist when the company who makes Matrix (plus contributors, of course) is the one running the server.

At this point I’m just inclined to grab the export they provide and switch to matrix.org for the 1 or 2 rooms I care to have a presence in.

conorab ,

Unless I am mistaken, the total number the other comment is raising is how much power the entire network spent calculating the transaction, not how much the winner (the one who got paid out) spent. You calculate the energy consumption of the entire network because that power was still spent on the transaction even if the rest of the network wasn’t rewarded. I have no idea if the numbers presented are correct but the reasoning seems sensible. Maybe I’m wrong though. :)
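The arithmetic being described, with made-up numbers purely for illustration (neither figure is from the comment being discussed):

```python
# Whole-network power is divided across transactions because every
# miner burns energy on each block, whether or not they win the reward.
network_power_gw = 15   # hypothetical total network draw, in gigawatts
tx_per_second = 4.0     # hypothetical network-wide transaction throughput

joules_per_tx = (network_power_gw * 1e9) / tx_per_second
kwh_per_tx = joules_per_tx / 3.6e6  # 3.6 MJ per kWh
print(f"roughly {kwh_per_tx:.0f} kWh per transaction")
```

The point is only that the numerator is the whole network’s draw, not the winning miner’s.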

conorab ,

Yep, my bad! I misremembered .local/share/steam as .cache/share/steam. :)
