chiisana

@chiisana@lemmy.chiisana.net

chiisana ,

Too bad it sounds like they’re SMR drives. Else it might be fun to shuck them for SFF servers.

Problems with creating my own instance

I am currently trying to create my own Lemmy instance and am following the join-lemmy.org docker guide. But unfortunately docker compose up doesn't work with the default config and throws a yaml: line 32: found character that cannot start any token error. Is there something I can do to fix this?...

chiisana ,

If memory serves, the default docker compose exposes the database port with a basic hard-coded password, too. So imagine using the compose file without reading it too closely; next thing you know, you're running a free Postgres database for the world.

Edit: yep, still publishing the db port with a hard-coded password…
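
If you want to verify this on your own deployment, a quick sanity check might look something like this (the hostname is a placeholder; 5432 is Postgres' default port):

```
# On the host: list which container ports are published to the outside world.
docker ps --format '{{.Names}}\t{{.Ports}}'

# From a different machine: check whether the Postgres port answers publicly.
nc -zv your-instance.example.com 5432
```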

chiisana ,

BuyVM has a $24/yr KVM server that you can attach storage to at $5/TB/month. So 5TB should set you back $324/yr all in ($24 + 5 × $5 × 12). They've been around for quite some time (I've been a client since 2011), so they're not likely to disappear anytime soon.

chiisana ,

No multi-region unless you roll it yourself. Their offerings are primarily web-hosting centric, so you'd need to do the heavy lifting yourself if you want more infra. Also worth noting that they're definitely not in the same league as the big players; they're just an old vendor that isn't likely to disappear on you.

chiisana ,

There are two ways around the symptoms you're trying to treat:

  1. Don't bother with internal vs external. Always route through the external address, so traffic is encrypted by the origin cert between your server and CloudFlare, and again between CloudFlare and your browser. This is simplest in that you don't need to manage two sets of DNS records, and you don't end up with different certificates for the same domain (in the odd event you end up needing to do something like certificate pinning). Or;
  2. Just add the origin cert to your systems' trust store (see the sketch below). You know the certificate, it will encrypt the traffic anyway, and you're accessing the service via the intranet, so there's really no attack vector here.
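
For option 2, a minimal sketch on a Debian-flavored system (path and filename are examples; other distros keep their trust store elsewhere):

```
# Copy the CloudFlare origin certificate into the system trust store and rebuild it.
# update-ca-certificates only picks up files ending in .crt in this directory.
sudo cp cloudflare-origin.pem /usr/local/share/ca-certificates/cloudflare-origin.crt
sudo update-ca-certificates
```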

Probably worth calling out that although 1 feels like there are more hops (and there absolutely are), with any decent internet connection you're probably not going to feel it. The edge server is usually situated very close to your ISP (that's how they make sure everything responds quickly), so your overall round trip should only be affected by a negligible amount of time that you most likely won't notice.

chiisana ,

Similar things were done on Reddit during the big exit. I doubt it achieved what people expected it to. Even if edited posts aren't visible externally, I'm sure they can easily access (and thereby make deals to license) the data out of their backend/backups; it's just a matter of how hard they want to try (hint: it's really not very hard).

chiisana ,

The RAID rebuild time is going to be longer than the OEM warranty… love it!

chiisana ,

When I was younger, I bought a fair bit of music CDs, mostly for the sake of collecting. To this day, most are still unopened in their original plastic wrap. I no longer have a disc player in any of my computers, nor any functional Discman in my possession, so listening to or ripping them is probably never going to happen.

Sometimes I see people complaining about digital versions, but looking back, it probably doesn't matter nearly as much for the vast majority of cases…

chiisana ,

If you skim over the original publication, it is actually worse: it is a non-problem. Other users have pointed out already: ~100MB served over a 5-minute period is quite literally nothing, even for one small VPS serving the content independently, let alone a site with CloudFlare in front like they claim to have.

chiisana ,

Or 4) Ignore the noise and do nothing; this is a case of a user talking about things they don't understand at best, or a blog intentionally misleading others to drum up traffic for themselves at worst. This is literally not a problem. Serving that kind of traffic can be done on a single server without any CDN, and they've got a CDN already.

chiisana ,

AWS charges $0.09/GB. Even assuming zero caching and always dynamically requested content, you'd need 100x this "attack" to rack up $1 in bandwidth fees. There are way faster ways to rack up bandwidth fees. I remember the days when I paid $1/GB of egress on overage, and even then, this 100MB would've only set me back $0.10 at worst.
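
To make the arithmetic concrete, using the rates above:

```
# 100 MB = 0.1 GB at AWS' $0.09/GB egress rate:
awk 'BEGIN { printf "one run: $%.4f; runs to reach $1: %d\n", 0.1*0.09, 1/(0.1*0.09) }'
# => one run: $0.0090; runs to reach $1: 111
```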

Also worth noting that those who host on AWS aren't going to blink at $1 in bandwidth fees, and those who would care would be hosting elsewhere with cheaper egress (i.e. billed by the megabit or with some generous fixed allocation); the more sane among them would be serving behind CDNs that are even cheaper.

This is a non-issue written by someone who clearly doesn’t know what they’re talking about, likely intended to drum up traffic to their site.

chiisana ,

Fortunately, you’d be very hard pressed to find bandwidth pricing from 18 years ago.

The point is that the claimed issue is really a non-issue, and there are much more effective ways to stress websites without needing the fediverse as an intermediary.

chiisana ,

If you feel it will take too much time to maintain something you're deploying, then there may also be a toolset/skillset mismatch. Take Docker/K8s, which you've called out, for example; they're the graduated steps for deploying things in the industry. Deploying via Docker drastically reduces the time to get up and running by eliminating large swaths of dependency management, and gives you the option to use platform tools to manage self-updates if that's something you want (though auto-updates could introduce failures where manual upgrade steps are required). You'd graduate to K8s as your infrastructure footprint starts to grow. Learning the right tools could reduce both the barrier to entry and the time requirements on the apps front.

Having said that, it is probably better to ask the inverse: what is it that you’re trying to achieve and why?

Without a reason that resonates well with you, you're not going to find time in your allegedly already-busy life to keep it working. Nor will you be willing to find the time to learn the right tools to deploy these things.

chiisana ,

Have you seen OwnTracks?

chiisana ,

I played with it forever ago, but from memory, that is most likely due to the way it is designed to conserve battery. The app waits for significant-location-update notifications from the OS and then sends the updated location to the tracking server. It doesn't (or I should say it didn't; I don't know about now) actively poll the location at fixed intervals.

chiisana ,

Been forever since I did any work with cryptography, but if my memory is correct:

Alice needs Bob's public key to verify that a signed message from Bob hasn't been altered;

Bob needs Alice’s public key to encrypt a message that can only be decrypted by Alice;

If Bob sends Alice a message encrypted with Alice's public key, signed with Bob's private key, containing "Hello, how are you?", this message could be verified as authentic by Charlie using Bob's public key, but Charlie cannot see the contents of the message, as Charlie does not have Alice's private key.

Without Alice disclosing their private key, how can Charlie review the content of a reported message from Alice claiming Bob sent them something inappropriate?

I.e. how can Charlie be certain if Alice claims Bob sent "cats are evil", when Charlie cannot decrypt the original message, only verify (via Bob's public key) that the original message has not been altered?
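
A rough sketch of those mechanics with throwaway RSA keys (all filenames here are hypothetical):

```
# Generate key pairs for Alice and Bob.
openssl genpkey -algorithm RSA -out alice.key && openssl rsa -in alice.key -pubout -out alice.pub
openssl genpkey -algorithm RSA -out bob.key && openssl rsa -in bob.key -pubout -out bob.pub

echo 'Hello, how are you?' > msg.txt

# Bob signs the plaintext with his private key; anyone with bob.pub can verify it...
openssl dgst -sha256 -sign bob.key -out msg.sig msg.txt

# ...and encrypts the plaintext with Alice's public key; only alice.key can decrypt it.
openssl pkeyutl -encrypt -pubin -inkey alice.pub -in msg.txt -out msg.enc

# Charlie can verify the signature, but only if he has the plaintext to check it against:
openssl dgst -sha256 -verify bob.pub -signature msg.sig msg.txt
```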

chiisana ,

Aha! Something just clicked; I'd been thinking about this continuously since the original reply. The answer is… more signing, and maybe even more keys!

A message would be signed multiple times.

If Bob wants to send Alice "Hello, how are you?", the plain text would be signed with Bob's general private key, verifiable with Bob's general public key. This would allow Alice to forward the message to anyone, and they could still verify it did indeed come from Bob.

The plain text and signature are then encrypted with one of Alice's public keys, so only Alice can decrypt them to see the message and signature. This may be a thread-specific key pair for Alice, so she's not re-using the same keys between different threads.

The encrypted message is then signed again by Bob, using one of Bob's private keys, so that Alice knows the encrypted message has not been altered. This could also be the thread-specific key noted above.

If Alice were to report Bob, Alice would need to include both the plaintext and the inner signature. This way, the inner signed message can be reviewed by forwarding the plaintext and signature to moderation (Charlie just needs to verify the signature against the plaintext with Bob's public key), while the exchange itself remains readable only to Alice and Bob.
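
Reusing the throwaway keys from the sketch in my earlier comment (bob.key/bob.pub, alice.pub, msg.txt), the layering might look like:

```
# Inner signature: Bob signs the plaintext so anyone can later verify provenance.
openssl dgst -sha256 -sign bob.key -out inner.sig msg.txt

# The plaintext (bundled with inner.sig via hybrid encryption in practice) is
# encrypted for Alice, and the ciphertext gets an outer signature for integrity.
openssl pkeyutl -encrypt -pubin -inkey alice.pub -in msg.txt -out msg.enc
openssl dgst -sha256 -sign bob.key -out outer.sig msg.enc

# Alice checks the outer signature on what she received:
openssl dgst -sha256 -verify bob.pub -signature outer.sig msg.enc

# To report, Alice hands Charlie just msg.txt and inner.sig; Charlie verifies
# it came from Bob without ever needing Alice's private key:
openssl dgst -sha256 -verify bob.pub -signature inner.sig msg.txt
```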

Et voila!

chiisana ,

Having seen some spicy pillows in my time… I'd hate to be onboard if any of the battery containers becomes a bouncy castle.

chiisana ,

Not necessarily just yaml (there are things yaml cannot do well, and even ignoring that, traefik can also use toml or container labels), but rather, the entire concept of infrastructure as code is way better than GUIs. Infrastructure as code allows for much better linting, testing, and version control, thereby providing better stability and reproducibility.

Is it that difficult to run Mastodon over Docker?

I am used to simple things running on Docker (Jellyfin, Nextcloud, etc.) I am looking at running my own personal Mastodon instance (maybe share it with a few friends and family), but I like using Docker. Looking at install guides, the steps required seem to be much harder than just editing docker-compose.yml and running the...

chiisana ,

Most providers offer some kind of OS reload, and you may be able to use custom ISOs for the process. However, that doesn't change the fact that if you don't want to change OS (especially if you're already using something more commonly seen in production environments, like Debian), then you shouldn't change the OS.

chiisana ,

If you have enough drive bays, I'd probably shut down the server, live-boot into any Linux distro without mounting the drives, then use dd to copy from the 1st 256GB to the 1st 500GB and from the 2nd 256GB to the 2nd 500GB, then boot the system and use resize2fs to expand the file system to fill the partition.

Since RAID1 is just a mirror, the more adventurous type might say you can just hot-swap one drive, let it rebuild, then hot-swap the other, let it rebuild again, and then expand the file system, all online and live. Given it is only 256GB of data max on a pair of SSDs, it shouldn't take too long, but I'm more inclined to do it safely.
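
A rough sketch of the safe route with hypothetical device names (check lsblk first; and if there's an mdadm array in the middle, you'd also grow it with mdadm --grow before resizing):

```
# Clone each old 256GB disk onto its 500GB replacement. Destructive if you get
# if= and of= backwards, so double-check device names.
dd if=/dev/sda of=/dev/sdc bs=64M status=progress conv=fsync
dd if=/dev/sdb of=/dev/sdd bs=64M status=progress conv=fsync

# After booting from the new pair: grow the partition, then the filesystem.
growpart /dev/sdc 1        # from cloud-guest-utils; fdisk/parted work too
resize2fs /dev/sdc1
```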

chiisana ,

Can't wait to see how these compare to the M series from Apple. More competition should be good for both platforms, forcing them to push performance further.

chiisana ,

I don’t understand how this could be the issue.

If you're using Google Workspace, Google will give you the appropriate DMARC, DKIM and SPF records to add to your DNS. The name servers should then serve the values you've entered to the recipient's server, thereby ensuring delivery.

Does the free DNS on NameCheap no longer allow certain types of records? Aren't those mail-specific DNS records all just TXT/CNAME records now (no more weird legacy SPF record type), which are fairly basic and typical?
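
Easy enough to check from the outside, too (domain is a placeholder; google is Workspace's usual DKIM selector):

```
dig +short TXT example.com                      # SPF lives in an ordinary TXT record
dig +short TXT _dmarc.example.com               # DMARC policy
dig +short TXT google._domainkey.example.com    # DKIM public key for Google Workspace
```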

chiisana ,

No, it does not make any sense. There are literally thousands of domain registrars out there, and almost every single last one of them will offer free DNS service with registration. Also, more specifically, looking up who hosts your DNS is not even part of the email delivery flow.

The most well-known spam registrar is GoDaddy, as they spam ads everywhere and everyone and their third cousin's dog knows about them. NameCheap is a large registrar but isn't that big of a fish comparatively speaking. Regardless, blocking any registrar that size the way you're describing would break way more businesses and hurt the recipient provider's own reputation. This is honestly starting to sound more like a smear campaign than anything grounded in actual technology.

chiisana ,

The name servers themselves are not part of the equation. The commonality in all those links is sending email from Namecheap's shared email/web hosting, not its name servers. Sending email from shared hosting is asking for trouble no matter who you host with, because those IP ranges are always being abused, especially with the larger providers, simply due to larger exposure. The detection mechanism here is really simple and observable in the raw mail headers by checking the Received: line. Filtering email on this information is a typical part of the anti-spam model; a typical implementation would be via DNSBL providers such as Spamhaus, SORBS and the like. The solution is always to use a trusted transactional email service to deliver email from the website instead.

That, however, is a very different problem from dedicated email services like Google Workspace Gmail, because you'd not be sending from your web server's IP address, but rather via Google's dedicated range. As such, the Received: line is much less likely to yield a match in DNSBLs. Validation for these is then done via the SPF/DKIM/DMARC records on your domain, checking whether your configuration permits delivery from the server in the Received: line (look for Received-SPF) and whether you have the appropriate signing (look for Authentication-Results: and the bits about the various stages of DKIM and DMARC).
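
If you want to eyeball this on a real message, save the raw source and pull out the relevant headers (filename is an example):

```
grep -iE '^(Received|Received-SPF|Authentication-Results):' message.eml
```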

chiisana ,

NetApp is big in the enterprise DAS space; think big server rack with highly redundant components providing block storage to multiple workstations in the office. If I remember correctly, they're also the ones whose drives are formatted with 520 bytes per sector, and you'd need to reformat them using sg_format to 512 bytes per sector before you can use them with some systems.
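
The reformat itself is a one-liner from sg3_utils, though it can take hours per drive (device name is a placeholder; confirm the right one with lsscsi first):

```
# Low-level reformat from 520 to 512 bytes per sector.
sg_format --format --size=512 /dev/sg2
```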

WTF is up with switches?

Okay, I've been watching lots of YouTube videos about switches and I've just made myself more confused. Managed versus unmanaged seems to be having a GUI versus not having a GUI, but why would anyone want a GUI on a switch? Shouldn't your router do that? Also, a switch is like a tube station for local traffic, essentially an...

chiisana ,

There is only one router on your network. It routes traffic from one machine to another. This is typically also the gateway, and it only has so many ports.

If you want more physical devices connected to your network, you’d need switches to fan out your network.

Un-managed switches essentially take packets in one port and pass them out another port; easy peasy, nothing fancy.

Managed switches, however, can do more than just take a packet in one port and push it out the other side. You can set up link aggregation, for example, allowing more throughput by using two or more ports to the same destination (say, a central file server). You can have L2 vs L3 switches, which route differently. You can have multiple paths to another machine for redundancy, but then you must implement STP to prevent broadcast loops, etc.
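
As a taste, here's the server-side half of link aggregation on Linux (interface names are examples; the managed switch needs a matching LACP configuration on its end):

```
# Create an 802.3ad (LACP) bond and enslave two NICs to it.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```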

Once your network grows larger than just Internet for a couple of desktops, it gets a lot more interesting.

chiisana ,

If you use everything from the same vendor, you could manage them in one place (see Ubiquiti’s UniFi stack as example), but at the end of the day, they serve different purposes and target different parts of your network.

chiisana ,

0.19 counts active users differently. Prior to 0.19, a user was only counted if they posted; after 0.19, any interaction results in the user being counted as active. This inflated the active-user numbers hugely, as all lurkers are now counted.

Active users are dwindling. You can see the steep drop-off prior to the change and a slow but continued decline after the update.

I do not know the reason for the number of posts falling off, but that doesn’t look healthy either to be honest.

chiisana ,

At least on the nerd side of Lemmy (communities pertaining to technology, self-hosting, etc., which I'd imagine are the larger drivers, given how complicated joining is compared to a traditional centralized setup; see the same hurdle for Mastodon vs Twitter, which didn't gain adoption until Threads and Bluesky started attracting less technical users), I'm seeing troubling signs of slowing down and shrinking.

If people actually want Lemmy in these areas to grow, it is important to be a lot more inclusive, and to understand when not to participate, in order to foster better community growth.

What I mean on the inclusive side: FOSS advocates need to back off with the "You don't understand FOSS, go make your own instance" comments, so other users don't just bounce right off and leave after getting bored with nothing to interact with.

What I mean by understanding when not to participate: literally don't participate in niche communities that don't apply to you. So many Android users comment irrelevant anti-Apple sentiment in the Apple Enthusiasts community, for example. This drives away actual users who are interested in the discussions.

The charts don't lie. Lemmy is shrinking, not growing. After getting a new lease on life with 0.19 thanks to what is essentially clever accounting, the community is still slowing down and shrinking. And for the nerdier side of the userbase, unless the community by and large starts to interact more inclusively, the whole thing is sadly going to be just a small blip that'll soon fizzle out.

chiisana ,

The linked toot says they'll publish on AltStore, which is not the App Store. Is there a different statement elsewhere stating otherwise?

chiisana ,

So does AltStore. With the Core Technology Fee and application requirements, it is much, much higher than the $99/yr developer fee.

Also, that's beside the point. I'm trying to find out whether the developer said they'd publish on the App Store, not AltStore.

chiisana ,

https://developer.apple.com/support/core-technology-fee

Also worth noting, distribution on a third-party app store does not exempt developers from the developer program fee; developers must sign a separate agreement to opt into the new business terms.

So, not only are they still responsible for the $99/yr fee, they'd also need to sign a new agreement to opt into the new business terms, which has a separate additional fee schedule attached.

chiisana ,

I’ve seen that. Thanks! Now if only they’d enable it for Europe instead of forcing people to use their AltStore, which ends up costing them more, AND costing everyone else more.

chiisana ,

May I interest you in this $5 wrench?

chiisana ,

Honestly, neither does having to securely wipe an SD card (or any storage device, for that matter) as one crosses an international border, like the thread further up suggests. So the whole thing is just having fun with (potentially roleplaying as) over-paranoid people :)

chiisana ,

Approx 35k power on hours. Tested with 0 errors, 0 bad sectors, 0 defects. SMART details intact.

That's about 4 years of power-on time. Considering they're enterprise-grade equipment, they should still be good for many years to come, but it is worth taking into consideration.

I've bought from these guys before; packaging was super professional. Cardboard box with specially designed foam drive holders; each drive also individually packed in an anti-static bag with silica packs.

Highly recommend.

chiisana ,

Backblaze has very similar drive models in service with an annualized failure rate of less than 1% on average, and those drives have been in service for 5 years. The average age will continue to rise as usage time continues to rack up.

chiisana ,

This is pretty standard for enterprise equipment: it comes with some number of years of warranty, enterprises depreciate the cost over those years, and they sell the hardware as/before the warranty expires to recover whatever value they can (as far as the books are concerned, it's already depreciated to $0 anyway).

chiisana ,

Pretty sure that’s the usual preventive wear clicking sound that’s just part of newer drives’ design…?

chiisana ,

Skip ZFS unless you're planning to get all 40 drives up front, which is pretty bonkers for a home server setup. Acquire 40 drives incrementally and you'll be hit with the hidden cost of ZFS.

chiisana ,

I think the biggest issue home users will run into (until the finally-merged PR gets released later this year) is that as they acquire more drives, they're going to see a growing proportion of those drives used for parity, since each added vdev brings its own parity, unlike a traditional RAID array that they could simply expand. Once vdev expansion is possible, the system will be a lot more approachable for home users who don't acquire all their drives up front.

Having said that, this is probably a lot less of a concern for someone intending to set up 40 drives in RAID1, as they're already prepared to use half of them for redundancy…
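
To illustrate with hypothetical pool and device names: today you grow a pool by adding whole vdevs, each with its own parity, while the expansion work should allow single-disk growth once it ships:

```
# Today: growing means adding another whole raidz vdev (2 more parity drives here).
zpool create tank raidz2 sda sdb sdc sdd sde sdf
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl

# Once raidz expansion is released, attaching a single disk to an existing vdev
# is expected to look something like this:
zpool attach tank raidz2-0 sdm
```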

chiisana ,

There was some older language taken out of context, and they've since removed the wording after things blew up. Some people who had a knee-jerk reaction will continue to hold their initial position, and others will take ages to convince otherwise.

chiisana ,

Security when you're on an untrusted network. I can trust Google to snoop my banking data and update the spending-power info on my ad profile; I can't trust the random dude in a trench coat also using the public wifi when I'm traveling outside my roaming coverage.

I joke of course, but the security aspect is still valid.

chiisana ,

Strictly speaking, Encrypted Client Hello (ECH) paired with DNS over HTTPS (DoH) can resolve this. But not many people have their systems set up this way, so it is still pretty niche.

chiisana ,

Most DNS requests are clear text, which is why DoH was introduced: to obscure them so no one can snoop on you looking up something-embarrassing.com. Also, on the initial request, before you get the TLS certificate from the web server, you must tell the server at 169.169.169.169 that you're looking for something-embarrassing.com before it can present the correct certificate; this is why ECH was introduced. Neither has become mainstream yet, so there is still some basic leakage going on.
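
You can see the difference with a reasonably new dig (9.18+ for +https; the hostname is an example):

```
dig example.com A                  # classic lookup over UDP/53, readable by anyone on-path
dig +https @1.1.1.1 example.com A  # the same query over DoH, opaque to on-path snoopers
dig +short example.com HTTPS       # an ech= field in the HTTPS record means ECH is offered
```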

NPM - What services need what toggled? ( slrpnk.net )

Hiya, just got NPM installed and working, very happy to finally have SSL certs on all of my services and proper URLs to navigate to them, what a breeze! However, as I am still in the learning process: I am curious to know when to enable these three toggles and for what services. I assume the "Block Common Exploits", can always...

chiisana ,

I don’t use NPM but if “Cache Assets” means what it means in the traditional sense, it wouldn’t affect most home deployments.

Historically, resources were limited, and making Apache load image/JavaScript/CSS files from disk each time they were requested (even if the OS kernel eventually cached them in RAM) was a resource-intensive process. Reverse proxies stepped up, identified assets (images, JS and CSS), and stored them in memory for subsequent requests. This reduced the load on the Apache web server and the hops required to serve each request, thereby making everything faster.

For homelabs and single-user systems, this is essentially irrelevant, as you're not going to put enough load on the back-end system to notice the difference. It may still be good to turn it on, but if you notice odd behaviors (i.e. updates to CSS or images not taking effect), it may be a good idea to turn it off to see if that's the culprit.
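
One way to spot whether a cached copy is being served (URL is a placeholder; header names vary by proxy, but something like X-Cache or Age is common):

```
curl -sI https://example.com/assets/style.css | grep -iE '^(x-cache|age|cache-control|etag):'
```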

chiisana ,

Filled it out!

I understand your experiment is already under way, so it is unlikely that you'd be able to change your methodology at this point. One small piece of feedback on the questions, however. As presented (to me; maybe the order is randomized, I don't know), the questions felt like they leaned towards "difficult/complex to use", which may lead users to skew their responses negatively. While this may be counterweighted by the fact that you're asking a niche community already using these systems to complete the survey, it may still be a good idea to ask more neutral questions and allow users to select from a spectrum instead.

For example, instead of "I find the system unnecessarily complex; Strongly Disagree… Strongly Agree", it may be better to ask "How do you find the system? Very Straightforward… Very Complex". Your score for each selection would remain consistent (1 is less complex, 5 is more complex), but you're not impressing a negative sentiment on the user.

Anyway, good luck with your study! Looking forward to your published results!
