@Dark_Arc@social.packetloss.gg avatar

Dark_Arc

@Dark_Arc@social.packetloss.gg

Hiker, software engineer (primarily C++, Java, and Python), Minecraft modder, hunter (of the Hunt Showdown variety), biker, adoptive Akronite, and general doer of assorted things.


Dark_Arc ,

The reason the US and Canadian governments are doing this is to stop that $10k car from destroying the automotive industry in North America, resulting in layoffs that would make the recent tech layoffs look like peanuts.

I agree we need cheaper EVs in North America; I want one too... There's an Ars Technica article where Ford basically goes "we thought everyone wanted expensive trucks ... we made those electric ... we realize we missed the mark, we're going to work on smaller, cheaper EVs." So, they're coming, hopefully within the next couple of years.

I'm not sure how important manufacturing still is to the Canadian economy, but for the US economy ... trying to protect domestic production is important (and we should've done it years ago instead of letting cheap Chinese imports destroy a large number of factories in North America).

Dark_Arc ,

That reduces a lot of relevant context, like why they needed the 08 bailouts in the first place, how many times they've been bailed out, and the fact that China has heavily subsidized these cars to the point that even if they were making the same vehicle, it would be significantly more expensive.

Dark_Arc ,

Agree on the first part ... disagree on the latter.

Joe has invested heavily in domestic production of "the next generation of technology" (chips, solar panels, electric vehicles, etc).

This is in no small part about protecting that ... and I don't think there's much in terms of negotiating that China could do here.

Dark_Arc ,

I can't read it because of the paywall but IIRC (based on a similar article) that was such a nothing-burger issue.

People turned on an entirely optional (I think off-by-default) setting for a feature that allowed discovery of users by location ... and, shocked Pikachu, they could be tracked or something like that.

Dark_Arc ,

Server-side source code is a red herring. It's meaningless, it can't be verified.

The latter point is fair.

Dark_Arc ,

Closed source server or open source server, you can't know what server they're running.

Pavel's whole argument here is basically the same thing for the client; "you can't verify the build in the app store matches what's in the source code, so you have no way of knowing it's actually what you're auditing."

Dark_Arc ,

I don’t know why Telegram users keep making excuses for that platform.

Honestly? Because the others are just so bad.

  • Element has an extremely clunky UX and uses Electron. The other Matrix client implementations are incomplete, buggy messes.
  • Signal can't sync old messages to the desktop, uses a messy Electron interface, and lacks a bunch of features/polish I've come to expect.
  • Discord doesn't even pay lip service to privacy and similarly doesn't invest in native apps.
  • Threema has been saying that cross-platform/multi-device connectivity is coming for like 2+ years and has had nothing but the most minor of unexciting features added.
  • WhatsApp is run by Meta, has a crappy desktop experience, and has had several serious security vulnerabilities.
  • Jami is ... extremely glitchy.
  • Session is basically Signal backed by a Crypto platform.

If someone took Telegram's UX and feature set and paired that with Signal's approach of "everything is encrypted", that would be a winner. I kinda hope someday Telegram just does that and moves everything to E2EE. When Telegram was launched E2EE for group chats/at scale wasn't really a thing ... now it's not nearly as novel but nobody has deployed E2EE with a feature set like Telegram's.

It’s not nothing if Telegram makes people believe they only share their location in a limited manner, but instead broadcast it to the whole world.

That's not even what happens by the way. It's just that you can spoof a device into random locations and eventually figure out where someone is.

Dark_Arc ,

That's fair ... especially in the case of something Telegram-like, where the server is a major portion of the security model (for non-secret chats).

For truly private E2EE chats, though, the attacks on Telegram's lack of an open source server side (and Signal's presence of one) are fairly meaningless. If the client E2EE is correct and you're using a reproducible build, the server, and even any MITM (man in the middle), shouldn't matter.

Dark_Arc ,

A "toot" isn't a very persuasive piece of journalism.

I can verify that it absolutely impacts groups run by queer communities in the Gulf, because I was in one such group that was monitored and shut down by Etidal.

That claim needs a lot more investigation and context. At the very least, it needs to be investigated by a credible third party.

Also, do you even know what the feature you're criticizing is? A "channel"? Because it's not even really a part of the messaging portion of Telegram. It's basically an in-app blogging platform.

Dark_Arc ,

That news article says nothing about targeting groups unfairly and only talks about the removal of extremist activity from what's a social media platform (which is standard practice for all social media platforms). Specifically, that article talks about targeting "combating the online propaganda of ISIS, Hay'at Tahrir Al-Sham, and Al-Qaeda", which I believe is uncontroversial for all decent and reasonable people.

Dark_Arc ,

If that's your bar for gaslighting I hate to tell you I can just edit my messages all over the place to say things that were never said.

Dark_Arc ,
  • Signal can’t sync old messages to the desktop
  • Persistent voice rooms
  • Custom emoji
  • Animated emoji
  • Location sharing
  • Chat folders
  • Topics/rooms for larger group chats
  • Support for larger group chats
  • Quoted replies (i.e., quote part of a reply or create an arbitrary quote block)
  • Code snippets
  • Message forwarding
  • Polls
  • Animations in the UI
  • Detailed custom theming
  • Chat room theming
  • A content index (e.g., view only the files, links, videos, etc that were sent in this chat)
  • Group invite links to people you don't have in your contacts
  • Channels (i.e., micro-ish blogging)
  • A nice bot API
  • Subjective UI/UX changes to put things in more reasonable places (e.g., why can't I right click on a chat to pin it in the desktop client, why is the Electron menu bar shown by default)

And probably several other things I've forgotten because ... basically nobody I know is still using Signal.

Dark_Arc ,

Signal's location share AFAIK can't be a live location share (which is useful during events like amusement park trips and stuff)

They have invite links to group chats? I don't know how that would work

Dark_Arc ,

I don't believe there's any issue actually building the app. However, the app store policies forbid them from shipping it or offering it as a side-loadable option.

Dark_Arc ,

I wouldn't be so sure ... contributors are hard to come by. You need people with time AND the experience or strong desire to learn.

There are a lot more people that have ideas about making contributions than actually make contributions.

Dark_Arc ,

Thanks for pointing that out! That's ... truly special 😂

Dark_Arc ,

This comment is the worst misrepresentation of penguins I've ever seen. It sounds like a red herring. It makes me want to vomit. People get away with this because nobody actually knows what penguins are. They just take what the media writes and accept it as truth.

On a serious note, plenty of people here surely know what net neutrality is. Net neutrality is the guarantee that your ISP doesn't (de-)prioritize or outright block traffic; all packets are treated equally. In other words, it means you don't have to pay $5 extra for high speed access to Lemmy because Reddit and your ISP (say, Comcast) would prefer Lemmy not exist.

Dark_Arc ,

But tiktok the company is?

Yes, among other things they're also explicitly suppressing pro-Israel content https://lemmy.world/post/14643617

Dark_Arc ,

Hm... I agree that Instagram is not a neutral source. I also agree that there are going to be some biases imposed by the user base.

I don't believe the US government plays a major role in Meta's content moderation behavior. Meta if anything has shown a reluctance towards any political or news content in recent years. That's not to say the US government doesn't have influence but their influence is (from what I've seen) oriented around fighting disinformation and threats of violence ... not cherry-picking the discussion of subject matter. I think there would've been a pretty significant leak out of Meta by now if there really was a strong political bias or government influence in content moderation.

I don't think any of these issues particularly fall along political lines within the US either. There are people on the left and right taking different sides on virtually all of the topics with statistical divergence; many of them are unusually bipartisan within the US.

Dark_Arc ,

So, I took another look at the report, they did do this sort of statistical bias correction. See "U.S. Politics" page 8 https://networkcontagion.us/wp-content/uploads/A-Tik-Tok-ing-Timebomb_12.21.23.pdf

Dark_Arc ,

It sounds like just from how you describe things Swift is using fibers instead of real OS threads(?)

Seriously look at this comparison of DispatchQueue and OperationQueue

What are these things/what is this comparing?

[Thread, post or comment was deleted by the author]

    Dark_Arc ,

    Can't you still get the old Counter-Strike by using the beta channels?

    And I mean... ultimately, blame Apple for being a pain in the butt and not supporting Vulkan.

    Dark_Arc , (edited )

    Wow the responses here are really off at the moment. I'm going to try and help.

    So, what you're going to want to do is add all the subdomain A records you need to your DNS (sounds like you're using Cloudflare for that; not required, but that should be fine).

    Those DNS records are all going to point to the same IP address; that's fine.

    What you need to do after that, so that you don't have to enter ports, is a bit more complicated. For web servers, some kind of reverse proxy like nginx, HAProxy, Apache, etc. is what you need. The term you're looking for is "virtual host".

    A virtual host setup is basically one where a reverse proxy looks at the domain name that was used to access the server over HTTP and then uses that to decide which server running on the machine you actually talk to.

    It's HTTP that actually passes along the domain name you used, so if the service isn't HTTP you may or may not be able to do anything, depending on the underlying protocol.

    So to recap:

    1. Set up your DNS records
    2. Set up an HTTP reverse proxy
    3. Add virtual hosts for each service you added a DNS record for to the reverse proxy (so that the reverse proxy can turn foo.example.com into example.com:xyz -- localhost:xyz in practice, morally example.com:xyz though -- behind the scenes)
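The routing decision a reverse proxy makes in step 3 can be sketched in a few lines. This is just an illustration of the idea (a real setup would use nginx/HAProxy config, not Python); the subdomains and ports here are made up:

```python
# Minimal sketch of Host-header routing, the core of a "virtual host" setup.
# The proxy inspects the HTTP Host header and picks a local backend from it.

# Hypothetical mapping: subdomain -> local port the service listens on
VHOSTS = {
    "foo.example.com": 8081,
    "bar.example.com": 8082,
}

def pick_backend(host_header: str, default_port: int = 8080) -> str:
    """Return the localhost backend for a Host header, e.g. 'localhost:8081'."""
    # Strip any ':port' the client included in the Host header
    hostname = host_header.split(":")[0].lower()
    port = VHOSTS.get(hostname, default_port)
    return f"localhost:{port}"

print(pick_backend("foo.example.com"))        # localhost:8081
print(pick_backend("bar.example.com:443"))    # port in the header is ignored
print(pick_backend("unknown.example.com"))    # falls through to the default
```

Every subdomain resolves to the same IP, but the Host header tells the proxy which local service the request is really for.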
    Dark_Arc ,

    I've never used wildcard DNS; I'm not even sure Namecheap's DNS supports wildcards. But I've also never been in a situation where there's a dominant single machine I want my DNS to resolve to.

    After searching ... I'm not entirely sure I would use wildcard DNS https://serverfault.com/a/483625

    My preferred strategy is actually alias records pointing at one primary address record, so if I change IPs I can just move the machine. I forgot about that last night.

    Dark_Arc ,

    Interesting; well, it's good info/good to know it exists ... though I'm probably going to stick to explicit listing. I like to be able to look at my DNS records and know what connects to what.

    Dark_Arc ,

    I'll contest that there is such a thing as good code. I don't think experienced devs always do the best job of passing on what works and what doesn't, though. Universities certainly could do more software engineering/architecture.

    My personal take is that SRP (the single responsibility principle) is the #1 thing to keep in mind. In my experience DRY (do not repeat yourself) often takes precedence over SRP -- IMO because DRY is easy to (mis-)understand -- and that ends up making some major messes when good/reasonable code is rewritten into some ultra-compact (typically) inheritance or template-based mess that's "fewer lines of code, so better."

    I've never regretted using composition (and thus having a few extra lines and a little bit more boilerplate) over inheritance. I've similarly never regretted breaking down a function into smaller functions (even if it introduces more lines of code). I've also never regretted generalizing code that's actually general (e.g., a sum N elements function is always a sum N elements function).

    The most important thing with all of these best practices though is "apply it if it makes sense." If you're writing some code and you've got a good reason to have a function that does multiple things ... just write the function, don't bend over backwards doing something really weird to "technically" abide by the best practice.
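As a quick sketch of the composition-over-inheritance point above (names are invented for illustration): instead of subclassing to add a behavior, hand the object a collaborator that owns that single responsibility.

```python
# Hypothetical example: a Report *has a* formatter rather than *is a* formatter.
# Each piece does one thing, and swapping the format never touches Report.

class CsvFormatter:
    def format(self, rows):
        # Single responsibility: turn rows into CSV text, nothing else
        return "\n".join(",".join(str(v) for v in row) for row in rows)

class Report:
    def __init__(self, formatter):
        self.formatter = formatter  # injected collaborator; easy to swap/test

    def render(self, rows):
        return self.formatter.format(rows)

report = Report(CsvFormatter())
print(report.render([("a", 1), ("b", 2)]))  # a,1
                                            # b,2
```

A few extra lines of boilerplate, but adding a `JsonFormatter` later is a new class, not a rewrite of an inheritance hierarchy.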

    Dark_Arc , (edited )

    I agree; I prefer a "hammer and chisel" strategy. I tend to leave things a little less precisely organized/factored earlier in the project and then make some incremental passes to clean things up as it becomes clearer that what I've done handles all the cases it needs to handle.

    It's in the same vein as "don't prematurely optimize."

    Minimizing responsibilities of individual functions/classes/components is the only thing that I take a pretty hard line on. Making sure that I can reason about the code later and objectively say simple sentences like "given X this does Y." I want all the complex pieces to be isolated into their own individual smaller pieces that can be reasoned about.

    All of the code bases I've been in where I go "oh my god, why", the typical reason has been that that's not true; when I'm in a function, I don't know what it does because it does a lot of things depending on different state flags.

    Dark_Arc , (edited )

    This only leads to bad code when people get too afraid to refactor things in light of the new requirements. Which sadly happens far too often. People seem to like to keep what was there already and follow existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I have written before and others have added to over time.

    Yup, this is part of what's led me to advocate for SRP (the single responsibility principle). If you have everything broken down into pieces where the description of the function/class is something like "given X this function does Y" (and unrelated things thus aren't unnecessarily coupled), it makes reorganization of the higher level logic to fit the current requirements a lot easier.

    For instance I see this a lot in DRY code. While the rules themselves are useful to know and apply they are too easily over applied removing any benefit they originally gave and result in overly abstract code. The number of times I have added duplication back into code to remove a layer of abstraction that was not working only to maybe reapply it in a different way, often keeping some duplication.

    Preach. DRY is IMO the most abused/misunderstood best practice, particularly by newer programmers. DRY is not about compressing your code/minimizing line count. It's about ... avoiding things like writing the exact same general algorithm (e.g., a sort) inline in a dozen places. People are really good at finding patterns and "overfitting", making up abstractions that make no sense.
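To make that concrete with a made-up example: DRY applied well extracts logic that is *genuinely* general, where the meaning doesn't depend on the call site. Two loops that merely look similar would be left alone.

```python
# Sketch of DRY done right: the repeated logic is a real algorithm (clamping),
# so extracting it is safe and every caller now states intent instead of
# hand-writing the same expression.

def clamp(value, low, high):
    """Truly general: means the same thing wherever it's called."""
    return max(low, min(value, high))

# Before: `max(0, min(x, 100))` copy-pasted at every call site.
volume = clamp(130, 0, 100)      # 100
brightness = clamp(-5, 0, 100)   # 0
print(volume, brightness)
```

Contrast that with merging two superficially similar loops behind a flag parameter: line count drops, but each call site now needs a qualifier to explain, which is the over-fitted abstraction being criticized above.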

    Dark_Arc ,

    Hmmm... That's true, my rough litmus test is "can you explain what this thing does in fairly precise language without having to add a bunch of qualifiers for different cases?"

    If you meet that bar, the function is probably fine/doesn't need to be broken up further.

    That said, I don't particularly care how many functions I have to jump through or what their line count is because I can verify "did the function do the thing it says it's supposed to do?" after it's called in a debugger. If it did, then I know my bug isn't there. If it didn't, I know where to look.

    Just like with commits, I'd rather have more small commits to feed through git bisect than a few larger commits, because it makes identifying where/when a contract/test case/invariant was violated much more straightforward.

    Dark_Arc ,

    Yeah, this one is ridiculous. Some systems have bounced my password ... literally the one stored in a password manager ... and gaslit me that I "must have forgotten my password."

    Dark_Arc ,

    Programming is mostly copy&paste

    I don't know what y'all are working on but these comments always scare me ...

    Dark_Arc ,

    Be careful with that one. I'm not sure about your experience level, but a mistake newer (and some more experienced) programmers often make is taking DRY too far.

    It's easy to "dry" something up to the point where it's spaghetti that's overly clever about how it reduces lines of code, resulting in some crazy inheritance hierarchy even you (the author) are afraid to change a few years down the road.

    There are of course other times when someone just copy and pasted e.g. sort logic all over the code base ... but that sort of thing is relatively rare

    Dark_Arc ,

    I work on compilers (we can't/don't even have access to the C++ standard library in my case)... Most of the time, Google can't help me ⚰️😅

    It was definitely a bit more copy and paste when I was working on web applications... But even then, most of the code I was writing was fairly novel / more application and database architecture problems than tying libraries together.

    Dark_Arc , (edited )

    Never trust the client, especially with information the player shouldn't have right now.

    This is a big part of the problem, but it's not the only problem. If you do all of that stuff right, you can't build a responsive first person shooter. There's some level of trust you need to put in the client.

    Disclaimer: This is based on my experience playing shooters and as a programmer. I have not worked on anticheat systems hands on.

    We see less and less of the "god mode" hacks where players can send the packet for a carpet bomb and the server just blindly trusts it. Or the ludicrous spinbots that spin at an extreme speed and headshot anyone that comes into line of sight.

    What we're seeing is increasingly sophisticated cheats that provide "buffs" to a player's ability. An AI enhanced aimbot that when you click gently nudges your hand to "auto correct" the shot and then clicks is borderline impossible to detect server side. It looks just like a player moved the mouse and fired.

    The "best" method to prevent these folks from cheating seems to be to detect the system or the game has been tampered with.

    Maybe the way to deal with that is to just let it happen and deal with smurfs down ranking... So these "soft" cheaters just exist in the "pro tier" where the pros can possibly stand a chance.

    One strategy I have seen that I wish more developers would do is sending "honeypot" information to the game client (like a player on the other side of the wall that isn't really there but an aimbot or a wall hack might incorrectly expose).
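The honeypot idea can be sketched server-side in a few lines. Everything here (entity names, scores, thresholds) is invented for illustration; real anticheat pipelines are far more involved:

```python
# Hypothetical sketch: the server injects a fake player into this client's
# world state. It's positioned so no legitimate player could see or target
# it, so any interaction with it is a strong cheating signal.

HONEYPOT_IDS = {"ghost_7"}  # fake entities sent only to this client

def score_shot(target_id: str, suspicion: int) -> int:
    """Bump the client's suspicion score when it targets a honeypot."""
    if target_id in HONEYPOT_IDS:
        suspicion += 10  # only an aimbot/wallhack should ever "see" this
    return suspicion

s = 0
s = score_shot("player_42", s)  # legitimate target: no change
s = score_shot("ghost_7", s)    # honeypot hit: flag accumulates
print(s)  # 10
```

The nice property is that it catches the *effect* of a wallhack/aimbot without needing to detect the cheat software itself.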

    Maybe the increasing presence of hardware cheats will result in new strategies that make these things unnecessary. I keep wondering if a TPM could be used to solve this problem someday... But I'm not sure exactly how/we may need faster TPMs.

    Dark_Arc ,

    I think a part of it is the difference between losing to something "reasonable" vs "unreasonable."

    If you're clearly really bad at the game when we're in a fight with line of sight, but somehow you keep picking off my teammates through walls... that's the kind of thing where cheating really starts to get annoying.

    They may still be on the same skill level overall, but for specific parts of the game they have superpowers, and it just feels ridiculous.

    Smurfing is also a real issue because cheaters seem to overlap with trolls that just want everyone else to have a bad time, so they'll spend a bunch of time down ranking, so they can spend a little time giving a lot of players a bad day.

    Dark_Arc ,

    That's all very fair

    Dark_Arc ,

    TPM is a joke in my mind

    I thought this at first as well, but they have an interesting property.

    They have a manufacturer-signed private key. If you get the public key from the manufacturer of the TPM, you can actually verify that the TPM, as it was designed by the manufacturer, performed the work.

    That's a really interesting property because for the first time there's a way to verify what hardware is doing over the network via cryptography.
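The verification shape is just an ordinary asymmetric signature check. Here's a toy illustration using textbook RSA with tiny, well-known demo parameters (this is NOT real crypto and real TPM attestation involves certificate chains and proper hashing; the numbers are only to show who needs which key):

```python
# Toy attestation sketch: the "TPM/manufacturer" side holds the private
# exponent d; anyone on the network only needs the public pair (n, e).

n, e, d = 3233, 17, 2753  # classic textbook RSA demo values (p=61, q=53)

def sign(message_hash: int) -> int:
    return pow(message_hash, d, n)      # only the key holder can do this

def verify(message_hash: int, signature: int) -> bool:
    return pow(signature, e, n) == message_hash  # anyone with (n, e) can check

quote = 123                   # stand-in for a hash of what the hardware measured
sig = sign(quote)
print(verify(quote, sig))     # True: the hardware really produced this claim
print(verify(quote + 1, sig)) # False: a tampered claim fails verification
```

That's the interesting property: a remote party can check, cryptographically, that a specific piece of hardware vouched for a specific measurement.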

    Dark_Arc ,

    You don’t necessarily need to detect the cheat itself, you can look at things like players having suddenly higher kill rates and put them into a queue for observation by either more advanced (more expensive) automation to look for cheating or eventually involve a human in the loop.

    That's true, if the player suddenly has higher kill rates. However, that doesn't work if they've been using the cheat from the start on that account. A sufficiently advanced AI-powered aimbot would also be nearly indistinguishable from a professional player. Kind of similar to how Google created the CAPTCHA that uses mouse movement ... but had to go back to (at least in some cases) the additional old school CAPTCHA.
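The observation-queue idea boils down to simple outlier detection. A minimal sketch with made-up numbers and a made-up threshold (real systems would use many signals, not just kill rate, and note the caveat above: an account that cheated from day one never shows a jump):

```python
# Flag accounts whose kill rate is a statistical outlier vs. the population,
# then send them for closer (automated or human) review. Pure stdlib.
from statistics import mean, stdev

def flag_outliers(kill_rates, threshold=2.0):
    """Return indices of players more than `threshold` std devs above mean."""
    mu, sigma = mean(kill_rates), stdev(kill_rates)
    return [i for i, k in enumerate(kill_rates)
            if sigma > 0 and (k - mu) / sigma > threshold]

rates = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 9.5]  # one suspiciously hot account
print(flag_outliers(rates))  # [6]
```

This only produces *candidates* for observation; it can't distinguish a cheater from a genuinely exceptional player on its own.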

    Dark_Arc ,

    Hmmm... I was going to say no because it's asymmetric crypto, but you're right: if you're somehow able to extract the signed private key, you can still lie... Good point.

    Dark_Arc ,

    I agree with this, but there are ways to make your "source code" not a file that you will modify.

    For instance you can have a file that's imported/included for configuration purposes that you yourself don't author... And I think that's okay.

    One of my favorite configuration languages for Python projects is actually just Python. It's remarkably nice. It's like having a YAML file you can script.
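A tiny sketch of what "configuration is just Python" buys you (the config keys and values here are hypothetical): instead of parsing a static format, the project executes a settings file in its own namespace and reads plain names out of it.

```python
# "Like a YAML file you can script": the config file is ordinary Python, so
# it can compute values instead of hard-coding them.

config_text = """
replicas = 2 * 3
hosts = [f"node{i}" for i in range(replicas)]
""".strip()

namespace = {}
# In real use this would be: exec(open("settings.py").read(), namespace)
exec(config_text, namespace)

print(namespace["replicas"])    # 6
print(namespace["hosts"][:2])   # ['node0', 'node1']
```

The usual caveat applies: a config file that can execute arbitrary code should only come from someone you'd let run code anyway.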

    Dark_Arc ,

    Okay, that's pretty cool, not going to lie. Granted, I'm not entirely sold on the idea of having a config format that's aimed at generating other config formats.

    That feels like (in most cases) a recipe for things getting out of sync between the latest version of the Pkl and, e.g., the JSON.

    Dark_Arc ,

    I want one even more badly for thunderbird. It feels like such an obvious thing that's just ... missing.

    Standard Notes: what about the "don't put all your eggs in one basket" rule?

    If Standard Notes is now owned by Proton, doesn't that contradict this principle? I have a Proton email account, but I don't want it linked to my Standard Notes account. I don't strongly trust companies that offer packaged services like Google or Microsoft...

    Dark_Arc ,

    Q: Can I get the information I put into Proton back out and move to another service without paying Proton any money or extreme hardship?

    A: Yes.

    Dark_Arc , (edited )

    instead of just using an open protocol like XMPP they opted for their closed thing in order to lock people into their apps

    That's just not true, you're severely misinformed on this.

    Proton took the established practice of PGP encrypted email and put it in a nice package. That's why you can add public keys and just message somebody that's using Thunderbird.

    There is no "open protocol for end-to-end encrypted email"; XMPP is not applicable here. There's no "IMAP for PGP", there's just IMAP, so they made a bridge so you can use IMAP even if your mail client doesn't support PGP.

    Could they have made an IMAP server that returns the PGP emails and requires your mail client to handle the decryption? Yes. However, that goes against a major selling point of the product which is that it manages all that encryption for you (like a password manager). Nobody in their right mind would use that.

    This isn't some matter of privacy Kool-Aid and fanboyism; they did the open, interoperable thing. You can even (as an example use case), if you're a new customer who was doing PGP email on your own, upload your existing PGP key and use that with Proton if you don't want to change the PGP public key people use to send you email.

    Edit: Perhaps you've been confused by some falsehoods coming from Tutanota or confused the two https://proton.me/blog/proton-vs-tuta-encryption

    Dark_Arc ,

    Because you're paying them so you don't have to do that. Why would you pay them a premium if you're just going to do it yourself anyways?

    Also, that costs money to develop, maintain, and run, which takes money/resources away from things most customers care about.

    There aren't red flags here, everything is open source, this is all verifiable information. You're just refusing to accept that.

    Dark_Arc ,

    The phrase "Jack of all trades, master of none" really only applies to people. A company can just hire more people when it has more products.

    Google's issue is not that they're "big" it's that they've failed to truly innovate and invest in anything in years. The current leadership kills anything that isn't an instant money maker despite the majority of the company's profitable products taking years to become profitable. They're also in a weird spot because their "magic" was always free services in exchange for advertising money and that's a model that's come under attack and been replicated to death by competitors.

    Dark_Arc ,

    They can also lobby more effectively for privacy respecting legislation and privacy rights. I don't like lobbying, but so long as it's around, it would be nice to have a big privacy company that's as invested in that as the average privacy enthusiast.

    Dark_Arc ,

    It's more like encrypted Evernote.
