@dan@upvote.au avatar

dan

@dan@upvote.au

Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
d.sb
Mastodon: @dan

This profile is from a federated server and may be incomplete.

dan ,

Isn't there some way to force Electron to use Wayland?
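There are the Ozone flags, at least. A sketch, since flag support varies by Electron/Chromium version, and the app name below is just a placeholder:

```shell
# Ask Electron's Ozone layer to use Wayland directly (flag support varies by version)
some-electron-app --enable-features=UseOzonePlatform --ozone-platform=wayland

# Newer Electron builds can auto-detect the session type instead:
some-electron-app --ozone-platform-hint=auto
```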

dan ,

NVIDIA is likely to be stable on Wayland next month.

Do you have a source for that?

dan ,

Thanks for the link :)

dan ,

Unrelated to your question but which firewall app do you use?

dan OP ,

That sounds reasonable to me.

It wouldn't help with the URL though. Maybe I could write a script that uploads the image then puts the right URL on the clipboard, and "share to" the script.
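A rough sketch of what that script might look like. Everything here is hypothetical: the upload endpoint, its plain-text URL response, and the clipboard tool all depend on the actual setup:

```shell
#!/bin/sh
# Hypothetical sketch: upload the shared image, then copy the resulting URL.
# img.example.com and its response format are assumptions, not a real service.
url=$(curl -sf -F "file=@$1" https://img.example.com/upload) || exit 1
printf '%s' "$url" | wl-copy    # wl-copy on Wayland; xclip -selection clipboard on X11
```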

dan , (edited )

My understanding is that 64-bit time fixes are only needed for 32-bit architectures, based on Debian's notes about the time_t migration project: https://wiki.debian.org/ReleaseGoals/64bit-time. 64-bit apps already have a 64-bit time_t, at least in Debian (and I assume Ubuntu too) with their standard compiler settings. It's mostly for 32-bit ARM CPUs. 64-bit architectures still need to be tested since build/code changes can unintentionally affect them too.

dan ,

11th gen is just a few years old. Very different to trying to run something on a Core 2 Duo which is probably close to 20 years old.

dan , (edited )

I remember them being exactly the same many years ago

This is one of the reasons I like Debian. They don't change stuff unless there's a good reason to. Network configuration on my Debian servers is in /etc/network/interfaces, in mostly the same format it was in 20 years ago (the only difference today is that I'm dual-stack IPv4/IPv6 everywhere).

dan ,

"ButterFS" is one of the accepted pronunciations though.

dan ,

What type of data are you looking for? Does http://www.nirsoft.net/utils/network_usage_view.html suit your use case? There's similar data somewhere in the modern settings app too.

There's also performance counters for real time data (bytes sent and received): https://learn.microsoft.com/en-us/windows-server/networking/technologies/network-subsystem/net-sub-performance-counters. You can use these in any tool that supports performance counters. There's an app that comes with Windows called Performance Monitor that can read these counters.
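For instance, the stock typeperf tool can sample those counters from a command prompt (a sketch; the counter paths below are the English names and vary by Windows locale):

```shell
:: Windows cmd: sample every NIC's send/receive counters once a second, 5 samples
typeperf "\Network Interface(*)\Bytes Received/sec" "\Network Interface(*)\Bytes Sent/sec" -si 1 -sc 5
```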

dan ,

Did you try the first app I linked to? I can't try it since I'm away from my computer for a few days.

Torrenting exposes your public IP. In a country where the government doesn't care, does that pose a risk?

I honestly don't believe I will have any legal trouble because I don't do anything like cp or worse, I just pirate media I like, not even porn. But across users of communities, or on public trackers, is IP exposure something to be concerned about?

dan , (edited )

The majority of VPNs are self-hosted. The most common use cases for a VPN are things like connecting to an employer's network when working from home, or connecting to your home server when away from home.

Commercial VPNs that route all your traffic through them aren't the usual VPN use case. They've become common mostly because people don't know how to use proxies, and they make it easy to ensure everything is routed via the VPN. A lot of use cases that people use VPNs for could really be solved with proxies.

dan ,

If you're self-hosting a VPN that you're using for piracy, you'll still have a unique IP associated with you, and your hosting provider knows that you're using that IP. Doesn't that defeat the purpose?

dan , (edited )

If you do use a VPN for torrenting, ensure it supports port forwarding. You won't be able to seed if the provider doesn't allow port forwarding. Sharing is caring :)

AirVPN is currently one of the best VPNs that support port forwarding, but there's some others that do, too. NordVPN doesn't support it. There's an old list here: https://old.reddit.com/r/VPNTorrents/comments/s9f36q/list_of_vpns_that_allow_portforwarding_2022/

dan ,

How though? People that want the torrent can't connect to you if you're not forwarding a port.

dan ,

Do seeds actively connect to peers even when the download is complete? I haven't used BitTorrent in a very long time, but it didn't used to do that.

dan ,

A proxy is no less secure than a VPN, assuming it's using encryption like TLS. It's not as good for torrents since you can't port forward, but fundamentally, people that use commercial VPNs are using them just like a proxy. Some providers like NordVPN do offer HTTPS proxies in addition to their VPN service.

dan ,

A VPN can also have a faulty config. Everything depends on correct configs :)

dan ,

The first time I tried another programming language, I was confused as to how to write code without using GOTO.

dan ,

Unmanic is way easier to understand than Tdarr. I use it to transcode DVR recordings made using Plex and an HDHomeRun tuner. Digital TV uses MPEG2, which has pretty large file sizes.

dan ,

What hardware doesn't support H.265?

dan , (edited )

How many of those are you streaming video to, though?

Intel iGPUs have supported H265 since 7th gen, which is 8 years old now (released in 2016). Nvidia added support the same year, starting with the GTX1050. Even the Raspberry Pi 4 supports hardware-accelerated H265.

dan ,

Google Toots and Google Toots (New)

dan ,

don't know if I can run & debug .net 8 applications on a linux machine

The .NET SDK is cross-platform. Try installing it, then run dotnet run in the same directory as your project file (.csproj).
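As a quick smoke test once the SDK is installed (a sketch; the project name is arbitrary):

```shell
# Scaffold and run a minimal console app with the cross-platform SDK
dotnet new console -o HelloLinux
cd HelloLinux
dotnet run    # the default console template prints "Hello, World!"
```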

Most .NET APIs are cross-platform, but there's a few that still only work on Windows, and it's also possible to write code that only works on Windows, like using P/Invoke to call a Win32 API.

dan ,

Usually I end up moving back to Windows because of font rendering. I far prefer Windows cleartype font rendering on 2160p desktop screens

I'm surprised this is still an issue. I remember it being an issue when I used desktop Linux 15 years ago. At the time, Linux devs didn't want to risk accidentally infringing on Microsoft's ClearType patents, so the text smoothing techniques had to be completely different.

Those patents all expired in 2018.

dan ,

when I switch back to windows after using Linux/Mac then it feels like someone fixed the focus and de-blurred everything.

I haven't used desktop Linux in a while, but I feel the same about macOS font smoothing. It's way too blurry. I'm not sure why people like it.

Linux 6.10 To Merge NTSYNC Driver For Emulating Windows NT Synchronization Primitives ( www.phoronix.com )

Going through my usual scanning of all the "-next" Git subsystem branches of new code set to be introduced for the next Linux kernel merge window, a very notable addition was just queued up... Linux 6.10 is set to merge the NTSYNC driver for emulating the Microsoft Windows NT synchronization primitives within the kernel for...

dan ,

I remember running FL Studio using WINE 15 years ago and it worked fine.

dan ,

still do the scripting in Bash for portability reasons,

For what it's worth, Debian and most of its derivatives use dash (a Linux port of ash) instead of bash for /bin/sh. It's ~4x faster and uses much less RAM than Bash. Usually the only scripts that use Bash are ones that aren't POSIX compliant or that use Bash-specific features.
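A quick illustration of the difference (a sketch): this runs identically under dash, bash, or any other POSIX sh, while the constructs noted in the comments would break under dash:

```shell
#!/bin/sh
# POSIX-compliant: works the same in dash, bash, and other POSIX shells
greet() { printf 'hello %s\n' "$1"; }
greet world
# Bash-only features that fail under dash include [[ ... ]] tests,
# arrays like arr=(a b c), and ${var^^} case conversion.
```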

dan , (edited )

Of course Apple collect data. The reason they wanted to prevent other apps from collecting data was so that only they could use that data, giving their ad network an advantage over the others.

Yes, they have an ad network, and want to significantly expand it.

dan ,

It's an old version, but even today you can still make new apps that use it, using modern development tools (the latest version of Visual Studio). It's missing a large number of newer features, but all the essential stuff is available.

dan ,

Wow I totally forgot about the compact version. I wrote a few C# apps for Windows Mobile using the compact framework and it worked pretty well. I posted some to XDA-Developers too (e.g. https://xdaforums.com/t/app-htcbutton-beta-change-function-of-wired-headset-button-skip-songs-via-button.492947/).

dan ,

Nope I lost interest hahaha

dan ,

one point registration for multiple communities,

Federation, or at least some form of single sign-on with arbitrary providers (like we used to do with OpenID), is a better way of solving this.

dan ,

I genuinely don't understand why some open source communities rely so heavily on Discord.

dan ,

now to begin the slow search for another private community for the friend group to very slowly migrate to.

Just don't pick another proprietary platform again.

dan , (edited )

In order to make it into a Discord or Zoom competitor, you would need to solve far higher bandwidth things like HD video and low latency audio, and both of those are fundamentally very different things for a server to handle as compared to high latency short text messages.

A large number of Discord servers just use text.

For video, maybe integrate into something that already exists, like Jitsi? Instead of trying to build one single app that handles everything, maybe it would be nice to have a suite of apps that all work together and can all use the same login.

A lot of video conferencing systems are already mostly peer-to-peer, at least for enterprise apps. Skype was originally peer-to-peer too. NAT traversal is usually handled by STUN servers. There are some downsides (for example, peer-to-peer connections reveal users' IP addresses to each other), but you could proxy everything through a TURN server to solve that.
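As a rough sketch, a minimal coturn configuration is enough to run your own TURN relay (all values below are placeholders, and production setups should use a shared secret instead of static credentials):

```
# /etc/turnserver.conf (coturn) -- minimal relay setup, values are examples
listening-port=3478
realm=turn.example.com
# static credentials for testing only
user=alice:changeme
fingerprint
```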

Peer to peer is the best way to implement end-to-end encrypted communication.

Having said that, very large groups can benefit from a client-server model, like what Zoom does.

dan ,

I've never tried Matrix but I've heard good things about it.

dan ,

Thanks! I'll have to see if there's Docker containers available. Ansible is definitely doable too, but I prefer Docker. I'll stick it on the same server I'm running Lemmy and Mastodon on :)

dan , (edited )

most people are on discord

There's a lot of people on Discord (around 200 million monthly active users) but it's still the smallest out of all the major messaging services that support group chats. For example, Telegram has over double the number of users, and WhatsApp has 10x the users.

For open source projects in particular, something that integrates with GitHub and GitLab login (like Gitter, which is now powered by Matrix) is a better choice, as developers are practically guaranteed to have one of those accounts.

dan ,

Ahh... Interesting!

Do you know how much RAM it needs? I have a spare VPS with 9GB RAM - is that sufficient? I could run it in a VM on my home server instead, too.

dan , (edited )

Just tried out that playbook to set up a staging server, and it works pretty well.

I feel like it's a bit too magical though. I like knowing how all the software I'm using is installed and configured, and introducing another layer of abstraction makes that harder. I have particular ways things like my web server (Nginx), database servers, Let's Encrypt (certbot), etc. are configured and want to keep things that way. I think I'll just use the Ansible playbook for the staging server, and set up the real server using the Docker containers directly, based on documentation from the upstream projects (Synapse, etc.).

It looks like they have both Docker containers and Debian packages available, so I'll have to see if it's worth using the Debian packages instead.

dan ,

I want to keep using self-signed certs (my server is only reachable internally and I do not want to expose it to the internet). And the new server they use (I forgot which) didn't really have that option.

If you have your own domain name, you can get Let's Encrypt certificates for internal servers by using DNS challenges instead of HTTP challenges. I use subdomains like whatever.int.example.com for my internal systems.
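With certbot, for example, a DNS challenge looks something like this (a sketch; it assumes the Cloudflare DNS plugin, and the credentials path and hostname are examples):

```shell
# Issue a cert for an internal-only host via a DNS-01 challenge.
# No inbound HTTP access is needed; the plugin creates a TXT record instead.
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d whatever.int.example.com
```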

Of course, it's possible that the Ansible playbook doesn't support that...

Thanks for the note about Python and the Debian packages. That's a good point. I'll definitely use the Docker containers.

dan ,

ICANN doesn't run country code TLDs. I bought it through an aftermarket domain sale site (like Sedo).

I've actually got three of them. d.sb, d.sv and d.ls.

d.sb was around $4000 if I remember correctly.

KDE Plasma 6.0, and KDE Gear 24.02 released ( kde.org )

Today the KDE Community is announcing a major new release: Plasma 6.0 and Gear 24.02. KDE Plasma is a modern, feature-rich desktop environment for Linux-based operating systems. Known for its sleek design, customizable interface, and extensive set of applications, it is also open source, devoid of ads, and makes protecting...

dan ,

The default software was one of the main reasons KDE was created. The original creator didn't like that every app on their system seemed to use a different UI toolkit, and wanted a consistent appearance across everything.
