nous

@nous@programming.dev

nous ,

So, if you just use the system API, then this means logging with syslog(3). Learn how to use it.

This is old advice. These days just log to stdout; there is no need for your process to understand syslog. systemd, containers, and modern systems just capture stdout and forward it wherever it needs to go. Then all applications can be simple and it is up to the system to handle them in a consistent way.

NOTICE level: this will certainly be the level at which the program will run when in production

I have never seen anyone use this log level, ever. Most use or default to Info or Warn. Even the author later says

I run my server code at level INFO usually, but my desktop programs run at level DEBUG.

If your message uses a special charset or even UTF-8, it might not render correctly at the end, but worst it could be corrupted in transit and become unreadable.

I don't know if this is true anymore. UTF-8 is ubiquitous these days and I would be surprised if any logging system could not handle it, or at least any modern one. I am very tempted to start adding some emoji to my logs to find out though.

User 54543 successfully registered e-mail user@domain.com

Now that is a big no no. Never ever log PII data if you don't want a world of hurt later on.

2013-01-12 17:49:37,656 [T1] INFO c.d.g.UserRequest User plays {'user':1334563, 'card':'4 of spade', 'game':23425656}

I do not like that at all. The message should not contain JSON. Most logging libraries let you add context in a consistent way and can output the whole log line as JSON. Having escaped JSON inside JSON because you decided to add JSON manually is a pain; just use the tools you are given properly.
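
As a rough sketch of what I mean (using Rust's tracing / tracing-subscriber crates as an example of such a library, not something from the article): the context goes in as fields and the library handles the JSON encoding of the whole line.

```rust
// Cargo.toml (sketch): tracing = "0.1",
// tracing-subscriber = { version = "0.3", features = ["json"] }
use tracing::info;

fn main() {
    // Emit each log line as a single JSON object on stdout.
    tracing_subscriber::fmt().json().init();

    // Context goes in as structured fields, not as JSON pasted into the message text.
    info!(user = 1334563, card = "4 of spade", game = 23425656, "User plays");
}
```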

Add timestamps either in UTC or local time plus offset

Never log in local time. DST fucks shit up when you do that. Use UTC for everything and convert when displaying if needed, but always store dates in UTC.
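
A tiny sketch of what that looks like in practice (Rust with the chrono crate, which is my choice of library, not something from the article):

```rust
use chrono::{Local, Utc};

fn main() {
    // Store and log the timestamp in UTC...
    let stored = Utc::now();
    println!("stored: {}", stored.to_rfc3339());

    // ...and only convert to a local timezone at display time.
    println!("shown:  {}", stored.with_timezone(&Local).to_rfc3339());
}
```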

Think of Your Audience

Very much this. I have seen far too many error messages that give fuck all context to the problem and require diving through source code to figure out what the hell went wrong. Think about how logs will be read without the source code at hand.

nous ,

You lose information when DST kicks in - which is not great. It is trivial to convert to any timezone, so there is little point in logging in anything but UTC, and it keeps everything consistent. Especially when comparing dates from servers in different timezones.

Are there any WYSIWYG html editors? just curious

Hello, I was looking for a WYSIWYG HTML editor I could use for my personal website, preferably just as a simple open source desktop program on Linux (though anything else is fine). I DID find something called KompoZer but I was wondering if there are any other ones, thanks

nous ,

(I don't know why jamstack has taken over that site, but the list itself seems to be intact.)

Not really taken over, more just a rebranding. Both are owned by Netlify. It started off as a list of static site generators you could use with Netlify (aka all of them they could find), but then they rebranded the site and gave it a fancy name, like all the other web stacks you have these days.

nous ,

Systemd does a lot of things that could probably be separate projects,

I don't get the hate for this - Linux is full of projects that do the same thing: coreutils, busybox, KDE, GNOME, different office suites, even the kernel itself. It is very common for related projects to be maintained together under the same project/branding with various levels of integration between them. But people only really seem to hate on systemd for this...

nous ,

What standards? The old init systems were a loose collection of shell scripts that were wildly different on every distro. Other tools like sudo also broke the established standards of the time; before it you had to log in as root with the root password.

Even GNOME and KDE have their own theming standards as well as other ways of doing things. Even NetworkManager is its own standard, not following things that came before it. Then there are Flatpak, Snaps and AppImages. Not to mention deb vs rpm vs pacman vs nix package formats. Loads of things in Linux userland have broken or evolved the standards of olden times.

nous ,

Except desktop environments - they are far from a loose collection of simple stuff. They coordinate your whole desktop experience. Apps need to talk to them a lot, and often in ways specific to a single DE. Theming applications is done differently for every toolkit there is, startup applications (before systemd) are configured differently, global shortcuts are configured differently by each one... If anything it is something you interact with far more than systemd, and it has far more inconsistencies between each one. Yet few people complain about this as much as they complain about systemd.

Systemd is a giant mess of weirdly interdependent things that used to be simple things.

They used to be simple things back when hardware and the way we use computers were much simpler. Nowadays hardware and computers are much more dynamic and hotpluggable and handle a lot more state that needs to persist and be kept track of. https://www.youtube.com/watch?v=o_AIw9bGogo is a great talk on the subject and covers why systemd does what it does.

nous ,

Not technically. unetbootin and similar tools like rufus take the USB, partition it, and copy the contents of the ISO to it after manually setting up a bootloader on it. This is not required for most Linux ISOs, though, where you can just cp or dd the image directly to the USB, as they already have all of that set up on the image. But other ISOs - Windows ones, I believe - have a filesystem on them that is not vfat, so they cannot be directly copied. Although these days for Windows you just need to format the USB as vfat and copy the contents of the Windows ISO (aka the files inside it, not the ISO filesystem) to the filesystem.

I tend to find unetbootin and rufus break more ISOs than they actually help with, though. Personally I find ventoy is the better approach overall: just copy the ISO as a file to the USB filesystem (and you can copy multiple ones as well).

nous ,

Whatever language you choose, you might also want to look at the htmx JS library. It lets you add better interactivity in your HTML snippets without actually needing to write JS. It basically lets you do things like: when you click on an element, it can make a request to your server and replace some other element with the contents your server responds with - all with attributes on HTML tags instead of writing JS. This lets you keep all the state on the backend and write more backend logic without relying only on full page refreshes to update small sections of the page.
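
A rough sketch of the kind of markup meant here (the endpoint and element ids are made up for illustration):

```html
<!-- htmx itself is pulled in as a script tag; no hand-written JS below -->
<script src="https://unpkg.com/htmx.org"></script>

<!-- On click, GET /cart/count from the server and swap the response
     into the #cart-count element -->
<button hx-get="/cart/count" hx-target="#cart-count" hx-swap="innerHTML">
  Refresh cart
</button>
<span id="cart-count">0</span>
```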

For a backend language I would use Rust, as that is what I am most familiar with now and enjoy using the most. Most languages are adequate at serving backend code though, so it is hard to go wrong with anything you enjoy using. Though with Rust I tend to find I have fewer issues when I deploy something, as opposed to other languages which can throw all sorts of runtime errors because they let you ignore the error paths by default.
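
To illustrate what "ignoring the error paths" means, here is a minimal sketch (the file name and helper are made up): in Rust every fallible step shows up in the types, so forgetting to handle one is a compile error rather than a runtime surprise.

```rust
use std::fs;

// Every failure point is visible in the signature; the compiler will not
// let the caller pretend this cannot fail.
fn load_port(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
    let raw = fs::read_to_string(path)?;   // I/O error must be handled or propagated
    let port = raw.trim().parse::<u16>()?; // parse error likewise
    Ok(port)
}

fn main() {
    match load_port("port.txt") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad config: {e}"),
    }
}
```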

nous ,

The Linux directory system is a single tree from the root /. You can mount any filesystem to any directory inside it to extend it and have all writes to that location be handled by that FS. This is all irrespective of what filesystem is present at that location in the tree. It does not matter if it is BTRFS, ext4 or anything else; mounting a filesystem into the directory structure is handled by the kernel separately from the FS implementation. So, yes, you can mount any partition that contains a filesystem to /home/user no matter what you have done with / or even /home.

But any writes to that location will be handled by the filesystem driver for that partition. So any subvolumes or anything else the main filesystem/partition has won't be available inside that directory. You can have a BTRFS filesystem mounted there from a separate partition if you want. Though a big benefit of BTRFS is the ability to use subvolumes instead of full partitions, so you are not segregating the space on the disk (ie, any subvolume can use what space it requires and you won't have one running out of space because you didn't make it large enough). So if you are going for BTRFS subvolumes I would just have one main partition and use subvolumes to split up the space if you wish. Though really the only benefit to that is you can snapshot them separately, and I think you can set different quotas and settings on each one.

nous ,

Subvolumes are integral to btrfs. You cannot have a layer like LUKS between them. You can encrypt the whole partition with LUKS underneath btrfs, or you can encrypt specific directories on top of btrfs with something like encfs or truecrypt, though doing so loses some of the benefits of btrfs as it can no longer see your individual files.

If you wanted just /home encrypted with LUKS it would need to be a separate partition (which you could then have btrfs with subvolumes on). Though IMO that gets a bit complicated - I would just opt for encrypting everything (except boot) on the root partition and have one btrfs fs on that partition with as many subvolumes inside it as you like.

nous ,

Ubuntu is a fork of unstable Debian packages. You don’t want unstable on your server!

Unstable does not mean it crashes all the time. What makes packages unstable on Debian is that they can change and break API completely. But guess what: Ubuntu freezes the versions for their release and maintains their own security patches, completely mitigating that issue.

There are other reasons you might not want to use Ubuntu on a server but package version stability is not one of them.

nous ,

I don't know, it does have official in the name tag. People would not lie about that. Right?

nous ,

“Best practices” might help you to avoid writing worse code.

TBH I am not sure about this. I have seen many "best practices" make code worse, not better. Not because the rules themselves are bad, but because people take them as religious gospel and apply them to every situation in the hope of making their code better, without actually checking whether they do.

For instance I see this a lot with DRY code. While the rule itself is useful to know and apply, it is too easily over-applied, removing any benefit it originally gave and resulting in overly abstract code. I have lost count of the number of times I have added duplication back into code to remove a layer of abstraction that was not working, only to maybe reapply it in a different way, often keeping some duplication.

Suddenly requirements change and now it’s bad code.

This only leads to bad code when people get too afraid to refactor things in light of the new requirements. Which sadly happens far too often. People seem to like to keep what was already there and follow existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I had written before and others have added to over time.

nous ,

Refactoring should not be a separate task that a boss can deny. You need to do feature X; feature X benefits from reworking some abstraction a bit; so you rework that abstraction before starting on feature X. And then maybe refactor a bit more after feature X, now that you know what it looks like. None of that should take substantially longer, and it saves vast amounts of time later on compared to not including it as part of the feature work.

You can occasionally squeeze in a feature without reworking things first if time is tight, but you will run into problems if you do this too often and start thinking of refactoring as a separate task from feature work.

nous ,

Yup, this is part of what’s lead me to advocate for SRP (the single responsibility principle).

Even that gets overused and abused. My big problem with it is deciding what counts as a single responsibility. It is poorly defined and leads to people thinking that the smallest possible thing is one responsibility. But when people think like that they create thousands of one-to-three-line functions, which just ends up losing sight of what the program is trying to do. Following logic through deeply nested function calls is IMO just as bad, if not worse, than having everything in a single function.

There is a nice middle ground where SRP makes sense, but like all patterns, no one ever talks about where that line is. Overuse of any pattern, methodology or principle is a bad thing, and it is very easy to do if you don't think about what it is trying to achieve and whether applying it still fits that goal.

Basically, everything in moderation and never lean on a single thing.

nous ,

Yup, and that is because people only ever learn DRY coding by its name. They never really learn what it originally meant, when to use it and, more importantly, when not to use it. So loads of people apply it religiously and overuse it. This is true of all the popular catchy-named methodologies/principles etc.

nous ,

So, basically every system then?

nous ,

A system is bloated when I feel it is bloated. It is highly subjective and there is no real line to cross. It is more of a sliding scale: at one end there is no code on your system that you never use, and at the other there is nothing on it that you ever want to use.

The former can likely only be reached on small microcontrollers where you have written everything exactly how you want it, and you would never even consider using the latter.

Realistically every system has things you never use; even the kernel has modules you will never load. And every non-tiny program has features you never use. All of that is technically bloat, but I don't think each instance makes your system or even an application feel bloated.

So really the question is when the bloat bothers you or gets in your way. If you are trying to install an OS on a tiny embedded device where space is at a premium, then you are going to draw that line at a different point than on the latest desktop with multiple terabytes of storage and oodles of RAM.

Anyone that claims their system has no bloat is technically lying to themselves. But if it makes them happy, who cares? If your system has every package installed and it does not bother you at all, then it does not matter at all.

nous ,

How do we know these are the AI chatbot's instructions and not just instructions it made up? They make things up all the time; why do we trust it in this instance?

nous ,

The core is immutable, but it comes with flatpak, which writes to a writable location, so you can install and update applications independently of OS updates without having them wiped after an upgrade. You can also install and use tools like distrobox to give you container environments that you can install to and change as much as you like as well.

Tried Arch for the first time | My experience and impressions ( lemmy.ml )

I used linux intermittently in the last 15 or so years, migrating from early Ubuntu versions, to Manjaro, Pop!_OS, Debian, etc. And decided to give Arch a try just recently; with all the memes around its high entry point, I was really expecting to struggle for a long time to set it up just as I want....

nous ,

Unfortunately, I’ve never been able to really daily-drive Linux (and this Arch experiment is no exception). Don’t get me wrong: I love linux and the idea of having independent open-source and infinitely customizable OS. But unfortunately I professionally rely on some of the apps, that have no viable alternatives for Linux (PowerPoint, Photoshop, Illustrator, Proton Drive).

There are viable alternatives on Linux, as you mentioned. But none are going to be drop-in replacements for those tools. There are a lot of graphic design tools out there now that are just as powerful as Photoshop for what most people need. But the big issue is they are different in just enough ways that it can be a challenge to switch to them once you are used to the way Photoshop and the other Windows-only tools work. This is just something you are going to have to get over if you want to try Linux longer term.

But it can be far too much to switch everything at once, and to a completely new OS as well. So don't. Instead, start using these alternative tools on your Windows install now. Start trying out different ones (there are a lot, both open and closed source), and give each a decent attempt. Start out with smaller side projects so you don't interrupt your main workflows, and slowly over time start learning and getting used to the different ways these other tools work. If you make some effort to do that while on Windows, then the next time you try out Linux they won't seem as bad. But if you keep sticking with Windows-only software on Windows, you are going to hit the same issue every time you try to switch.

nous ,

You don't even need a separate partition, just delete the non-home directories and reinstall. pacstrap might even do that for you 🤔 it has been a while since I last needed to reinstall. And most of the time you don't even need a full reinstall; Arch is trivial to fix from a live CD by partially following the install process - most often getting a chroot and reinstalling select packages/configs in the worst-case scenarios.

nous ,

It seems to be geared toward people who want to constantly maintain there system

That is where your assumptions are wrong. It is for people that know how, and want control over, their setup. But after the initial setup, maintenance is no worse than any other distro - simpler, even, in the longer term. Just update your packages and very occasionally manually update a config somewhere or run an extra command beforehand (I honestly cannot remember the last time I even needed to do that much...). Far easier than needing to reinstall or fix a whole bunch of broken things after a major system upgrade, which happens every few years on other distros.

People that like to tinker and break their system can do that on any distro. That does not mean it is high maintenance; quite the opposite, in fact, as Arch is generally easier to fix when you do break something (so it does attract people that like to tinker). But leave it alone and it won't just randomly break every week like so many people seem to think it does.

nous ,

The license is MIT, so should be fine for most things

Windows 11 vs Ubuntu vs Fedora 39 vs Arch Linux - Speed Test! ( youtu.be )

Even though different Linux distros are often fairly close in terms of real-life performance and all of them have a clear advantage over Windows in many use cases, we can't reject the fact that Arch Linux has undoubtedly won the competition. And now I'm so glad to have another reason to proudly say "I use Arch btw"...

nous ,

How the hell is Arch so large? My laptop install is only 27GB, and that includes all user data and several years of crap being installed, as well as several docker images. A fresh install should rival that Fedora install.

This guy has a good take on linux companies, agree or disagree? ( www.youtube.com )

Kent right here talks about how Linux related companies need to focus on putting their resources towards collaborating and helping big companies port their software and THEN introduce open source software to new users instead of remaking desktop environments, pushing companies away, and overall doing the same thing over and...

nous ,

Oh, just invest in Adobe and get it developed for Linux - easy, why didn't anyone think of this before. And better yet, if they do invest they could make it a PopOS exclusive!!?!?!! \s

It won't work because Adobe does not care and there is not enough market share on Linux for them to bother with it. No amount of money that PopOS has will convince Adobe to develop it for Linux, and there is no way in hell Adobe will give them access to their source to develop it for Linux themselves. That whole argument is just a non-starter.

nous ,

Applications need some coordination between each other in order to act like you would expect - things like one window at a time having focus and thus getting all keyboard and mouse input. As well as things like positioning on the screen and which screen to render to, the clipboard, and various other things.

X is a server and set of protocols that applications can implement to allow all this behaviour. X11 is the 11th version of the server and protocols. X was first created in 1984, and X11 has been around since about 1987. Small changes have been made to X11 over the years, but the last was in 2012.

Which makes it a very old protocol - and one which is showing its age. Advances in hardware and the way we use devices since then have left a lot to be desired in the protocol, and while it has adapted a bit to keep up with modern tech, it has not done so in the best of ways. I also believe its codebase is quite complex and hard to work with, so changes are hard to make.

Thus it has quite a lot of limitations that modern systems are rubbing up against - for instance, it does not really support multiple cursors or input that is not a mouse and keyboard. So things like touch screens or pens/tablets tend to emulate a mouse and thus affect the only pointer X has. It is also not great with touchpads and things like touchpad gestures - while they do work, they are often clunky or not as flexible as some applications need.

It is also very insecure and has no real security measures in place - any GUI application has far more access to the system and input than it really requires. For instance, any application can grab the screen at any point in time - not something you really want when you have a banking web page open.

Wayland is basically a new set of protocols designed with modern hardware and security practices in mind. It does the same fundamental job as X11, but without the same limitations, and it fixes a lot of the security issues X has.

One big difference from X, though, is that Wayland is just a protocol, not a protocol and server like X. Instead it shifts the responsibilities of the X server into the window manager/compositor (which used to manage only window placement and window borders, as well as global effects such as animations or transparency). It also has better controls over things like screen grabs, so not every application can just take a screenshot at any time, or register global shortcut keys, or various things like that. Which for a while was a problem, as screen sharing applications and even screenshot tools did not work - but over time these capabilities have been added back in more secure ways than how X11 did them.

nous ,

Additionally, any application using a GUI toolkit (like Qt, GTK, etc) only needs to update to a version that has native Wayland support. Which means most applications already support it. At least if they don't use any X11 APIs directly (which is not that common).

nous ,

Have you tried updating and rebooting your system? I have had this happen a few times and almost always that is what fixes it for me (more so the rebooting but it is generally good to have your system up to date). Other times it is typically something missing on your host system (like properly installed drivers), though if the game was running before then this is less likely to be the issue and a reboot is typically enough - so start with that.

nous ,

From the article:

A quick Microsoft demo video shows the Copilot key in between the cluster of arrow keys and the right Alt button, a place where many keyboards usually put a menu button, a right Ctrl key, another Windows key, or something similar. The exact positioning, and the key being replaced, may vary depending on the size and layout of the keyboard.

nous ,

Sockets are just streams of bytes - no defined structure to them at all. DBus is about defining a common interface that everything can talk. That means when writing a program you don't need to learn how every program you want to talk to speaks over its own socket - you can just use a dbus library and query what is available on the system.

At least that is the idea - IMO its implementation leaves a lot to be desired, but a central event bus is IMO a good idea. It just needs to be easy to integrate with, which I think is what dbus fails at.

A great example is music player software. Rather than every music player creating its own socket, each with its own API, to do basically the same operations - anything that wants to just play/pause some music would then need to understand the differences between all the various music applications. Instead, with a central event bus, each music app integrates with the bus, and each application that wants to talk to a music app just needs to talk to the event bus, without needing to understand every single music app out there.
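
As a loose analogy in plain Rust (this is not dbus's actual API, just the shared-interface idea - which is roughly what the MPRIS spec gives you over dbus): callers code against one interface instead of one bespoke protocol per player.

```rust
// One shared interface instead of a bespoke socket protocol per player.
trait MediaPlayer {
    fn play_pause(&self);
}

struct Spotify;
struct Vlc;

impl MediaPlayer for Spotify {
    fn play_pause(&self) { println!("spotify: toggled"); }
}
impl MediaPlayer for Vlc {
    fn play_pause(&self) { println!("vlc: toggled"); }
}

// A media-key daemon only needs to know the shared interface,
// not the internals of every player.
fn toggle_all(players: &[Box<dyn MediaPlayer>]) {
    for p in players {
        p.play_pause();
    }
}

fn main() {
    let players: Vec<Box<dyn MediaPlayer>> = vec![Box::new(Spotify), Box::new(Vlc)];
    toggle_all(&players);
}
```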

nous ,

Anything is possible with sockets... and that is a meaningless statement. It is like saying you can build anything with bricks. And in an alternative universe we could have done many things differently to solve the same problems. But we don't live there, and in our universe dbus was the attempt to solve that problem, among others. And yes, you can create a standard for music players easily enough - but what about notifications, and everything else? DBus tries to be a generic interface anything can talk over at a logical level - rather than just being the basic way two processes can physically send bytes to each other.

nous ,

I have used vim/neovim for years and cannot go back to a non-modal editor. But TBH I got sick of its configuration. You need far too many plugins and too much config to get things into a sane working order to be usable on a day-to-day basis for any type of development. It takes ages to learn and become as productive as you were before, and a lifetime to refine.

For the past year or so I have switched to helix and don't plan on going back to vim/neovim as my main editor ever again. It is a modal editor that is a mix between the Neovim and Kakoune editors. It comes with batteries included and supports an IDE-like experience out of the box, with treesitter syntax highlighting and LSP language integrations built in. My whole config is like 6 lines long, yet it works far better than my old neovim setup with a multitude of plugins and hundreds of lines of config. It is like what AstroNvim, LazyVim, LunarVim, NvChad etc are trying to do for vim/neovim, but with built-in support for all the things they rely on plugins for. Which means there is no need to constantly keep them up to date, nor weird edge cases where one plugin does not quite integrate with another smoothly. It is all built in, so things are designed to work well together.

But it currently lacks any plugin support. So if something you want is not built in, you have to make do without it (well, except language support; adding new LSPs is not too hard). And plugin support is being worked on, so hopefully even this will be a non-issue in a year or two.

nous ,

Interesting. Though I can definitely see where you’re coming from. Uhmm…, have you used any of the Neovim distributions to make maintenance easier?

I have, but don't like them. They all have weird install processes and need to manage their own set of configs on top of vim in your home dir. This makes them very hard to properly package or integrate with config management tools, and they require a different flow to keep them up to date from the rest of your system. They combine sometimes hundreds of plugins, of which only a few are designed to work together, and while a lot try not to step on each other's toes, with that many I often find issues in niche use cases. And when you do find an issue, or something you want to tweak, you have hundreds of plugin configurations that you need to learn about to figure out just what is doing what and which options you need to tweak.

It is all just far more hassle than I want out of my editor these days. Helix just works out of the box and has basically everything I want from an editor nicely integrated into it.

As you’ve touched upon it; Helix’ keybindings and ‘sentence-structures’ are different to those found on Vi(m).

They are a little different and take a bit of getting used to. But IMO they are a far nicer way to work. It is very nice being able to see what your action is going to affect before you do it - unlike in vim, where you just hope you have hit the right movement keys. It also pops up a small window for leader keys (like space) which shows you what you can do with them, making it far more discoverable than vim/neovim, without needing to pore through hundreds of pages of manuals to even get a glimpse of what it can do, or needing to go back to them to remember something you don't use very often. It is not trying to be a 100% vim-compatible layer; it is trying to give you the best experience it can out of the box. And I think it does that quite well (at least once you get used to the new way of working - which does not take that long).

Furthermore, neither of the two have existed long enough to be able to profess any statement regarding their longevity. Like, there’s no guarantee that I can keep using either of the two 20 years into the future.

20 years is a long time. I can see it existing for the next 5 years at least, and it looks to be on the trajectory to be a long-lasting product. Though no one can say for sure. But the more people using it, the more likely it is to stick around for the long term. Just about everyone I have seen use it over vim has highly praised it, and it has quite a few contributors already (700+ on GitHub), which is very impressive compared to vim (about 300) and neovim (more than 1100).

And keep in mind that vim has been around so long thanks to a single maintainer, Bram Moolenaar, who passed away this year. Which is not a great sign for vim's future over the next 20 years.

I appreciate the input, but I simply don’t want to invest in a program whose future is very unclear to me at this point in time.

The investment in helix is far less than what you need to put into vim/neovim, due to all the configuration those need. Well worth it for how active it currently is and how many people are putting effort into it.

nous ,

I shouldn’t expect remote accessing some random server will allow me to use Helix, right? Is there any other way to make this work? Or…, should I just learn both Vim and Helix’ Vim + Kakoune amalgamation?

That all depends on the server in question and if you can install things onto it or not. Some points to consider though:

  • If the server is restrictive about what you can install then you are likely stuck with basic vim or, worse, only vi - and without all your configs it is a very different beast of an editor anyway, and something you will need to get used to every time you jump on the server.
  • If you can install stuff to your home dir then it is quite easy to get helix running - it is a single binary with some language assets (it requires one env var to point to them). So it is trivial to get working from your home dir without a package manager.
  • IMO you should not be editing things on a server often enough to worry too much about what editors it has on it. Ideally with things like ansible you should not need an editor on it at all.

Vim is literally ubiquitous and plugins that enable its features can be found on almost any ‘platform’. It’s unrealistic to expect Helix’ adoption to be at that rate (yet). However, would you happen to know if at least the likes of VS Code and/or Jetbrains’ IDEs support it? And if so, how good their support/implementation is?

Do you mean vi input mode in other editors? That is one downside - you won't find it anywhere yet. Though since learning it I have not needed to go back to other IDEs like VS Code or JetBrains. With built-in LSP support its language integration is just as good as VS Code's, as it is working off essentially the same language servers. Though it does seem that at least VS Code has a plugin for Kakoune keybindings, which are more similar to helix's.

Though what you will find is that a lot of the keys are very similar between vi and helix, so apart from the big action > movement vs selection > action difference and a few other things, they don't feel too dissimilar from each other (things like basic movement, ie w for word, e for end of word, or text objects, are essentially the same).

nous ,

But it happens all the time with other languages. Especially when the language is newer or in the headlines. NodeJS/Electron was a big one a few years ago, Ruby/Rails a while before that; I have seen it for Python programs, and way back in the day when Java was all the rage.

Personally I think it does matter, and as an end user I do care to some degree. It tells you some things about the program, like how it can be installed/run, what deps you might need, and whether it is going to be a memory hog or possibly full of vulnerabilities. The language affects all of these things, more so when the project is new or niche and has not been hardened over time or been properly packaged yet.

Personally I love it when a program is written in languages like Rust or Go, as it means I know it is going to be easy to build/install and distribute, given they build into single binaries and are very easy to make static. But if I see one written in nodejs with Electron I am disappointed, as I know it is going to be a huge package that consumes large amounts of memory. Or if there is some Python package that is not already packaged by my distro, I would avoid it, as I hate dealing with Python dependencies and its virtualenvs.

And in this case, with Redox: well, Redox is not an application to be used by most people. It is a showcase of what can be done in the language. It is not intended that most people who hear of it will ever run it or even want to run it. Yet it is very impressive what they have managed to do with it - including having parts written for it work standalone on Linux and other OSes.

nous ,

I am not sure that is quite right. I don't think Rust supports simply enabling dynamic linking of its dependencies. It can talk to dynamically linked libraries - which is how FFI works. And you can compile Rust crates to be dynamically linked. But when you go down this route you are talking over the C ABI. This requires some effort from the code author to make their APIs exportable as C types, and means you lose all safety when talking over the C ABI.

I also don't think that Rust inlines across a crate boundary unless the function is marked as inline or LTO is enabled - inlining across crate boundaries is expensive and so only done when explicitly needed or asked for. It is more that you lose features like generics and traits and other things that are not supported over the C ABI.
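
A minimal sketch of what "talking over the C ABI" looks like from the Rust side (the function name is made up for illustration):

```rust
// Build this crate with crate-type = ["cdylib"] in Cargo.toml so the output
// is a shared library. Only C-compatible types cross this boundary: no
// generics, no traits, and none of Rust's usual safety guarantees for the
// caller, who has to declare the symbol in an `extern "C"` block and call
// it through `unsafe`.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```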

nous ,

6 of the top 10 are verified or playable, as are 43% of the top 1000 games. But verified and playable are only a subset of the games that work; quite a few unsupported games do as well. If you go by ProtonDB medals, then 7 of the top 10 are silver ranked or better (minor issues but generally playable), and 88% of the top 1000. So there are a lot of games that are playable but still listed as unsupported on the Deck.

You can see the numbers for various different things at www.protondb.com as well as different reports for all the games (including some tips on how to get things to work or work better).

nous ,

Yup, a big excuse I used to see a lot was

I would like to run Linux, but I want to game more so will stick to Windows

And this has changed a lot with what Valve has done, which opens Linux up to a much larger group of people that can now use it for their use cases.

nous ,

You don't even need a separate partition, just don't format and don't delete the /home folder. You can even keep the /etc folder as well to keep system-wide settings.

nous ,

I have done something similar following this post - loads of others have created similar scripted installers for Arch for their specific use cases, and this guide takes it one step further with custom Arch meta packages that hold deps and system-wide config.

You can also do similar things with tools like ansible or saltstack. Though these all take the approach of defining your configs and system to automate setting it up, rather than backing up or cloning an existing system. So they are more effort initially, but they can keep multiple systems in sync with far less effort than trying to create a backup/restore system for organically created configs.

that wouldn’t work (I think) because my laptop has vastly different hardware

It should not matter; you can install all the packages all your systems need - such as the nvidia, amd and intel graphics drivers - and the kernel will only load the ones for the hardware you have booted with. Or if you really need different configs or packages for different systems, the various approaches have ways to do that.

nous ,

Mostly done with the latter. But the nice thing is once you have done it once, it is much easier to keep things up to date and in sync from then onwards. You can also piecemeal it - set up one application at a time and migrate things over to it one by one.

painstakingly manually code every unique facet

That makes it sound a lot worse than it actually is. It is only a bit more effort than setting something up for the first time manually. And it pays itself back many times over when you next need to reinstall or install a new system. Assuming you keep up with making changes to the code and not directly to your system each time.

As a beginner, how should I go about learning difficult concepts?

I’m trying to learn programming and something I struggle with the most is trying to separate code mentally into chunks where I can think through the problem. I’m not really sure how to describe it other than when I read a function to determine what it does then go to the next part of the code I’ve already forgotten how the...

nous ,

I’m still forgetting things I learned 3 or even 4 times like how to do a for each loop.

I have been programming for decades now and still have to look up how an if statement works in bash - or other similar things, especially when switching between languages. It takes 5 seconds to look up and remember, so I would not worry about it. It is far better to know when you need an if or a for loop and quickly look up the syntax than to know the syntax but not when to use it.

I see tutorials say to make a tic tac toe game or a calculator or to contribute to open source code. Which is good I suppose but all of it feels too advanced and I get lost on how to begin.

Break problems down into simpler problems, then break those down into simpler problems until you have a trivial problem you can solve. Then build up from there. Take a tic tac toe game - lots of things to consider that can all be dealt with in isolation. Like rendering the game to the screen: that is one problem you can start with, and it can be broken down even further, to how to draw a grid to the screen, which again can be broken down to how to draw a box or line.
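
For example, the very first sub-problem might be nothing more than this (a sketch in Rust, but any language works the same way):

```rust
// Smallest piece first: render a 3x3 board as text. '.' is an empty cell.
// Everything else (input, turns, win checking) can be built on top of this later.
fn render(board: &[[char; 3]; 3]) -> String {
    board
        .iter()
        .map(|row| row.iter().collect::<String>())
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let mut board = [['.'; 3]; 3];
    board[1][1] = 'X';
    board[0][2] = 'O';
    println!("{}", render(&board));
}
```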

You might even want to look at the book "Think Like a Programmer: An Introduction to Creative Problem Solving" by V. Anton Spraul, which goes into this way of thinking in more detail.

nous ,

Rust. It is a pleasure to work with and far more flexible in where and what it can run on than a lot of languages. It is good at everything from embedded systems to running on the web. Only C and C++ can really beat it on that, and those are far less pleasant to work with. Even if it is not as mature in some areas quite yet, it just gets more support for things as time goes on.

nous ,

I would start by learning Rust at a user level via the Rust book, to get familiar with the language without the extra layers that embedded systems tend to add on top. Keep in mind that the embedded space in Rust is still maturing, though it is maturing quite quickly. However, one of the biggest limitations ATM is the number of architectures available - if you need to target one that is not supported then you cannot use Rust ATM (though there are quite a few different projects bringing in support for more architectures).

If you only need to use architectures that Rust supports, then once you have the basics of Rust down, take a look at the Embedded Book, the Discovery book and the Embedonomicon. Then there are various crates for different boards, such as ruduino for the Arduino Uno or rp-pico for the Raspberry Pi Pico, as well as crates for various other specific boards. There are also higher- and lower-level crates for things - like ones specific to a dev board and ones specific to a chipset.

Lastly, there are embedded frameworks like Embassy that are helpful for more complex applications.

nous ,

I set it up this way so that if I need to reinstall Linux, I can just overwrite / while preserving /home and just keep working after a new install with very few hiccups.

Even with a single partition for / and /home, you can keep the contents of /home during a reinstall by simply not formatting the partition again. I know when I tried years ago with Ubuntu the installer asked if I wanted it to remove the system folders for me. But even if the installer does not, you can delete them manually beforehand. Installers won't touch the contents of /home if you don't format the drive (or any files outside the system folders they care about).

Though I would still back up everything inside /home before any attempt at a reinstall, as mistakes do happen no matter what process you decide to go with.

nous ,

There was no option per se, at least on the Ubuntu installer I tried many years ago. Just a popup that appeared sometime before the install but after the manual partitioning, if the root partition had folders like /etc, /usr, /var etc that were needed by the installer. Not sure if all installers do this - but I would suspect that if they didn't, you could just delete the folders manually before you enter the installer, pick the manual partitioning option, and opt not to format any partitions.
