stevecrox

@stevecrox@kbin.run


stevecrox ,

Qt is a cross-platform UI development framework whose goal is to look native to the platform it runs on. This video by a Linux maintainer from 2014 explains its benefits over GTK; it's a fun video and I don't think the issues have really changed.

Most GTK advocates will argue Qt is developed by Trolltech and isn't GPL licensed, so it could go closed source! This argument ignores that open source projects use the open source releases of Qt, and that if Trolltech did go closed source, the last open source release would be maintained as a fork (much like GTK).

Personally I would avoid Flutter on the grounds it's a Google-owned library and Google have the attention span of a toddler.

Not helping that assessment is that Google let go of the Fuchsia team (which Flutter was being developed for) and seems to have let go of a lot of Flutter developers.

Personally I hate web frontends as local applications. They integrate poorly on the desktop, and often the JS engine has weird memory leaks.

stevecrox ,

You do, but considering the scale at which they process data, I suspect Google would be better off building Go tooling (or whatever the dominant internal language is).

A few years back I was trying to teach some graduates the importance of looking at a programming language's ecosystem and selecting the language based on that.

One of my comparison projects was Apache OpenNLP/Camel vs Flask/spaCy.

spaCy is the go-to for NLP, so I expected it to be quicker to develop with, easier to use, give better results, or just use fewer resources.

I assigned the grads with Java experience spaCy and the ones with Python experience OpenNLP.

The OpenNLP guys were done first; they raved about being able to stream data into the model and how much simpler it made life.
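
For reference, a minimal sketch of that streaming pattern, assuming one of the standard pre-trained OpenNLP NER models; the file names and model choice are purely illustrative:

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.tokenize.SimpleTokenizer;
import opennlp.tools.util.Span;

public class StreamingNer {
    public static void main(String[] args) throws Exception {
        // The model loads from any InputStream, not just a file path, which is
        // what makes wiring it to a network or S3 stream straightforward.
        try (InputStream modelIn = new FileInputStream("en-ner-person.bin")) {
            NameFinderME finder = new NameFinderME(new TokenNameFinderModel(modelIn));

            // Documents are fed line by line; the whole corpus never sits in memory.
            try (BufferedReader reader = Files.newBufferedReader(Paths.get("corpus.txt"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] tokens = SimpleTokenizer.INSTANCE.tokenize(line);
                    for (Span span : finder.find(tokens)) {
                        System.out.println(span.getType() + ": " + String.join(" ",
                            java.util.Arrays.copyOfRange(tokens, span.getStart(), span.getEnd())));
                    }
                }
                finder.clearAdaptiveData(); // reset per-document adaptive features
            }
        }
    }
}
```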

When compared on the same corpus (books, team emails, the corporate SharePoint, dev docs, etc.) OpenNLP would complete in less than a second with 4 GiB of RAM and 0.5 vCPU. spaCy needed 12 GiB and was taking ~2 seconds with 2 vCPU. They identified the same results...

A few others and I ended up spending a day reading the Python and trying to optimise it; clearly the juniors had done something daft. They had not.

It rather undermined my point.

stevecrox ,

If you have the freedom try Typescript.

The tsx files are almost identical to jsx, except for the need to define the types of the fields you're ingesting.
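
For example, here's a hypothetical component (the names are mine, purely illustrative) where the props interface is the only TSX-specific part:

```tsx
// The interface is the "extra work": it declares the shape of the props.
interface UserCardProps {
  name: string;
  email: string;
  isAdmin?: boolean;
}

export function UserCard({ name, email, isAdmin = false }: UserCardProps) {
  // VS Code now knows the exact shape of the props, so typos and
  // missing fields are flagged as you type.
  return (
    <div className="user-card">
      <h2>{name}{isAdmin && " (admin)"}</h2>
      <a href={`mailto:${email}`}>{email}</a>
    </div>
  );
}
```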

While that's a little extra work, it allows Visual Studio Code to perform deeper analysis and provide much more helpful contextual hints.

I grew to love JSX, tried TSX out of interest, and now you couldn't convince me to go back to pure JS.

stevecrox ,

It was a mixture of factors.

Data was to be dumped into an S3 bucket (MinIO); this created an event, and another team had built an orchestrator which would do a couple of things but eventually supply an endpoint with a reference to a text/plain file for analysis.

The Java devs had to [modify the example from the Camel REST DSL docs](https://camel.apache.org/manual/rest-dsl.html) and use the built-in Jackson library to convert the incoming object to a class. This used the default AWS S3 API to create a stream handle, which fed straight into the OpenNLP example code.
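
A rough sketch of the kind of route that produces, adapted from the docs linked above; `AnalysisRequest` and the endpoint names are my own placeholders, not the actual project code:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

// Hypothetical payload shape for the orchestrator's callback.
class AnalysisRequest {
    public String bucket;
    public String key;
}

public class AnalysisRoute extends RouteBuilder {
    @Override
    public void configure() {
        restConfiguration()
            .component("netty-http")
            .bindingMode(RestBindingMode.json); // Jackson maps JSON <-> POJO automatically

        rest("/analyse")
            .post()
            .type(AnalysisRequest.class)        // incoming body arrives already converted to a class
            .to("direct:analyse");

        from("direct:analyse")
            .process(exchange -> {
                AnalysisRequest req = exchange.getMessage().getBody(AnalysisRequest.class);
                // open an S3 stream for the referenced file and feed it into OpenNLP here
            });
    }
}
```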

The Python project first hit a wall setting up Flask. They followed the instructions and it didn't work from setuptools. The Java team had just created a new Maven project from IntelliJ, but the same approach didn't work for the Python team using PyCharm. It lost them a couple of days; I helped them overcome it.

Then they hit a wall with Boto3: the team expected to stream data, but Boto3 only supports downloading. There was also a complexity issue; the AWS SDK in Java was about 20 lines to set up and a single line to call, while it was about 50 lines in Python. On the positive side, I got to explain what all the config in S3 meant.
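
For comparison, roughly what the Java side looks like with the AWS SDK v2 pointed at MinIO; the endpoint, credentials and bucket/key here are illustrative placeholders, not the project's values:

```java
import java.io.InputStream;
import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

public class S3Stream {
    public static void main(String[] args) throws Exception {
        S3Client s3 = S3Client.builder()
            .endpointOverride(URI.create("http://minio:9000"))  // MinIO, not real AWS
            .region(Region.US_EAST_1)                           // required but arbitrary here
            .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create("minioadmin", "minioadmin")))
            .forcePathStyle(true)                               // MinIO uses path-style URLs
            .build();

        // getObject returns a streaming InputStream, so the document can be fed
        // straight into OpenNLP without being held in memory.
        try (InputStream in = s3.getObject(GetObjectRequest.builder()
                .bucket("documents").key("input.txt").build())) {
            in.transferTo(System.out);
        }
    }
}
```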

This caused the team another few days of delay: because they knew I used a 350 MiB Samsung TV guide to test robustness, they had to go and learn about Docker volume mounting, and they thought they needed a stateful Kubernetes service, so I had to explain why that was wrong.

Basically Python threw up a lot of additional complexity and the docs weren't as helpful as they could have been.

stevecrox , (edited )

My expectation is that, whatever the solution, it needs to dockerise and be really easy to deploy via Docker Compose or Kubernetes, so people can quickly and easily set up their own.

The front end is effectively static files, so I would probably choose Apache or Express (whichever gives me a smaller Docker image).

For the backend I would choose Java with Spring Boot. An Alpine image with OpenJDK and the app is tiny. Spring has a library for every kind of interface, making them trivial to implement, but the main reason is Hibernate.

Hibernate (now Spring Data) was the first library for being able to switch out databases without having to change code (it's all config). A lot of Mastodon instances struggle with the resource requirements of Elasticsearch, so letting small instances use something like Postgres would seem ideal.
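
A minimal sketch of that pattern; the `Post` entity and query method are illustrative, not from any real fediverse server:

```java
import java.time.Instant;
import java.util.List;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

// A hypothetical entity; Hibernate maps it to whichever database is configured.
@Entity
class Post {
    @Id @GeneratedValue Long id;
    String author;
    Instant createdAt;
}

// The repository is pure interface: the query is derived from the method name,
// so there is no SQL and no vendor-specific code anywhere.
interface PostRepository extends JpaRepository<Post, Long> {
    List<Post> findByAuthorOrderByCreatedAtDesc(String author);
}
```

Pointing the same repository at a different database is then just configuration, e.g. swapping `spring.datasource.url` and the driver in `application.properties`.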

I have noticed Go/Rust still expect you to write or manage a lot of stuff Spring gives away for free. Python is OK if your backend is really tiny, but there is a lot of boilerplate in how Python libraries work, so complex projects get hard to manage, and I assume interacting with the Fediverse will add complexity.

I dislike wayland

Quite the unpopular opinion, but I just wanted to post this to show the silent majority that we still exist. We have reached a point where voicing criticism against wayland is treated like the worst thing ever and leads you to being censored and what not. The red hat funded multi year long shill campaign has proven to be quite...

stevecrox ,

Immutable distributions won't solve the problem.

You have 3 types of testing: unit (a discrete part of the code), integration (how a piece of software works with others) and system (the software running in its environment). Modern software development has build chains to simplify testing at all 3 levels.

Debian's change freeze effectively puts a known state of software through system testing. The downside is it's effectively 'free play' testing of the software, so it requires a big pool of users and a lot of time to be effective. This means software in Debian can use releases up to 3 years old.

Something like Fedora relies on the test packs built into the open source software; the issue here is that testing in the open source world is really variable in quality. So something like Fedora can pull down broken code that compiles and passes its tests.

The immutable concept is about testing a core set of utilities so you can run containers of software on top. You haven't stopped the code in the containers being released with bugs or breaking changes; you've just given yourself a means to back out of it. It's a band-aid over the actual problem.

The solution is to look at core parts of the software stack and improve their test infrastructure. Phoronix manages to run the latest kernels on various types of hardware for benchmarking, so why hasn't the Linux Foundation set up a computing hall to compile and run system-level testing for staged changes?

Similarly, websites are largely developed with all 3 levels of testing, using things like Jest/Mocha/etc. for unit/integration testing and Robot Framework/Cypress/Selenium/Storybook/etc. for system testing. While GTK and KDE apps all have unit/integration tests, where are the system-level test frameworks?

All this is kinda boring while 'containers!' is exciting new technology.

stevecrox , (edited )

The developer behind KBin seems to have issues delegating and accepting contributions.

If you look at the pull requests, most have sat unreviewed for months, while he regularly pushes his own branches once complete and just merges them in.

That behaviour drove the MBin fork, where 4-5 people were really keen to contribute but were frustrated.

To some extent that would be OK; it's his project, and if he doesn't want to encourage contributions that is his decision, but...

KBin.social has gotten to the size where it really should have multiple admins (or a paid full-time person), which it doesn't have.

The developer has also told us he has gone through a divorce, moved into his own place, gotten a full-time job and now had surgery.

That's a lot for any normal person, and he is going through it while trying to wear 2 hats (dev & ops), each of which would consume most of your free time.

Personally I moved to kbin.run, which is run by one of the MBin devs.

stevecrox ,

MBin is a fork by a group who tried to push changes into KBin but couldn't. There seem to be at least 4 active committers, and stuff gets merged.

You will see a number of the KBin instances have moved over: https://fedidb.org/software/mbin

stevecrox ,

Technical leads are not rational beings, and lots of software is developed from an emotional standpoint.

Engineering is trade-offs; every technical decision you make has pros and cons.

What you should do is write out the core requirements/constraints, then weigh the choices to select the option that best meets them.

What actually happens is someone really likes X framework, Y programming language or Z methodology and so decides the solution and then looks for reasons to justify it.

Currently the obvious tell is if they pitch Rust. I am not saying Rust is bad, but you'll notice they extol the memory safety or performance and forget about the actual requirements of the project.

stevecrox ,

The team's/organisation's knowledge is a huge factor, but it's easy to fall into a trap where, no matter what the problem is, the solution is X language.

If I have an organisation that knows C# and we need to build a web application, I would suggest we learn Node.js and TypeScript rather than invest in a solution that turns C# into web pages.

Has anyone here built a Beowulf Cluster? ( spinoff.nasa.gov )

A university near me must be going through a hardware refresh, because they've recently been auctioning off a bunch of ~5 year old desktops at extremely low prices. The only problem is that you can't buy just one or two. All the auction lots are batches of 10-30 units....

stevecrox ,

Docker Swarm was an idea worse than Kubernetes, that came out after Kubernetes, and that isn't really supported by anyone.

Kubernetes has the concept of a storage layer: you create a volume and can then mount it into the Docker image. The volume is then accessible to the image regardless of where it is running.

There is also a difference between a volume for a Deployment and a StatefulSet, since one is supposed to hold the application state and the other is supposed to be transient.
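
As a sketch (hypothetical names throughout): a PersistentVolumeClaim mounted into a Deployment's pod. A StatefulSet would instead declare volumeClaimTemplates, so each replica gets its own claim.

```yaml
# A claim for storage; the cluster's storage layer decides where it actually lives.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels: { app: app }
  template:
    metadata:
      labels: { app: app }
    spec:
      containers:
        - name: app
          image: example/app:latest
          volumeMounts:
            - name: data
              mountPath: /var/lib/app   # the volume follows the pod wherever it runs
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data
```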

stevecrox ,

So I know that's a joke, but...

With Java's inclusion of 'var' (since Java 10) I have successfully copied JavaScript code into Java without needing to change anything.
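
A contrived illustration: the block below is valid JavaScript and, pasted into a Java method body, also compiles unchanged.

```java
// Valid in both languages: 'var', the loop, and string concatenation line up.
var total = 0;
for (var i = 1; i <= 10; i++) {
    total += i;
}
var message = "sum of 1..10 is " + total;
```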

I judge the direction Java is going in.

stevecrox , (edited )

It isn't a good move.

A domain name can cost as little as £10, and most email services cost ~£5-£15 per person per month. It's normally pretty easy to link a domain to an email provider, and it doesn't cost anything other than time.

If a company can't be bothered to implement the most basic online branding, people will make their assumptions and some will filter your company out because of it. With the cost to implement so low (e.g. £10 for the domain plus £12.50 per month for one mailbox is about £160 per year), even the loss or gain of a single customer would justify it.

stevecrox ,

The splash screen (a boot screen instead of text) used to get me. It's provided by an application called 'Plymouth'.

You used to need to install it and configure GRUB; however, I think if you go into 'System Settings' and type 'Splash', KDE has an option to install it and choose the screen.
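
From memory, the manual route on Debian-family systems was roughly this; treat it as a sketch, package and theme names may differ on your distro:

```sh
sudo apt install plymouth plymouth-themes
# Enable the splash by adding kernel parameters in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
sudo update-grub
plymouth-set-default-theme --list              # see which themes are installed
sudo plymouth-set-default-theme -R bgrt        # pick one; -R rebuilds the initrd
```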

stevecrox ,

Wine attempts to translate Windows calls into Linux ones; it's developed by CodeWeavers, whose focus is/was application compatibility.

Valve took Wine and modified it to best support games; the result is called Proton. For example:

Someone built a library to take DirectX 9-11 calls and turn them into Vulkan ones; it was written in C++ and is called DXVK.

Wine has strict rules on only using C code, and its DirectX library handles odd behaviour from old CAD applications.

Valve doesn't care about that; they care that the Wine DirectX library is slow and buggy and DXVK isn't. So they pull out Wine's and use DXVK.

There are lots of smaller changes like these, called 'Proton fixes'. Sometimes Proton fixes are passed on to Wine; sometimes they can't be, but discussion happens and a Wine fix is developed.

stevecrox ,

Pirate Trainer & Uru: Ages Beyond Myst

I remember trying Pirate Trainer in an Nvidia game booth when VR was new. It was incredible; years later I got a VR headset and it was the free game. I don't understand how no one has improved upon it.

Uru was the first puzzle game I thought struck a good balance between physical and mental puzzles. They were set at a level that felt challenging but not impossible, and laid out so you alternated really nicely. Myst Online actually went backwards in this.

stevecrox ,

Debian isn't old == stable; it's tested == stable.

Debian has an effective rolling distribution through testing that can get ahead of Arch.

At some point they freeze the software versions in testing and look for release-critical and major bugs. Once they have shaken everything out and submitted fixes where possible, it becomes stable.

The idea is people have tested a set baseline of software and there are no known major bugs.

For the last 4-5 releases Debian has released every 2 years (similar to Ubuntu LTS). Debian tends to align its release with LTS kernel and Mesa releases, so there have been times the latest stable was running newer versions than Ubuntu, and the newest-software crown switches between Ubuntu LTS and Debian each year.

For some, the priority is to run software that won't have major bugs; that is what Debian, Ubuntu LTS and RHEL offer.

stevecrox ,

I suspect they mean around packaging.

I honestly believe Red Hat has a policy that everything should pull in GNOME. I have had headless RHEL installs where half the CLI tools require GNOME Keyring (even if they don't deal with secrets or store any). Back in RHEL 7, Kate, the KDE-based text editor, somehow pulled in a bunch of GTK dependencies.

Certification is really someone paying to go through a process, and so it's designed so they pass.

Think about the people you know who are Agile/Cloud/whatever certified and how all it means is they have learnt the basic examples.

It's no different when a business gets certified.

The only reason people care is because they can point to the cert if it all goes wrong.

stevecrox ,

I wouldn't use "certified" in this context.

Limiting support of software to specific software configurations makes sense.

It's stuff like: Debian might be using Python 3.8; Ubuntu, Python 3.9; openSUSE, Python 3.9; etc. Your application might use a library that requires Python 3.9 and act odd on 3.8 but fine on 3.9, so only supporting X distributions lets you keep the test/QA process sane.

This is also why Docker/Flatpak exist, since you can define all of this.
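
For instance, a minimal hypothetical image that pins the interpreter, so the app sees Python 3.9 regardless of what the host distribution ships:

```dockerfile
# The base image fixes the interpreter version; the host distro is irrelevant.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```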

However, the normal mix is RHEL/SUSE/Ubuntu, because those target businesses and your target market will most likely be running one.

stevecrox ,

Most of the updates are about long-term support; the performance gains are a side product.

This driver was one of the earliest open source drivers developed by AMD. The point of the driver is to convert OpenGL (the instructions games give to draw 3D shapes) into the low-level commands a graphics card uses.

A library (TGSI, I think) was written to do this; however, they found OpenGL commands often relied on the results of others, and converting back to OpenGL was really CPU-expensive.

So someone invented NIR, an intermediate layer. You convert all OpenGL commands to NIR, and it uses way less CPU to convert from NIR to GPU commands and back.

People have been updating the old AMD drivers in their spare time so they use the same libraries, interfaces, etc. as the modern AMD drivers.

This update removes the last of the TGSI usage, so now the driver uses only NIR.

From a dev perspective everything now works the same way (less effort); from a user perspective, those old cards get the performance bump NIR brought.

stevecrox ,

No, don't use Sid. No one should run it on a system they expect to work.

Debian has 3 phases: stable, testing & unstable.

Debian Unstable is the initial gate for pulling in new code; applications need to not break everything in that environment before they can be moved to testing. A freeze is periodically applied to testing, RC/major bugs are identified and fixed, and Stable is released.

Sid is the naughty child in Toy Story who destroys things. Debian uses Toy Story characters to name things and so Unstable got the nickname Sid.

If you have newly released hardware you might need an updated kernel. This can be found via backports.

Similarly, Mesa covers the graphics drivers; you can pull the latest from backports, but again you only need to do this if your graphics card is too new.
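
The backports route looks roughly like this, assuming the current stable is bookworm (adjust the codename, and treat the package list as a sketch):

```sh
# Add the backports repository for the current stable release.
echo "deb http://deb.debian.org/debian bookworm-backports main" | \
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update

# Pull the newer kernel, and Mesa if your GPU needs it, from backports.
sudo apt install -t bookworm-backports linux-image-amd64
sudo apt install -t bookworm-backports mesa-vulkan-drivers libgl1-mesa-dri
```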

As someone who runs Debian Stable with KDE, it works great for gaming.
