
theluddite

@theluddite@lemmy.ml

I write about technology at theluddite.org


theluddite (edited)

All these always do the same thing.

Researchers reduced [the task] to producing a plausible corpus of text, and then published the not-so-shocking results that the thing that is good at generating plausible text did a good job generating plausible text.

From the OP, buried deep in the methodology:

Because GPT models cannot interpret images, questions including imaging analysis, such as those related to ultrasound, electrocardiography, x-ray, magnetic resonance, computed tomography, and positron emission tomography/computed tomography imaging, were excluded.

Yet here's their conclusion:

The advancement from GPT-3.5 to GPT-4 marks a critical milestone in which LLMs achieved physician-level performance. These findings underscore the potential maturity of LLM technology, urging the medical community to explore its widespread applications.

It's literally always the same. They reduce a task such that ChatGPT can do it, then report in the headline that it can do it, with the caveats buried way later in the text.

theluddite (edited)

I completely and totally agree with the article that the attention economy in its current manifestation is in crisis, but I'm much less sanguine about the outcomes. The problem with the theory presented here, to me, is that it's missing a theory of power. The attention economy isn't an accident, but the result of the inherently political nature of society. Humans, being social animals, gain power by convincing other people of things. From David Graeber (who I'm always quoting lol):

Politics, after all, is the art of persuasion; the political is that dimension of social life in which things really do become true if enough people believe them. The problem is that in order to play the game effectively, one can never acknowledge this: it may be true that, if I could convince everyone in the world that I was the King of France, I would in fact become the King of France; but it would never work if I were to admit that this was the only basis of my claim.

In other words, just because algorithmic social media becomes uninteresting doesn't mean the death of the attention economy as such, because the attention economy is something innate to humanity, in some form. Today it's algorithmic feeds, but 500 years ago it was royal ownership of printing presses.

I think we already see the beginnings of the next round. As an example, the YouTuber Veritasium has been doing educational videos about science for over a decade, and he's by and large good and reliable. Recently, he did a video about self-driving cars, sponsored by Waymo, which was full of (what I'll charitably call) problematic claims that were clearly written by Waymo, as fellow YouTuber Tom Nicholas pointed out. Veritasium is a human that makes good videos. People follow him directly, bypassing algorithmic shenanigans, but Waymo was able to leverage their resources to get into that trusted, no-algorithm space. We live in a society that commodifies everything, and as human-made content becomes rarer, more people like Veritasium will be presented with more and increasingly lucrative opportunities to sell bits and pieces of their authenticity for manufactured content (be it by AI or a marketing team), while new people who could be like Veritasium will be drowned out by the heaps of bullshit clogging up the web.

This has an analogy in our physical world. As more and more of our physical world looks the same, as a result of the homogenizing forces of capital (office parks, suburbia, generic blocky buildings, etc.), the few remaining parts that are special, like, say, Venice, become too valuable for their own survival. They become "touristy," which is itself a sort of ironically homogenized, commodified authenticity.

edit: oops I got Tom's name wrong lol fixed

theluddite

Haha, I was actually paraphrasing myself from last year, but I've seen that article because lots of readers sent it to me when it came out a few months later, for obvious reasons!

theluddite

This has been ramping up for years. The first time that I was asked to do "homework" for an interview was probably in 2014 or so. Since then, it's gone from "make a quick prototype" to assignments that clearly take several full work days. The last time I job hunted, I'd politely accept the assignment and ask them if $120/hr is an acceptable rate, and if so, I can send over the contract and we can get started ASAP! If not, I refer them to my thousands upon thousands of lines of open source code.

My experience with these interactions is not that they're looking for the most qualified applicants, but that they're filtering for compliant workers who will unquestioningly accept the conditions offered in exchange for the generally lucrative salaries. Those are the employees they need in order to keep their internal corporate identity as the good guys while tech goes from being universally beloved to generally reviled.

theluddite (edited)

I have worked at two different start ups where the boss explicitly didn't want to hire anyone with kids and had to be informed that there are laws about that, so yes, definitely anti-parent. One of them also kept saying that they only wanted employees like our autistic coworker when we asked him why he had spent weeks rejecting every interviewee that we had liked. Don't even get me started on people that the CEO wouldn't have a beer with, and how often they just so happen to be women or foreigners! Just gross shit all around.

It's very clear when you work closely with founders that they see their businesses as a moral good in the world, and as a result, they have a lot of entitlement about their relationship with labor. They view laws about it as inconveniences on their moral imperative to grow the startup.

‘The tide has turned’: why parents are suing US social media firms after their children’s death ( www.theguardian.com )

While social media firms have long faced scrutiny from Congress and civil rights organizations over their impact on young users, the new wave of lawsuits underscores how parents are increasingly leading the charge, said Jim Steyer, an attorney and founder of Common Sense media, a non-profit that advocates for children’s online...

theluddite

Whenever one of these stories comes up, there's always a lot of discussion about whether these suits are reasonable or fair or whether it's really legally the companies' fault and so on. If that's your inclination, I propose that you consider it from the other side: big companies use every tool in their arsenal to get what they want, regardless of whether it's right or fair or good. If we want to take them on, we have to do the same. We call it a justice system, but in reality it's just a fight over who gets to wield the state's monopoly on violence to coerce other people into doing what they want, and any notions of justice or fairness are window dressing. That's how power actually works. It doesn't care about good faith vs. bad faith arguments, and we can't limit ourselves to using our institutions only within their veneer of rule of law when taking on powerful, exclusively self-interested, and completely antisocial institutions with no such scruples.

theluddite

I do software consulting for a living. A lot of my practice is small organizations hiring me because their entire tech stack is a bunch of shortcuts taped together into one giant teetering monument to moving as fast as possible, and they managed to do all of that while still having to write every line of code.

In 3-4 years, I’m going to be hearing from clients about how they hired an undergrad who was really into AI to do the core of their codebase and everyone is afraid to even log into the server because the slightest breeze might collapse the entire thing.

LLM coding is going to be like every other industrial automation process in our society. We can now make a shittier thing way faster, without thinking of the consequences.

'Scripture is very clear': New House Speaker tells Congress God has 'ordained' them ( www.alternet.org )

Republican Speaker of the House Mike Johnson, in his first remarks after being elected Wednesday afternoon, told Members of Congress that “Scripture” and “the Bible” are clear that they have been “ordained” by God.

How the “Surveillance AI Pipeline” Literally Objectifies Human Beings ( www.404media.co )

The vast majority of computer vision research leads to technology that surveils human beings, a new preprint study that analyzed more than 20,000 computer vision papers and 11,000 patents spanning three decades has found. Crucially, the study found that computer vision papers often refer to human beings as “objects,” a...

theluddite

I am totally in favor of criticizing researchers for doing science that actually serves corporate interests. I wrote a whole thing doing that just last week. I actually fully agree with the main point made by the researchers here, that people in fields like machine vision are often unwilling to grapple with the real-world impacts of their work, but I think complaining that they use the word “object” for humans is distracting, and a bit of a misfire. “Object detection” is just the term of art for recognizing anything, humans included, and of course humans are the object that interests us most. It’s a bit like complaining that I objectified humans by calling them a “thing” when I included humans in “anything” in my previous sentence.

Again, I fully agree with much of their main thesis. This is a really important point:

As co-author Luca Soldaini said on a call with 404 Media, even in the seemingly benign context of computer vision enabled cameras on self-driving cars, which are ostensibly there to detect and prevent collision with human beings, computer vision is often eventually used for surveillance.

“The way I see it is that even benign applications like that, because data that involves humans is collected by an automatic car, even if you’re doing this for object detection, you’re gonna have images of humans, of pedestrians, or people inside the car—in practice collecting data from folks without their consent.” Soldaini said.

Soldaini also pointed to instances when this data was eventually used for surveillance, like police requesting self-driving car footage for video evidence.

And I do agree that sometimes, it’s wise to update our language to be more respectful, but I’m not convinced that in this instance it’s the smoking gun they’re portraying it as. The structures that make this technology evil here are very well understood, and they matter much more than the fairly banal language we’re using to describe the tech.

theluddite OP

I post our stuff on lemmy because I’m an active user of lemmy and I like it here. I find posting here is more likely to lead to real discussions, as opposed to say Twitter, which sucks, but is where I’d be if I was blasting self-promotion. It’s not like lemmy communities drive major traffic.

Isn’t that exactly what lemmy is for? It’s what I used to love about Reddit 10 years ago, or Stumble Upon, or Digg, or any of the even older internet aggregators and forums: People would put their small, independent stuff on it. It’s what got me into the internet. I used to go on forums and aggregators to read interesting stuff, or see cool projects, or find weird webcomics, or play strange niche web games, or be traumatized by fucked up memes. Now the entire internet is just “5 big websites, each consisting of pics from the other 4” or whatever the quip is, and it’s fucking boring.

So yes, I and a few others are theluddite.org. It’s an independent site written by leftists working in tech and academia, mostly aimed at other people in tech and academia, but also for everyone. It’s not like I’m hiding it; it literally says so in my bio. We are not professional opinion-havers, unlike “mainstream” sources; I personally write code for a living every day, which is something that surprisingly few tech commentators have ever done. That makes it possible for me to write about major topics discussed in the media, like Google’s ad monopoly, in a firsthand way that doesn’t really exist elsewhere, even on topics as well trodden as that one.

And yes, we post our stuff on the fediverse, because the fediverse rules. It is how we think the internet should be. We are also self-hosted, publish an RSS feed, don’t run any ads or tracking (and often write about how bad those things are for the internet) because that’s also how we think the internet is supposed to work.

California bill to have human drivers ride in autonomous trucks is vetoed by governor ( apnews.com )

SACRAMENTO, Calif. (AP) — California Gov. Gavin Newsom has vetoed a bill to require human drivers on board self-driving trucks, a measure that union leaders and truck drivers said would save hundreds of thousands of jobs in the state.

theluddite

There are two issues. First, self-driving cars just aren’t very good (yet?). Second, it will make millions of people’s jobs obsolete, and that should be a good thing, but it’s a bad thing, because we’ve structured our society such that it’s a bad thing if you lose your job. It’d be cool as hell if it were a good thing for the people who don’t have to work anymore, and we should structure our society that way instead.

theluddite

I’m not sure if that article is just bad or playing some sort of 4D chess such that it sounds AI written to prove its point.

Either way, for a dive into a closely related topic, one obviously written by an actual human, I humbly submit my own case study on how Google’s ad monopoly is directly responsible for ruining the internet. I posted it here a week ago or so, but here it is in case you missed it and this post left you wanting.

theluddite

Yeah but I can tell you if something is a crosswalk

theluddite OP

Yeah, absolutely. The Luddite had a guest write in and suggest that if anxiety is the self turned inwards, the internet is going to be full of increasingly anxious LLMs in a few years. I really liked that way of putting it.

theluddite

Past automation technologies had the most effect on low-skilled workers. But with generative AI, the more educated and highly skilled workers who previously were immune to automation are vulnerable. According to the International Labor Organization, there are between 644 and 997 million knowledge workers globally, between 20% and 30% of total global employment. In the US, the knowledge-worker class is estimated to be nearly 100 million workers, one out of three Americans. A broad spectrum of occupations — marketing and sales, software engineering, research and development, accounting, financial advising, and writing, to name a few — is at risk of being automated away or evolving.

I’d take that bet, even at outrageous odds. I’ve now won over 700 dollars betting against self-driving cars with people in the tech world, and another couple hundred against crypto. Some of that even came from my former boss. I think I’ve won over a grand betting against tech hype in the last 4-5 years.

Business Insider, in the unlikely event that you read this, DM me. Let’s make a bet.

theluddite

We’d have to hammer out the exact numbers, but I’d bet against that quote I pasted claiming that marketing, sales, software, and R&D are going to be automated away.

theluddite

Yeah, I think this is more likely. Our jobs will just become increasingly joyless and miserable.

theluddite

The real problem with LLM coding, in my opinion, is something much more fundamental than whether it can code correctly or not. One of the biggest problems coding faces right now is code bloat. In my 15 years writing code, I write so much less of it now than when I started, and spend so much more time bolting together existing libraries, dealing with CI/CD bullshit, and all the other hair that software projects have started to grow.

The amount of code is exploding. Nowadays, every website uses ReactJS. Every single tiny website loads god knows how many libraries. Just the other day, I forked and built an open source project that had a simple web front end (a list view, some forms – basic shit), and after building it, npm informed me that it had over a dozen critical vulnerabilities, and dozens more of high severity. I think the total was something like 70?

All code now has to be written at least once. With ChatGPT, it doesn’t even need to be written once! We can generate arbitrary amounts of code all the time whenever we want! We’re going to have so much fucking code, and we have absolutely no idea how to deal with that.

theluddite

Yes I agree. I meant the fundamental problem with the idea of LLMs doing more and more of our code, even if they get quite good.

theluddite

The US 2020 Facebook and Instagram Election Study is a joint collaboration between a group of independent external academics from several institutions and Meta, the parent company of Facebook and Instagram

🫠

They investigated themselves and found no wrongdoing.

theluddite

I developed something like this, so maybe I can answer. It was a browser extension that let people bypass the old twitter login wall. It had many thousands of users until Twitter started walling themselves off this summer.

I was inspired to make it in the most American way possible – someone I know was in a school that got locked down due to a shooter threat (ended up being a false alarm). The police and news agencies were live-tweeting the updates, and their partner didn’t have a twitter and couldn’t read them without making a fucking account that very moment, wondering if their partner was even alive. I directed them to nitter, but they’re not very into tech, and replacing the URL was just intimidating for them at the moment.

I found the whole experience so grotesque that that very evening I made an extension that lets you press a button to dismiss the login modal and keep scrolling (just a few css changes, or about 30 lines of code).
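If you're curious, the whole trick was roughly this shape (a sketch with made-up selectors; the real ones changed every time Twitter shipped an update):

```javascript
// Content-script sketch: hide the login modal and restore scrolling.
// The selectors are illustrative placeholders, not Twitter's actual
// class names.
const css = `
  .login-modal-overlay { display: none !important; }
  html, body { overflow: auto !important; }
`;

// Build a <style> tag and attach it to the page. Taking the document
// as a parameter keeps the function testable outside a browser.
function injectOverrides(doc, cssText) {
  const style = doc.createElement("style");
  style.textContent = cssText;
  doc.head.appendChild(style);
  return style;
}
```

Wiring that up to run on a button press was basically the entire "extension" part.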

My two cents: Though I don’t personally use it, the fact is Twitter does have a lot of valuable stuff on it. Same goes for other large platforms – google results are now worthless without adding “reddit” to the search, for example. These companies are bad, but there’s so, so many things to care about, and people can’t care about all of them. Tactically, that makes consumer-driven change very difficult.

I’m not sure what kind of organizing we need to start doing to take back the internet from these big platforms, but whatever it is, I think it has to reckon with our past mistake of giving a few companies ownership of most of the internet, which means it has to go beyond just not using them anymore. These few platforms currently have the last 10 years of the internet walled off, and they plan on charging rent on that forever. That’s shitty. We should try to stop them from doing that, if we can.

theluddite (edited)

I get the point they’re making, and I agree with most of the piece, but I’m not sure I’d frame it as Musk’s “mistakes,” because he literally won the game. He became the richest person on earth. By our society’s standards, that’s like the very definition of success.

Our economy is like quidditch. There are all these rules for complicated gameplay, but it doesn’t actually matter, because catching the snitch is the entire game. Musk is very, very bad at all the parts of the economy except for being a charlatan and a liar, which is capitalism’s version of the seeker. Somehow, he’s very good at that, and so he wins, even though he has literally no idea how to do anything else.

edit: fix typo!

edit2: since this struck a chord, here’s my theory of Elon Musk. Tl;dr: I think his success comes from offering magical technical solutions to our political and social problems, allowing us to continue living an untenable status quo.

theluddite

Haha thank you. Tbh I’m not much of a Harry Potter fan, so I’m not really sure where that came from.

theluddite

I literally have no idea what the rules are so any further meaning is purely a happy coincidence for which I can’t take credit.

theluddite

Well, that was like 2 paragraphs and didn’t really make much of a point.

If that left you wanting, I wrote a much longer thing about this exact topic: The Crucible of Mediocrity: Lessons from the Physical World for the AI-Generated Internet

theluddite

Humans are capable of assessing and addressing the obstruction; meanwhile these cars are permanently disabled without outside assistance.

theluddite

It will never cease to amaze me how obsessed with and addicted to Twitter these journalists are. When Elmo trolled NPR by labeling them “state-affiliated media,” NPR itself ran indignant and breathless coverage of it for days – they just couldn’t help themselves.

theluddite (edited)

Extremely based.

Waymo was less enthusiastic about the practice. A spokesperson said that the cone protest reflects a lack of understanding of how autonomous vehicles work and is “vandalism and encourages unsafe and disrespectful behavior on our roadways.” Waymo says it will call the police on anyone caught interfering with its fleet of robotaxis.

You can tell the cops work for capital because Uber has made a fortune operating illegal taxis throughout the entire country and cops have never done a goddamn thing about it, but put one fucking cone on a car and Waymo feels confident the cops would use violence to stop it from happening again.

If Waymo gets its way, the roads are just going to be full of buggy, barely-functioning autonomous cars, and every time they hit a pedestrian, the cops will arrest the pedestrian for being “disrespectful.”

edit: the more I think about it, the funnier it is. Waymo is supposedly “testing” their technology. This is a fundamental misunderstanding of how testing works. If your car can’t handle real-world conditions, you don’t get to call the cops on the real-world conditions. Putting a cone on the hood of the car is actually a great example of the kind of weird, one-off situation that happens to drivers all the time, often called the “pogo stick” problem. A serious engineering organization would recognize that, appreciate how well humans respond to anomalies like this, and take it for the humbling experience it should be.
