Comment Re:AI Hype needs money (Score 1) 104

The only question here is: What are they selling? Increased stock value?

From what I have observed in recent years, C-level people believe their company will profit greatly from LLM-based services they pay other companies for. Only when their stock price drops do they find out that any potential upside of such services applies equally to every competitor, to the point where mundane SaaS services can be vibe-coded by anyone willing to accept the same lower quality standards they themselves introduced by shipping LLM-generated code.

But back to the question of what they are selling: C-level people are selling themselves, as in "Look how modern we are!", as a pitch to the next board of directors, without understanding the real consequences of what they are doing.

Comment Re:We're not restricting the technologies... (Score 4, Insightful) 81

Also, Google is about the last company I would think of as offering any outstanding technology that could not be replaced by something built domestically. It's not like their "annoy people with ads" business adds any substantial value for people. Nothing of value would be lost if their services became unavailable in the EU.

Comment Only a matter of time until Ads influence results (Score 5, Insightful) 70

We have seen this play out time and again: First there are just "random" ads shown in some corner. Then the ads become annoying and distracting. Then they become "targeted" based on the conversation topics (and thus data is leaked to the advertisers). Then the advertisers demand "not being displayed around this or that topic", so the service obeys and censors "ad-unfriendly" topics in conversations. Next, advertisers pay a premium for the content to present them in a positive light, then they demand that the content specifically advertises them. And finally the website becomes "not profitable enough" anymore and is sold off, along with all the personal data and stored conversations, to the highest bidder, who then sells N copies of the personal data to every data broker while turning the service into a complete shit show with outrageous pricing.

And Anthropic may say "we'll never do that!" today, but they are only one CEO change and one investor call away from doing it, nevertheless.

Comment Reading of fiction does not tell much (Score 1, Troll) 73

People who read (potentially lots of) fictional books prove that they are able to read, but that does not tell us (1) whether they have good reading comprehension and could learn from non-fictional texts, nor (2) whether they are able to communicate actively. And it is (1) and (2) that I am most concerned about in young colleagues, with a recent addition of (3): have they outsourced their thinking to LLMs? I really would not mind working with people who have never read a single book, as long as they are able and willing to read non-fiction texts (on whatever medium), comprehend them, and apply and communicate what they learned from them.

A more concerning symptom than "not reading books", from my observation, is the immense popularity of few-seconds-short videos. Those videos are entirely incapable of conveying any meaningful amount of information, and my suspicion is that people consuming such videos en masse develop severe attention span deficits.

Comment And by "most economically compelling" he means... (Score 4, Interesting) 245

... "the most compelling location where our new AI overlords are out of reach of government regulation, law enforcement, or the angry mob with torches". I wonder when the first groups will consider deliberately triggering the "Kessler syndrome", just to disable such "data centers in space".

Comment Re:I feel like that's too big (Score 3, Funny) 62

You really should do the math. 11 days, which is still a long time. The purpose of this type of storage is for use cases where the touch rate is very low. For data that is updated/accessed once a year in very big chunks, these work.
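A figure on the order of 11 days is what a full sequential scan works out to; a back-of-the-envelope sketch with assumed numbers (1 PB capacity read at 1 GB/s — neither the actual capacity nor throughput is stated here):

```python
# Back-of-the-envelope full-disk scan time (assumed numbers, not from the story).
capacity_bytes = 1e15          # assume 1 PB of data on the disk
throughput_bytes_per_s = 1e9   # assume 1 GB/s sustained sequential read
seconds = capacity_bytes / throughput_bytes_per_s
days = seconds / 86400         # 86400 seconds per day
print(round(days, 1))          # prints 11.6
```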

I think he meant the time it would take to scan one such disk full of justice department evidence for cases of corruption.

Comment Re:Why Is Anthropic Crashing The Market (Score 2) 51

They certainly do not hate money; they want it funneled in their direction. And since the stock market can only make you gain money that another market participant loses, they are certainly fine with other companies' market capitalization falling.

And honestly, their value proposition is kind of enticing: "Here we offer you 'plug-ins' to replace your finance/legal/developer/etc. personnel with LLM-based bots. Of course we have a cover-your-ass clause written into our offering: you need to have all the output of those bots checked by trained professionals. And of course we all know that will not happen; at best you will keep the cheapest possible employee around, tasked with checking all the bot output, and when the shit hits the fan, because our bot got caught producing nonsense, you can fire and replace that employee as your scapegoat for not correcting the results."

Whether this offering will be worth the increasing amounts of money it will cost remains to be seen, but given that corporations like to capitalize their "development costs" over 5 years, they can reduce those costs by only 20% per year, even if they fire everybody right now. And no CEO wants to miss the opportunity to replace all those pesky white-collar workers (except the few scapegoats) with bots.
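The 20%-per-year point is just straight-line depreciation; a minimal sketch with arbitrary numbers (the 100.0 is illustrative, only the 5-year schedule comes from the comment):

```python
# Straight-line write-off of a capitalized development cost over 5 years.
capitalized_cost = 100.0                    # arbitrary units, booked up front
years = 5
annual_writeoff = capitalized_cost / years  # 20% of the original per year

remaining = [capitalized_cost - annual_writeoff * y for y in range(years + 1)]
# Even if all staff are let go immediately, the books still carry the
# remaining balance, which shrinks by only 20% of the original each year.
print(remaining)  # prints [100.0, 80.0, 60.0, 40.0, 20.0, 0.0]
```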

Comment If you read Anthropic's product description... (Score 1) 55

... you will always find the cover-your-ass sentence "Results need to be checked by a trained professional". And of course everybody knows this is exactly what will not happen, as it would contradict the purpose of those bots: replacing the cost of trained professionals. So the inevitable result of using those "expert" plug-ins will be lower quality results at an (at least initially) lower price. And maybe even some investors realize what a bleak future this foretells.

Comment Just training our AI models (Score 3, Funny) 186

They got this completely wrong: people just need to stream all those channels to train their local AI models at home. And as we all know from the wealthiest corporations in the US, grabbing whatever copyrighted material you can get hold of for free is all good, as long as you use it for training AI models.

Comment Re:Very good for novices, but reinforces bad habit (Score 1) 53

Now...what we do not know is the long term playout of this. There was a time, I'm sure because I was there as I suspect you were as well, where higher level compilers were considered suspect. I've had to review the assembly they spit out to find floating point bugs I *knew* weren't in my code. But now? In 2026?

Mature compilers have become reasonably dependable with regard to correctness (which LLMs are far from), and the performance of compiled code is also usually acceptable. But people who have never read a compiler's assembler output still tend to program in ways that force compilers to generate machine code worse than what could be achieved if they were aware of what the underlying CPU can do efficiently.

But more importantly: people did not stop at using compilers. They added libraries, then "frameworks", then frameworks-based-on-frameworks, up to a situation where today a lot of software is terribly inefficient simply due to its overuse of multiple abstraction layers. Reliability has also suffered, because programmers no longer feel responsible when their software crashes, allegedly due to bugs in the many frameworks or third-party dependencies.

And this latter scenario is more likely what will happen with LLMs: they are already used a lot before becoming mature and reliable, they are already generating less-than-efficient code (I recently watched Claude initialize a huge hash table inside a function, even though its content was constant between calls), and people have already started piling up additional tool layers (like "Ralph Wiggum coding"), so quality is going to deteriorate further.
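The hash-table anti-pattern mentioned above looks roughly like this (an illustrative sketch, not Claude's actual output; the `lookup_*` names are made up):

```python
# Anti-pattern: a large constant dict is rebuilt on every single call.
def lookup_slow(key):
    table = {i: i * i for i in range(10_000)}  # reconstructed each call
    return table.get(key)

# Fix: hoist the constant table to module level so it is built exactly once.
_TABLE = {i: i * i for i in range(10_000)}

def lookup_fast(key):
    return _TABLE.get(key)

# Both return the same results; only the cost per call differs.
assert lookup_slow(12) == lookup_fast(12) == 144
```

The behavior is identical, but the slow variant pays the full construction cost on every call, which is exactly the kind of inefficiency a reader of the generated code would catch immediately.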

And even if we are optimistic enough to think that "AI coding tools" will become more dependable and efficient, the brain atrophy observable in people, even ones with a prior history of intellectual work, is scarily fast. People tend to "forget" how to think on their own quickly when not exercising it daily.

Comment Re:Very good for novices, but reinforces bad habit (Score 5, Insightful) 53

AI is very good for novices, people who don't know something well.

There is already plenty of evidence that novices using AI remain novices rather than developing advanced skills. So yes, as a "novice" you can get to some result quicker by using AI, but the result will be that of a "fool with a tool", and your next piece of work won't be any better, because you didn't learn anything.

Comment Leading to deskilling and unfixed bugs (Score 4, Informative) 53

It's not like the makers of LLM-based coding tools don't know what the consequences will be; to quote https://www.anthropic.com/rese...

On average, participants in the AI group finished about two minutes faster, although the difference was not statistically significant. There was, however, a significant difference in test scores: the AI group averaged 50% on the quiz, compared to 67% in the hand-coding group—or the equivalent of nearly two letter grades (Cohen's d=0.738, p=0.01). The largest gap in scores between the two groups was on debugging questions, suggesting that the ability to understand when code is incorrect and why it fails may be a particular area of concern if AI impedes coding development. [...] Given time constraints and organizational pressures, junior developers or other professionals may rely on AI to complete tasks as fast as possible at the cost of skill development—and notably the ability to debug issues when something goes wrong.

This is exactly what will happen: more AI slop produced faster, with more bugs that never get fixed, and skilled IT personnel replaced by unskilled prompt-monkeys, because cheaper.
