
Comment Re:AI detectors remain garbage. (Score 1) 18

They didn't even use a proper image generator - that's clearly the old, crappy ChatGPT built-in image generator. It's not like it's a useful figure with a few errors - the entire thing is sheer nonsense; the more you look at it, the worse it gets. And this is Figure 1 in a *paper in Nature*. Just insane.

This problem will decrease with time (as an example, I made two infographics with Gemini 3 just by pasting in an entire, very long Bluesky thread and asking for infographics, with only a few minor bits of touchup needed). Gemini successfully condensed a huge amount of information into infographics, and the only sorts of "errors" were things like: I didn't like the title, a character or two was slightly misshapen, etc. It's at the point where you could paste in entire papers and datasets and get actually useful graphics out, in a nearly-finished or even completely-finished state. But no matter how good the models get, you'll always *have* to look at what you generate to see whether it's (A) right, and (B) actually what you wanted.

Comment Re:Also, why can't ChatGPT control a robot? (Score 1) 68

There has been plenty of progress in using AI to control robots; they use robotics-specific AIs for that, of course.

The fact that ChatGPT (or even LLMs in general) isn't particularly useful for robots shouldn't be a surprise, since robots (other than maybe C3PO) are about physical manipulation of objects, not about language generation.

Comment AI detectors remain garbage. (Score 5, Interesting) 18

At one point last week I pasted the first ~300 words of the King James Bible into an AI detector. It told me that over half of it was AI-generated.

And seriously, considering some of the god-awful stuff passing peer review in "respectable" journals these days - like a featured article in AIP Advances claiming that God is a scalar field, or a paper in Nature whose Figure 1 is an unusually crappy AI image talking about "Runctitiononal Features", "Medical Fymblal", "1 Tol Line storee", etc. - at the very least, getting a second opinion from an AI before approving a paper would be wise.

Comment Re: CEO sees roadblock to more profit and says let (Score 2) 59

Its only real flaw with the latest tech is the pacing, but it doesn't take much for a human to correct that. It's not at all what I'd call slop.

It's objectively slop, and the best models sound dead and lifeless. What *I* want is for voice artists to still have the careers they've been slaving their asses off for. You're not going to get the traumatized performance of Astarion collapsing with grief after killing the vampire who enslaved him without Neil Newbon drawing on his own experience of trauma, or even the narrator's snarky delivery without a voice actor who's spent her life playing D&D with absolutely diabolical nerds informing her subtle intonations and knowing delivery. All you hear in AI performances is... nothing. No acting, no emotions, just a dead plagiarism machine sewing together stolen performances.

It is the very definition of slop.

Comment Re:I thought we were saving the planet? (Score 1) 181

FYI, their statement about Iceland is wrong. BEV sales were:

2019: 1000
2020: 2723
2021: 3777
2022: 5850
2023: 9260
2024 (first year of the "kílómetragjald" and the loss of VAT-free purchases): 2913
2025: 5195

Does it look to anyone here like the changes had no impact? It's a simple equation: if you increase the cost advantage of EVs, you shift more people from ICEs to EVs, and if you decrease it, the opposite happens. If you add a new mileage tax but don't add a new tax to ICE vehicles, then you're reducing the cost advantage. And Iceland's mileage tax was quite harsh.
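A quick sanity check on those figures (a Python sketch; the numbers are just the ones listed above):

    # Year-over-year change in Icelandic BEV sales, from the figures above.
    sales = {2019: 1000, 2020: 2723, 2021: 3777, 2022: 5850,
             2023: 9260, 2024: 2913, 2025: 5195}
    years = sorted(sales)
    for prev, cur in zip(years, years[1:]):
        change = (sales[cur] - sales[prev]) / sales[prev] * 100
        print(f"{cur}: {sales[cur]:5d} ({change:+.0f}% vs {prev})")
    # 2024 comes out at roughly -69% - the year the kílómetragjald took effect.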

The whole structure of it is nonsensical (they're working on improving it...), and the implementation was so damned buggy that it has, among other things, turned the alerts on my inbox for government documents into spam: they keep sending "kílómetragjald" notices, and you can't tell from the email (without taking the time to log in) whether it's kílómetragjald spam or something that actually matters. What I mean by the structure is that it's claimed to be about road maintenance, yet passenger cars on non-studded tyres do negligible road wear. Tax vehicles by axle weight to the fourth power times mileage, make them pay for a sticker for the months they want to use studded tyres, and charge flat annual fees (scaled by vehicle cost) for non-maintenance costs. Otherwise, you're inserting severe distortion into the market - transferring money from those who aren't destroying the roads to subsidize those who are, and discouraging the people who aren't destroying the roads from driving to places they want to go (quality of life, economic stimulus, etc.).
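A minimal sketch of that alternative scheme (Python; the rates and vehicle figures are made-up placeholders - the only real content is the fourth-power relationship between axle load and road wear):

    # Hypothetical road-wear-based vehicle tax, per the proposal above.
    # Road damage scales roughly with (axle load)^4, so the maintenance
    # component is axle_weight^4 * km driven.
    WEAR_RATE = 1e-13       # placeholder: ISK per (kg^4 * km)
    STUD_STICKER = 5000     # placeholder: ISK per month on studded tyres

    def annual_road_tax(axle_weight_kg, km_driven, studded_months, vehicle_value_isk):
        wear = WEAR_RATE * axle_weight_kg**4 * km_driven
        studs = STUD_STICKER * studded_months
        flat = 0.002 * vehicle_value_isk    # placeholder flat fee for non-maintenance costs
        return wear + studs + flat

    # A passenger car at ~1,000 kg/axle does 10^4 = 10,000x less wear per km
    # than a truck at 10,000 kg/axle, so it pays almost nothing in the wear term.
    print(annual_road_tax(1_000, 15_000, 0, 8_000_000))
    print(annual_road_tax(10_000, 100_000, 4, 30_000_000))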

Comment Re:PR article (Score 2) 278

Sure do :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are.

Comment Re:PR article (Score 1) 278

The congenitally blind have never seen colours. Yet in practice, they're nearly as good at answering questions about colours, and reasoning about them, as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about "a guitar string" shortly after my first experience of hearing one, when I don't yet have a good associative memory of the sound, count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten off-topic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there's certainly much in our life experiences that we write little (if anything) about online - so someone who learned purely from the internet might have a weaker understanding of those things - by and large, our life experiences and the thought traces behind them very much are online, from billions and billions of people, over decades.

Comment Re:PR article (Score 1, Insightful) 278

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state-transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state-transition probabilities for the first half of the first sentence of A Tale of Two Cities.
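A back-of-the-envelope check of that claim (a Python sketch; the ~50,000-token vocabulary and the physical constants are my assumptions, not the poster's):

    import math

    V = 50_000      # assumed vocabulary size
    context = 60    # ~half of the ~120-word opening sentence of A Tale of Two Cities

    # A Markov model must index every possible context of that length:
    contexts = context * math.log10(V)
    print(f"~10^{contexts:.0f} contexts, each needing its own transition distribution")

    # Capacity of the universe used as a lookup table:
    planck_volumes = math.log10(4e80 / 4.2e-105)    # observable universe / Planck volume, ~10^185
    planck_times = math.log10(4.4e17 / 5.4e-44)     # age of universe in Planck times, ~10^61
    print(f"~10^{planck_volumes + planck_times:.0f} storable values")   # ~10^246 < 10^282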

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then, only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
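A minimal sketch of that final rounding step (Python/NumPy; the logits are made up, and temperature is the usual knob for the noise being described):

    import numpy as np

    # The model scores every token in the vocabulary; softmax turns the scores
    # into probabilities, and sampling (rather than argmax) adds the "noise".
    logits = np.array([4.1, 3.9, 3.7, 0.2])    # made-up scores for 4 nearby tokens
    temperature = 0.8                           # <1 sharpens, >1 flattens the distribution

    probs = np.exp(logits / temperature)
    probs /= probs.sum()

    rng = np.random.default_rng()
    token = rng.choice(len(logits), p=probs)    # argmax would always pick token 0
    print(probs.round(3), "-> sampled token", token)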

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it correctly. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out from that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process - it's a lower-dimensional space than the reasoning itself (nothing controversial there with regards to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic, only the surface logic - which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately...." and it is to continue with a percentage, "surface logic" isn't enough for the model to perform well at that task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks, the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley by contrast argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.

Comment Re:Does it mean... (Score 2) 70

If it's true, then... well, yeah.

And we'll almost certainly get a better name for it than "phrase which confuses non-scientists into thinking scientists have an unprovable theory, when scientists literally called it that to indicate that, actually, they really don't have a theory yet" - or, more concisely, "dark matter".

Comment Re:PR article (Score 2) 278

Where, pray tell, do you think humans get the vast majority of their "knowledge" in 2025?

I had a person yelling at me online this morning because I had the gall to point out that the only way vaccines could cause autism would be via time travel (you're born with autism; clearly, something that happens to you after you're born can't cause something that happened before you were born without a time machine of some sort), and it struck me that the internet IS how a lot of people are "learning", and it's making people incredibly stupid.

Comment Re:Could the AI bubble do something good? (Score 1) 54

I had a realisation a while back that it wasn't AI research/dev per se that's driving this; it's Nvidia that's driving it.

DeepSeek proved that you don't *need* the "hyperscale" datacenters to develop good-enough AI. (There are a lot of conspiracy theories about how DeepSeek must have had secret, spooky, mega-sized datacenters doing all this, but they published their methods and training sets, and people have reproduced the work; it all checks out - you really can do this shit on the cheap.)

And that's bad news for Nvidia, which needs AI to be expensive to justify their capital outlay, sales projections, and irrational market valuation. Hence all the circular investment: pump money into companies with the proviso that they buy a whole bunch of compute that honestly wouldn't be needed if AI companies actually started thinking about efficiency instead of scale.

So we're gonna burn down the Amazon for NVDA share prices. Yay, 2025.

Comment Re:AI or A1? (Score 1) 101

Who's going to do the booting? Certainly not "the will of the people". If the constitution can be freely ignored, and the Army proves to be loyal, then that can be freely ignored too.

Well, it ain't over till the fat lady sings. You'll know either way late next year, I suspect. Then you get to find out if that Second Amendment is worth shit.

The thing is, though, historically it's not the senior brass that coups governments; it's junior officers. If the senior brass want to engage in a bunch of democracy suppression and the junior officers go "Wait up, I didn't sign up to shoot my neighbors; I signed up to uphold the constitution", then the government will discover the military intervention they expected isn't the one they get.

And I really hope that isn't why the White House is shitting anger-bricks over the senators reminding the troops that they are forbidden under the UCMJ to follow illegal orders.
