'AI Can't Think' (theverge.com) 289
In an essay published in The Verge, Benjamin Riley argues that today's AI boom is built on a fundamental misunderstanding: language modeling is not the same as intelligence. "The problem is that according to current neuroscience, human thinking is largely independent of human language -- and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own," writes Riley. A user shares: The article goes on to point out that we use language to communicate; that we use it to create metaphors to describe our reasoning; that people who have lost their language ability can still show reasoning; and that human beings create knowledge when they become dissatisfied with the current metaphor. Einstein's theory of relativity was not based on scientific research; he developed it as a thought experiment because he was dissatisfied with the existing metaphor. It quotes someone who said, "common sense is a collection of dead metaphors," and notes that AI, at best, can rearrange those dead metaphors in interesting ways. But it will never be dissatisfied with the data it has or with an existing metaphor.
A different critique (PDF) has pointed out that even as a language model, AI is flawed by its reliance on the internet. The languages used on the internet are unrepresentative of the languages of the world, and other languages contain unique descriptions and metaphors that are not found on the internet. My metaphor for this is the words for kinds of snow in Inuit languages, which describe qualities found nowhere in European languages. If those metaphors aren't found on the internet, AI will never be able to create them.
This does not mean that AI isn't useful. But it is not remotely human intelligence. That is just a poor metaphor. We need a better one. Benjamin Riley is the founder of Cognitive Resonance, a new venture to improve understanding of human cognition and generative AI.
PR article (Score:3, Informative)
This is a PR "thought leadership" BS article by Benjamin Riley of Cognitive Resonance, who "provides direct consulting support to organizations to improve understanding of how generative AI works."
This doesn't mean they're wrong but it's probably nothing terribly original (there is a reason why it's not on openreview.net as a submission into one of the relevant AI conferences).
Re:PR article (Score:5, Insightful)
And yet, he is correct. AI is based on scraping the internet. Even if it were capable of actual intelligence, anything based on the internet is based mostly on lies, misunderstanding and willful ignorance.
Re: (Score:2)
For people, the internet is one source of information. For AI training, it is the only source.
If you don't get why that difference matters, go ask Mummy for a cookie and some milk; it's past your bedtime and you have school tomorrow.
Re:PR article (Score:4)
Sure do [pnas.org] :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are.
Re:PR article (Score:4, Insightful)
A person knows what "hot" means because they have touched a hot surface at least once in their lifetime and felt the pain. A person knows how a speed bump affects the car ride, and how lemon tastes. A person knows which shape fits into which hole because, as a child, they played the game.
People learn all the time by formulating hypotheses about the world and then experiencing how they work out.
AI totally misses this feedback. Or as my father used to say: AI talks about color like a blind person.
Re: (Score:3)
I had a person yelling at me online this morning because I had the gall to point out that the only way vaccines could cause autism would be through time travel (you're born with autism; clearly something that happens to you after you are born can't cause something that happened to you before you were born without a time machine of some sort), and it struck me that the internet actually IS how a lot of people are "learning", and its m
Re: (Score:3)
What in hell GPT-generated word salad did I just plow through?
> It is a result of the thought processes that create it. To create language
LLMs are not thinking. Nor are they creating language.
> You cannot build a LLM from a Markov model
Really? 'Cause I'm looking at research papers on arXiv right now that examine the equivalences in their methodologies: Zekri, Odonnat, Benechehab, Bleistein, Boullé and Redko, last revised Feb 2025.
> If you could store one state transition probability per unit of
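For anyone who hasn't seen the comparison worked out, here is a minimal sketch (in Python, with a made-up toy corpus) of the bigram Markov-chain text model being contrasted with an LLM; the Zekri et al. paper deals with far more general constructions, so treat this only as an illustration of what "state transition probabilities" mean in this context:

import random
from collections import defaultdict, Counter

# Toy corpus, made up purely for illustration.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count bigram transitions: how often each token follows each other token.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def sample_next(token):
    # Sample the next token in proportion to the observed transition counts.
    counts = transitions[token]
    if not counts:  # dead end: this token was never seen with a successor
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

token = "the"
output = [token]
for _ in range(8):  # generate eight more tokens
    token = sample_next(token)
    output.append(token)
print(" ".join(output))

Roughly, the paper's framing is that an LLM with a finite vocabulary and context window can be analyzed as a (gigantic) Markov chain over token sequences; whether that equivalence is practically meaningful is what the storage question above is getting at.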
Re:PR article (Score:5, Insightful)
The article rightly points out that the marketing of LLMs has the tech on a path to "AGI" and then "super" intelligence. The sales pitch for further investment throughout this year has been built on these promises.
LLMs are doomed to fail at ever being intelligent at all, yet the investments are predicated on them becoming fully intelligent. That's a bubble! And a big one!
I'm probably speaking for a lot of people (Score:3)
Really? (Score:3, Funny)
Posted by BeauHD on Wednesday November 26, 2025 @11:40AM from the language-doesn't-equal-intelligence dept
Don't you mean "from the well duh! dept"?
Re: Really? (Score:2)
Half of the human population doesn't think either, they just echo their favorite chamber.
Re: (Score:2)
It's so frustrating.
Re: (Score:2)
Half of the human population doesn't think either, they just echo their favorite chamber.
Half of the human population doesn't think either, they just echo their favorite party.
TFTFY
Re: Really? (Score:4, Insightful)
Funny, but the entire human population spends most of their time not "thinking."
From coordinating complex movements like walking, through routines like driving to work, to, yes, knee-jerk reactions to most things, most of what our brains do is subconscious. Only the weird justifies the effort of actual executive control. Whatever it is that we call "conscious thought" is even rarer.
Re:Really? (Score:4, Interesting)
There are a whole lot of people, some of whom frequently comment on Slashdot, who apparently think AI is actually becoming intelligent, and will soon replace all human thinking, and especially, jobs that require thinking. You don't have to read many posts to run across these guys!
Re:Really? (Score:4, Informative)
Binary systems built on silicon are fundamentally different from human biology. A giant computer system that uses a megawatt of power to assemble a coherent sentence, which the computer does not understand, is nothing like a human. You claim we don't know how we work, and at the same time you claim to have reinvented that which you don't understand. Which is it: have you got the brain figured out and built a software version, or do you have some other shit akin to a magic trick (one that, like a magic trick, fails sometimes)?
You being impressed with parlor tricks does not make software intelligent.
Re:Really? (Score:4, Informative)
This should be pretty simple to explain using the hangman example. Ask an LLM to play a game of hangman. It will agree, and as it "knows" the rules, it will claim to pick a word and make you guess letters. I just tried this on GPT-5 and it chose a five-letter word. I made a bunch of random guesses and some of them were correct, some of them incorrect (so it's not just accepting all guesses as valid), although it didn't penalize me for the wrongly guessed letters. Eventually, of the five letters, I had revealed the last four, and they were "ailm". Since I couldn't figure out what word it was, I guessed random letters until I said W, and it told me I was correct and that the word was "wailm". No, that's not a word, and I asked the model if it thinks it is a word, to which it replied:
(emphasis mine)
So it screwed up, in more ways than one actually, not just with the word. The whole point of a game of hangman is that you're supposed to have a limited number of guesses for the letters, but it gave me 3 guesses for the whole word and an unlimited number of guesses for letters, and it admitted to improvising a word at random. So in reality, it has no idea how the rules work or how to actually play the game, but still claims it can.
And it doesn't end there. It then suggested that it could set up a new round with a proper word list (and I'm quoting the model here) "so the solution is guaranteed to be real?"
I said yes. This time it accepted all of my guesses as instantly correct, forming the 6-letter word "Sairim", which is also not a proper English word. Quoth the LLM:
After I said yes, it gave me another 6-letter word to guess, but again accepted all of my guesses as instantly correct. I guessed first A, then S, then P, then E, and then R, and each time it congratulated me on being correct, filling out the word to be "Sapper". Yeah, on the 3rd try it actually landed on a proper English word, but it wasn't actually playing the game in any real sense, because it clearly didn't choose any word in advance for me to guess (because it can't), but simply chose a length of 6 letters and then filled it out with my guesses to form any valid English word, because that's the best it can do.
This is all due to the way its memory works, and there are articles out there you can look up that go into detail about why it is this way. But the point is this: while an LLM will probably be able to give you a completely correct explanation of the rules of hangman, it cannot, due to its technical limitations, understand those rul
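For contrast, here is a minimal sketch of an ordinary, stateful hangman implementation (in Python; the word list is made up). The point is that the secret word is committed to memory before any guess and persists across turns, which is exactly the hidden state a chat LLM has nowhere to keep between replies:

import random

WORDS = ["sapper", "planet", "copper"]  # made-up word list, purely for illustration

def play(max_wrong=6):
    secret = random.choice(WORDS)  # the word is committed *before* any guess
    revealed = ["_"] * len(secret)
    guessed = set()
    wrong = 0
    while wrong < max_wrong and "_" in revealed:
        guess = input(f"{' '.join(revealed)}  guess a letter: ").strip().lower()
        if not guess or guess in guessed:
            continue
        guessed.add(guess)
        if guess in secret:
            for i, ch in enumerate(secret):
                if ch == guess:
                    revealed[i] = ch
        else:
            wrong += 1  # wrong guesses actually count against you
    print("You win!" if "_" not in revealed else f"You lose; the word was {secret!r}")

if __name__ == "__main__":
    play()

An LLM answering turn by turn only ever sees the visible transcript, so unless the word is written into that transcript (or some external tool holds it), there is nothing for your guesses to be checked against.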
Re: (Score:2)
The problem, of course, is there are non-anthropocentric definitions, and it meets them just fine.
Intelligence does not include sound judgement- it includes judgement, period.
If it only included sound judgement, you'd be declaring that a large body of humans lack intelligence, to which I say- you're one of them.
Re:Really? (Score:5, Informative)
We may not fully understand how humans think, but we do know how LLMs work. LLMs are essentially a sophisticated pattern recognition algorithm. Based on their training, they compose sequences of tokens that approximate what would be expected in response to a prompt.
AI is to intelligence, as a movie is to motion. When watching a movie, there is a very convincing appearance of motion, but in fact, nothing on the screen is actually moving. It can be so convincing that viewers using 3D glasses might instinctively recoil when an object appears to fly towards them. But there is no actual motion. The characters have no intent, though humans assign intent to what the "characters" are saying and doing. The point is, it's an illusion. And in the same way, AI is an illusion, a fancy (and very useful) parlor trick.
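To make "compose sequences of tokens" concrete, here is a rough sketch of the autoregressive sampling loop, assuming the Hugging Face transformers API; gpt2 is used only because it's small and public, not because it's what any particular product runs:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # small public model, illustrative only
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):  # append ten tokens, one at a time
        logits = model(ids).logits[:, -1, :]  # scores for the next token only
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)  # sample by probability
        ids = torch.cat([ids, next_id], dim=1)
print(tok.decode(ids[0]))

Nothing in the loop models intent; it just keeps extending the token sequence, which is the sense in which the parent calls the result an illusion.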
Re: (Score:2)
LLMs are essentially a sophisticated pattern recognition algorithm.
No, they're not.
The fact is, we do not know "how" they work except at the very base level.
We use gradient descent to move the weights in a very large collection of MLPs with a self-attention mechanism, and they're able to produce text.
Beyond that, we have to evaluate their behavior empirically.
Based on their training, they compose sequences of tokens that approximate what would be expected in response to a prompt.
This is correct, but misleadingly limited.
Based on your training, you compose words that would be expected in response to a prompt.
Models generalize. It's what's in the middle of the prompt and the answer that matt
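For anyone wondering what "MLPs with a self-attention mechanism" cashes out to, here is a toy single-head self-attention computation in NumPy; the random matrices stand in for learned weights, and real models stack many such layers with MLPs in between:

import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8  # 4 toy tokens, 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))  # stand-in token embeddings

# Stand-ins for learned projection matrices (trained by gradient descent in a real model).
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d_model)  # how strongly each token attends to every other token
scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for the softmax
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v  # each output row is a weighted mix of the value vectors
print(out.shape)  # (4, 8): one updated vector per token

Whether that mechanism amounts to "attention" in the human sense is precisely what this thread is arguing about; the sketch only shows what the word refers to technically.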
Re: (Score:3)
The core technology takes input tokens and then provides a list of the next probable tokens and their respective calculated p
Re: (Score:2)
why do you keep using human sounding terms like self-attention?
Why do you think every word that applies to a human is "human sounding"?
Do dogs have attention?
Do they become confused?
These are not "human sounding" words- they are words describing the behavior of something that considers.
you keep arguing for non-human thought while using human like terms.
You keep trying to redefine words to be anthropocentric.
make up your mind, is it human like or not?
Not remotely. Neither is the word attention limited to them, or the word intelligence.
if not then stop using human like terms
The cascade of firing neurons that occurs when your attention shifts can be called attention, so I think we're just fine here.
you are deliberately mixing terms and then claiming other keep applying human qualities to things, your bullshit is evident
I'm deliberately
Re: (Score:2)
That is an interesting analysis.
Mod up!
Re: (Score:2)
Re: (Score:2)
Simple- the weather can get you wet. The simulation can't.
But what if I place you in a box, and let the simulation pour water on you, or blow air in your face? Is it real then?
Words- are they real, or are they artificial? Either way- an LLM produces them- they are not simulated. They are real.
If you and an LLM produce the same words, are you real, and it fake?
Are its words fake, but yours real?
You're quick to try to claim that pe
Re: (Score:2)
And you claim to know how LLMs think?
The word intelligent comes from the Latin word for understanding. LLMs don't understand the answers they provide to prompts.
Re: (Score:2)
interpret or view (something) in a particular way.
I know very well how an LLM works, which is why I can tell you that they don't provide answers to prompts.
The prompt is part of the same multi-billion term calculation that their knowledge is. That's what self-attention is.
I think you've just demonstrated that an LLM understands more than you do.
Wrong Name (Score:5, Insightful)
It's almost as if we shouldn't have included "intelligence" in the actual fucking name. But once again our language has been co-opted by marketing BS and now here we are trying to set the record straight so people aren't confused or deceived.
Re:Wrong Name (Score:5, Funny)
Someone on /. came up with "Augmented Idiocy."
I like it.
A lot.
Re: (Score:2)
Re: (Score:3)
Re:Wrong Name (Score:5, Insightful)
More that we missed the "artificial" in the name.
Artificial anything is never actually the thing. It's close enough to fool some people and far enough apart to gross other people out. Artificial turf, sweeteners, vanilla flavoring, coffee creamer, plants, etc.
AI is about as intelligent as artificial turf is grass.
Re: Wrong Name (Score:2)
Re: (Score:2)
I feel like that would be better than actual incompetence.
Re: (Score:2)
That was the original definition. Artificial Intelligence meant fake intelligence. It was a system that mimicked what an intelligent creature would do. Somehow artificial's definition slowly morphed into "man-made" and then fiction pushed AI into being sentient robots/computers. Originally, a sentient computer was specifically not AI.
Re:Wrong Name (Score:4, Informative)
The definition of artificial didn't "slowly morph" into man-made: that's the original definition. Specifically it's the output of an art-maker (Latin artifex, genitive artificis).
Re: (Score:2)
People aren't confused by marketing BS. They're confused by their own stupidity.
Re:Wrong Name (Score:5, Insightful)
But yet marketing spin works.
We were being sold on LLMs achieving "AGI" and, more recently, moving even further to "super" intelligence. Investments are based on those promises.
But it isn't happening. That's a bubble ripe to burst.
Re: (Score:2)
But it isn't happening. That's a bubble ripe to burst.
Oh, don’t worry. It’s only 17 times bigger than the dot com bubble and 4 times the subprime mortgage crisis [marketwatch.com]. I’m sure we’ll be just fine. As long as we don’t need money or an economy.
Re: (Score:3)
It's almost as if we shouldn't have included "intelligence" in the actual fucking name.
We didn't. The media and the PR departments did. In the tech and academia worlds that seriously work with it, the terms are LLMs, machine learning, etc. - the actual terms describing what the thing does. "AI" is the marketing term used by marketing people. You know, the people who professionally lie about everything in order to sell things.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
There is no language parser in an LLM- you're just.... well, fucking wrong.
So fucking wrong, it almost feels wrong to call you fucking stupid- because you had to put thought into coming up with that completely wrong fucking thing.
You're just sad all around.
What is thinking? (Score:5, Interesting)
Re:What is thinking? (Score:5, Insightful)
As much as I agree with the statement that contemporary LLMs certainly differ a lot from what we experience as "thinking" from other human beings, the problem with this line of argument remains that there is no consensus on what exactly manifests "thinking",
The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit.
It is generally agreed that chocolate bars do not think. Rocks do not think. Pocket calculators do not think. We know what thinking is not, even if we can't define it fully.
Re: (Score:2)
The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit. It is generally agreed that chocolate bars do not think. Rocks do not think. Pocket calculators do not think. We know what thinking is not, even if we can't define it fully.
Present questions to and corresponding responses from contemporary LLMs to random people on the street, and ask them if they think that generating these responses required thinking. You will find that a vast majority of people will answer "yes" to this, even more so if they are not told the responses were generated by a computer. You and I may know how to spot the hints where LLM generated responses differ from what a human would typically respond with, but that does not matter: If you want to educate peopl
Comment removed (Score:5, Insightful)
Re: (Score:2)
So desperate to think your brain is magical, aren't you?
Re: (Score:2)
We've not narrowed it down nearly enough to determine which portions of LLM behavior are and are not thinking.
Re: (Score:2)
I think we have a breakthrough. It is intelligent if phantomfive says it is intelligent. Phantomfive has declared chocolate bars are NOT intelligent. Case closed
Re: (Score:2)
None of your examples are examples of "not thinking." They're examples of things that you think don't think.
The problem with that is it's entirely useless for extrapolating, as much as your prejudice would like you to think the opposite. It's also generally agreed that rocks don't do arithmetic, but if you arrange them in just the right way they're actually awfully good at it.
Re: What is thinking? (Score:2)
I may have "put my finger" on another flaw: it doesn't understand the nuance of metaphors used in context. It's also not good at understanding irony, which we use all the time colloquially without much effort, joking around for instance.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit.
Complete bullshit.
As far as technical definitions can go, an LLM thinks. We have to evoke the philosophical and inject the subjective experience of the human mind into the matter in order to preclude it, while also killing enough brain cells to realize that the argument falls apart if we're to consider anything but ourselves as intelligent. But then again- maybe that's your goal.
Re:What is thinking? (Score:4, Insightful)
Failing to accept that marketing is being used to deceive and manipulate (starting with Sam Altman), and allowing LLMs to have things like 'reasoning' in their model names, is a problem. No different than Musk naming his software 'Full Self Driving' when it clearly isn't.
We don't have to all agree on exactly what 'thinking' is to see the lunacy of what is happening in these tech spaces.
Re: (Score:2)
people who don't understand technology and can be deceived into thinking that LLMs really are a magic box, and will not question its outputs.
We both certainly agree that this is a huge problem with how LLMs are marketed today. I'm just proposing to not use the claim "AI can't think" as an argument towards those "who don't understand technology", because it will not be a convincing argument to them.
Failing to accept that marketing is being used to deceive and manipulate, (starting with Sam Altman), and allowing LLMs to have things like 'reasoning' in their model name is a problem. No different than Musk naming his software 'Full Self Driving' when it clearly isn't.
I think there is a big difference here: A deceiving marketing name like "Full Self Driving" evokes a pretty precise expectation of what that thing supposedly does (but does not) in everyone - and it also is pretty easy to precisely define what "Full Se
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
A goodish portion of medicine is applying an algorithm to a set of circumstances. A large portion of the critical thinking has already been done for you. You just need to isolate which algorithm applies when.
The very best doctors (according to a very, very good doctor) are interlocutors, teasing out what isn't obvious from what the patient is presenting and piecing together a narrative of what makes sense.
The critical thinking comes much later.
Re: (Score:2)
I never have mod points when I need them. Mod parent up. It is unlikely that LLMs can reach that mystical, metaphysical, and nebulous fish called AGI. The article claims only that there are tasks we associate with reasoning or intelligence that do not need language. I can agree with that.
Re: (Score:2)
Not "fish". Damn phone. Should read "thing"
perhaps correct, but a load of bullshit (Score:2)
"The problem is that according to current neuroscience, human thinking is largely independent of human language -- and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own..."
Complete bullshit:
"according to current neuroscience, human thinking is largely independent of human language"
False, but so what? LLMs are "largely independent of human language" as well.
"...and we have little reason to believe ever more sophis
Re: (Score:2)
At least they work reasonably well as search engines.
Re: (Score:2)
They are superior to search engines. I'm on board with that. I never use Google Search or Duck Duck Go anymore. I think the frontier models have hugely advanced the practice of information retrieval.
Re: (Score:2)
Re: (Score:2)
With enough language ingested, you get the patterns behind the language- the knowledge.
That is why LLMs can communicate in a completely invented (within this context) language with ease.
You clearly have no fucking idea what you're talking about here- why the hell are you chucking your vomit all over this thread?
So what? Because you think humans think? (Score:5, Funny)
Woman at a rally: Governor, every thinking person will be voting for you.
Adlai Stevenson: Madam, that's not enough. I need a majority.
Re: (Score:2)
Take your pick, you're really talking about PR spin, FUD, brain washing. Everyone loves to say everyone else is a sucker. We all have our filters. It doesn't say anything about intelligence. More about picking sides in a fight.
Re: (Score:3)
This is surprisingly on point. TFA ends with this:
Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.
and it may be true that LLMs can't make paradigm-shifting breakthroughs. But then ... how many people can? 0.1%? 1%? MAYBE 10%.
and the rest of the people become no more ECONOMICALLY useful/competitive/viable than pets. Then what do we do?
Which is to say that TFA may be right that LLMs aren't going to be AGI no matter what... but they may yet totally upend (and possibly destroy) society.
The Funniest Part... (Score:2)
My favorite is when laymen see the word "intelligence" and think that we're talking about cognition.
We're not, and rarely have been. Diatribes like this one use language so subjectively that it's not really even clear what they mean by "thinking" in the first place, or whether machines can or can't do it. If by "thinking" they mean "reasoning", then they are wrong. Reasoning has a definition. The stochastic parrot crowd was proven wrong again by emergent structures, and the machine does do it, or at least..
Re: (Score:2)
If reasoning has an operational definition, it is: "Reasoning is whatever a machine cannot yet do". Or "The definition of intelligence is whatever a machine cannot yet do". That definition has held since the beginning of AI research.
Re: (Score:2)
It's maybe been useful motivation. The problem is, it's essentially the same definition as that of "the soul."
Re: (Score:2)
Re: (Score:2)
Use language subjectively? Lol.
Reasoning? https://www.wordnik.com/words/... [wordnik.com]
Quite a few definitions.
Emergent structures don't prove the stochastic parrot metaphor wrong. That argument shows a misunderstanding of the stochastic parrot argument. It's like the arguments against the Chinese Room. People who make these arguments are blinded by their own lack of comprehension.
Yes, people use language differently from you on a regular basis. That doesn't make their usage wrong or yours right.
It doesn't matter whether or not it can think... (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You'd be surprised.
Beyond the nuts and bolts of how to do a thing, there is a fair bit of nuance and institutional knowledge that goes into any job, that isn't apparent from a set of directives.
Sometimes it takes the form of best practices. Sometimes it is knowing what wheel to grease to get something done.
Individually, they may not amount to much, but in totality they make the difference between something running smoothly and pulling your hair out.
And even in the face of this context matters, which is why
Re: (Score:2)
Re: (Score:2)
What I've been telling colleagues... (Score:3)
AI = "Amalgamation of Information"
AI just uses probability calculations to amalgamate an "average" of information on the subject. It's not smart. It doesn't think. It's not self-aware. It's just a digital hamburger grinder that churns out a paste of whatever gets put into its hopper.
Re: (Score:3)
I've been calling it my "Artificial Intern". You still have to assume it doesn't know what it's doing without constant instruction, however it's happy to do it over again without complaint.
Sure, whatever (Score:2)
Much of the critique seems irrelevant to AI other than LLMs, such as self-driving cars which map visual input to actions.
Wrong on Einstein (Score:2)
"He developed it as thought experiment because he was dissatisfied with the existing metaphor."
No. He was thinking about it because the flaws in the Newtonian mathematics, and the ways some were trying to fit Planck's maths to the observations, just weren't matching up. The mathematics didn't fit the observations to the degree, the level of detail, that was now measurable given the accuracy of the observational technology of the time.
So he thought about what would FIT THE OBSERVATIONS. The data came first, the explaining
Re: (Score:2)
Dumb (Score:2)
Well, you can stop reading there. I don't necessarily agree with the thesis, but the supporting arguments seem to range from wrong to kind of dumb.
Re: (Score:2)
Perhaps drunk too?
People like absolutes and clear lines drawn. The real world is perfectly happy to not oblige.
Re: Dumb (Score:2)
Re: (Score:2)
Yes! No doubt! 100% true!
Re: (Score:2)
Re: (Score:2)
Today's fake AI has crippled search! (Score:3)
Crap (Score:2)
Re: (Score:2)
"be dissatisfied with the data it has" (Score:2)
This is not controversial (Score:3)
It's common knowledge among AI researchers
The hypemongers spin a different tale
Re: (Score:2)
The coherence is built into the machine. LLMs are, very simply put, statistical machines that output the next (statistically) most likely word/token based on the occurrences of token patterns in their training data.
Re: (Score:2)
You can argue a simulation of something thinking isn't that thing thinking. If I write down my thoughts on paper, that paper isn't thinking.
Though I couldn't tell you where the line is between a simulation run at the same speed and with the exact same abilities as the real thing.
Re: (Score:2)
Thanks for the list of options nobody cares about.