
'AI Can't Think' (theverge.com) 289

In an essay published in The Verge, Benjamin Riley argues that today's AI boom is built on a fundamental misunderstanding: language modeling is not the same as intelligence. "The problem is that according to current neuroscience, human thinking is largely independent of human language -- and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own," writes Riley. A user shares: The article goes on to point out that we use language to communicate, and to create metaphors that describe our reasoning; that people who have lost their language ability can still show reasoning; and that human beings create knowledge when they become dissatisfied with the current metaphor. Einstein's theory of relativity was not based on scientific research. He developed it as a thought experiment because he was dissatisfied with the existing metaphor. It quotes someone who said, "common sense is a collection of dead metaphors," and argues that AI, at best, can rearrange those dead metaphors in interesting ways. But it will never be dissatisfied with the data it has or with an existing metaphor.

A different critique (PDF) has pointed out that even as a language model, AI is flawed by its reliance on the internet. The languages used on the internet are unrepresentative of the languages of the world, and other languages contain unique descriptions and metaphors that are not found on the internet. My example of what was discussed is the vocabulary in Inuit languages for kinds of snow, describing qualities found nowhere in European languages. If those metaphors aren't found on the internet, AI will never be able to create them.

This does not mean that AI isn't useful. But it is not remotely human intelligence. That is just a poor metaphor. We need a better one.
Benjamin Riley is the founder of Cognitive Resonance, a new venture to improve understanding of human cognition and generative AI.

'AI Can't Think'

Comments Filter:
  • PR article (Score:3, Informative)

    by Shisha ( 145964 ) on Tuesday November 25, 2025 @06:47PM (#65817889) Homepage

    This is a PR "thought leadership" BS article by Benjamin Riley, Cognitive Resonance, who "provides direct consulting support to organizations to improve understanding of how generative AI works."

    This doesn't mean they're wrong but it's probably nothing terribly original (there is a reason why it's not on openreview.net as a submission into one of the relevant AI conferences).

    • Re:PR article (Score:5, Insightful)

      by taustin ( 171655 ) on Tuesday November 25, 2025 @07:56PM (#65818071) Homepage Journal

      And yet, he is correct. AI is based on scraping the internet. Even if it were capable of actual intelligence, anything based on the internet is based mostly on lies, misunderstanding and willful ignorance.

    • Re:PR article (Score:5, Insightful)

      by evanh ( 627108 ) on Tuesday November 25, 2025 @08:23PM (#65818121)

      The article rightly points out that the marketing of LLMs positions the tech as on track to achieve "AGI" and then "super" intelligence. The sales pitch for further investment throughout this year is built on these promises.

      LLMs are doomed to fail at ever being intelligent at all. Yet the investments are predicated on them becoming fully intelligent. That's a bubble! And a big one!

  • by liqu1d ( 4349325 ) on Tuesday November 25, 2025 @06:53PM (#65817903)
    Who? But I agree with the points outside of the market babble.
  • Really? (Score:3, Funny)

    by Mogster ( 459037 ) on Tuesday November 25, 2025 @06:54PM (#65817905)

    Posted by BeauHD on Wednesday November 26, 2025 @11:40AM from the language-doesn't-equal-intelligence dept

    Don't you mean "from the well duh! dept"?

    • Half of the human population doesn't think either, they just echo their favorite chamber.

      • by cshark ( 673578 )

        It's so frustrating.

      • Half of the human population doesn't think either, they just echo their favorite chamber.

        Half of the human population doesn't think either, they just echo their favorite party.

        TFTFY

      • Re: Really? (Score:4, Insightful)

        by ceoyoyo ( 59147 ) on Tuesday November 25, 2025 @09:36PM (#65818201)

        Funny, but the entire human population spends most of their time not "thinking."

        From coordinating complex movements like walking through routines like driving to work to, yes, knee jerk reactions to most things, most of what our brains do is subconscious. Only the weird justifies the effort of actual executive control. Whatever it is that we call "conscious thought" is even rarer.

    • Re:Really? (Score:4, Interesting)

      by Tony Isaac ( 1301187 ) on Tuesday November 25, 2025 @11:27PM (#65818377) Homepage

      There are a whole lot of people, some of whom frequently comment on Slashdot, who apparently think AI is actually becoming intelligent, and will soon replace all human thinking, and especially, jobs that require thinking. You don't have to read many posts to run across these guys!

  • Wrong Name (Score:5, Insightful)

    by lazarus ( 2879 ) on Tuesday November 25, 2025 @06:57PM (#65817913) Journal

    It's almost as if we shouldn't have included "intelligence" in the actual fucking name. But once again our language has been co-opted by marketing BS and now here we are trying to set the record straight so people aren't confused or deceived.

    • by TigerPlish ( 174064 ) on Tuesday November 25, 2025 @07:13PM (#65817965)

      Someone on /. came up with "Augmented Idiocy."

      I like it.

      A lot.

    • Re:Wrong Name (Score:5, Insightful)

      by FictionPimp ( 712802 ) on Tuesday November 25, 2025 @07:15PM (#65817971) Homepage

      More that we missed the "artificial" in the name.

      Artificial anything is never actually the thing. It's close enough to fool some people and far enough apart to gross other people out. Artificial turf, sweeteners, vanilla flavoring, coffee creamer, plants, etc.

      AI is about as intelligent as artificial turf is grass.

      • Artificial incompetence?
      • That was the original definition. Artificial Intelligence meant fake intelligence. It was a system that mimicked what an intelligent creature would do. Somehow artificial's definition slowly morphed into "man-made" and then fiction pushed AI into being sentient robots/computers. Originally, a sentient computer was specifically not AI.

    • by taustin ( 171655 )

      People aren't confused by marketing BS. They're confused by their own stupidity.

    • by Tom ( 822 )

      It's almost as if we shouldn't have included "intelligence" in the actual fucking name.

      We didn't. The media and the PR departments did. In the tech and academia worlds that seriously work with it, the terms are LLMs, machine learning, etc. - the actual terms describing what the thing does. "AI" is the marketing term used by marketing people. You know, the people who professionally lie about everything in order to sell things.

  • What is thinking? (Score:5, Interesting)

    by ffkom ( 3519199 ) on Tuesday November 25, 2025 @06:57PM (#65817915)
    As much as I agree with the statement that contemporary LLMs certainly differ a lot from what we experience as "thinking" from other human beings, the problem with this line of argument remains that there is no consensus on what exactly manifests "thinking", and so it is unconvincing to claim that LLMs "cannot think". It is like claiming "chocolate bars do not contain dark matter!" while not being able to tell what dark matter actually is. Also, people would probably not claim that "pocket calculators cannot calculate", even though they perform calculations in a very different way than humans do. So if, at some point, some AI produces the same results as human beings in whatever task is agreed upon to "require thinking", then it does not matter whether AI uses the same mechanisms to come to the same results.
    • by phantomfive ( 622387 ) on Tuesday November 25, 2025 @07:19PM (#65817981) Journal

      As much as I agree with the statement that contemporary LLMs certainly differ a lot from what we experience as "thinking" from other human beings, the problem with this line of argument remains that there is no consensus on what exactly manifests "thinking",

      The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit.

      It is generally agreed that chocolate bars do not think. Rocks do not think. Pocket calculators do not think. We know what thinking is not, even if we can't define it fully.

      • by ffkom ( 3519199 )

        The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit. It is generally agreed that chocolate bars do not think. Rocks do not think. Pocket calculators do not think. We know what thinking is not, even if we can't define it fully.

        Present questions to and corresponding responses from contemporary LLMs to random people on the street, and ask them if they think that generating these responses required thinking. You will find that a vast majority of people will answer "yes" to this, even more so if they are not told the responses were generated by a computer. You and I may know how to spot the hints where LLM generated responses differ from what a human would typically respond with, but that does not matter: If you want to educate peopl

      • The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit.

        We've not narrowed it down nearly enough to determine which portions of LLM behavior are and are not thinking.

      • I think we have a breakthrough. It is intelligent if phantomfive says it is intelligent. Phantomfive has declared chocolate bars are NOT intelligent. Case closed

      • by ceoyoyo ( 59147 )

        None of your examples are examples of "not thinking." They're examples of things that you think don't think.

        The problem with that is it's entirely useless for extrapolating, as much as your prejudice would like you to think the opposite. It's also generally agreed that rocks don't do arithmetic, but if you arrange them in just the right way they're actually awfully good at it.

        • You put your finger on it. It can't extrapolate. That is one of the fatal flaws of LLMs.

          I may have "put my finger" on another flaw: it doesn't understand the nuance of metaphors used in context. It is also not good at understanding irony, which we use all the time colloquially without much effort, joking around for instance. LLMs are weak in many areas.
        • What in the fucking hell are you talking about- every token produced by an LLM is a fucking extrapolation. They're literally a multi-billion parameter fucking extrapolation machine.
      • I see what you did there. We know that we say people can think, but most would see that you clearly don't. That was clever.
      • The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit.

        Complete bullshit.

        As far as technical definitions can go, an LLM thinks. We have to evoke the philosophical and inject the subjective experience of the human mind into the matter in order to preclude it, while also killing enough brain cells to realize that the argument falls apart if we're to consider anything but ourselves as intelligent. But then again- maybe that's your goal.

    • by silvergig ( 7651900 ) on Tuesday November 25, 2025 @07:19PM (#65817991)
      I don't agree with that. LLMs are being marketed as a way to completely replace human thinking, right down to replacing doctors and therapists -- professions that most certainly require a lot of critical thinking. While I would say that that is ludicrous, it also means that we need to completely disconnect AI from anything that implies 'thinking'. Because it can't. If we don't, people will continue to be deceived (starting with C-levels anxious to get rid of workers), and people who don't understand technology can be deceived into thinking that LLMs really are a magic box, and will not question their outputs.

      Failing to accept that marketing is being used to deceive and manipulate, (starting with Sam Altman), and allowing LLMs to have things like 'reasoning' in their model name is a problem. No different than Musk naming his software 'Full Self Driving' when it clearly isn't.

      We don't have to all agree on exactly what 'thinking' is to see the lunacy of what is happening in these tech spaces.
      • by ffkom ( 3519199 )

        people who don't understand technology and can be deceived into thinking that LLMs really are a magic box, and will not question it's outputs.

        We both certainly agree that this is a huge problem with how LLMs are marketed today. I'm just proposing to not use the claim "AI can't think" as an argument towards those "who don't understand technology", because it will not be a convincing argument to them.

        Failing to accept that marketing is being used to deceive and manipulate, (starting with Sam Altman), and allowing LLMs to have things like 'reasoning' in their model name is a problem. No different than Musk naming his software 'Full Self Driving' when it clearly isn't.

        I think there is a big difference here: A deceiving marketing name like "Full Self Driving" evokes a pretty precise expectation of what that thing supposedly does (but does not) in everyone - and it also is pretty easy to precisely define what "Full Se

      • Comment removed based on user account deletion
        • A goodish portion of medicine is applying an algorithm to a set of circumstances. A large portion of the critical thinking has already been done for you. You just need to isolate which algorithm applies when.

          The very best doctors (according to a very, very good doctor) are interlocutors, teasing out what isn't obvious from what the patient is presenting and piecing together a narrative of what makes sense.

          The critical thinking comes much later.

    • I never have mod points when I need them. Mod parent up. It is unlikely that LLMs can reach that mystical, metaphysical, and nebulous fish called AGI. The article claims only that there are tasks we associate with reasoning or intelligence that do not need language. I can agree with that.

  • "The problem is that according to current neuroscience, human thinking is largely independent of human language -- and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own..."

    Complete bullshit:

    "according to current neuroscience, human thinking is largely independent of human language"

    False, but so what? LLMs are "largely independent of human language" as well.

    "...and we have little reason to believe ever more sophis

    • You think a Large Language Model is largely independent of human language?

      At least they work reasonably well as search engines.
      • They are superior to search engines. I'm on board with that. I never use Google Search or Duck Duck Go anymore. I think the frontier models have hugely advanced the practice of information retrieval.

      • Independent of any specific language, yes. They chunk language into tokens and convert the tokens into vectors. Language is just an interface.
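A minimal sketch of that "language is just an interface" point. The four-word vocabulary, the ids, and the random vectors here are invented for illustration; a real model uses a learned tokenizer and a trained embedding matrix:

```python
# Toy illustration: language in, vectors out. Everything below is
# made up for the example -- no real tokenizer or trained model.
import random

random.seed(0)
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}
DIM = 4  # tiny embedding dimension, for illustration only

# One vector per token id; a trained model learns these weights.
embeddings = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in vocab]

def encode(text):
    """Map words to token ids, then ids to vectors."""
    ids = [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]
    return ids, [embeddings[i] for i in ids]

ids, vecs = encode("The cat sat")
print(ids)  # [0, 1, 2]
```

Once text is in vector form, all the model's computation happens on the vectors; the specific surface language only matters at the encode and decode boundaries.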
      • Of course it is.

        With enough language ingested, you get the patterns behind the language- the knowledge.
        That is why LLMs can communicate in a completely invented (within this context) language with ease.

        You clearly have no fucking idea what you're talking about here- why the hell are you chucking your vomit all over this thread?
  • by Voice of satan ( 1553177 ) on Tuesday November 25, 2025 @07:08PM (#65817953)

    Woman at a rally: Governor, every thinking person will be voting for you.

    Adlai Stevenson: Madam, that's not enough. I need a majority.

    • by evanh ( 627108 )

      Take your pick; you're really talking about PR spin, FUD, brainwashing. Everyone loves to say everyone else is a sucker. We all have our filters. It doesn't say anything about intelligence. It's more about picking sides in a fight.

    • by rta ( 559125 )

      This is surprisingly on point. TFA ends with this:

      Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

      and it may be true that the LLMs can't make paradigm shifting breakthroughs. But then ... how many people can? 0.1%? 1%? MAYBE 10%.

      and the rest of the people become no more ECONOMICALLY useful/competitive/viable than pets. Then what do we do?

      Which is to say that TFA may be right that LLMs aren't going to be AGI no matter what.... but they may yet totally upend (and possibly destroy) society.

  • My favorite is when laymen see the word "intelligence" and think that we're talking about cognition.
    We're not, and rarely have been. Diatribes like this one use language so subjectively that it's not really even clear what they mean by "thinking" in the first place, or whether machines can or can't do it. If by "thinking" they mean "reasoning" then they are wrong. Reasoning has a definition. The stochastic parrot crowd was proven wrong again by emergent structures, and the machine does do it, or at least..

    • If reasoning has an operational definition, it is: "Reasoning is whatever a machine cannot yet do". Or "The definition of intelligence is whatever a machine cannot yet do". That definition has held since the beginning of AI research.

      • by ceoyoyo ( 59147 )

        It's maybe been useful motivation. The problem is, it's essentially the same definition as that of "the soul."

      • Your argument then is that people cannot think, since people are biological machines. Of course, you can argue that that statement is true or not depending on your definition of machine, but of course this is the same problem as trying to say something can or cannot think without having a scientifically testable definition of that word. In short, you either get that I don't need to prove you think and you can't prove any significantly complex neural network based system doesn't, or you aren't any better a
    • Use language subjectively? Lol.

      Reasoning? https://www.wordnik.com/words/... [wordnik.com]

      Quite a few definitions.

      Emergent structures don't prove the stochastic parrot metaphor wrong. That argument shows a misunderstanding of the stochastic parrot argument. It's like the arguments against the Chinese Room. People who make these arguments are blinded by their own lack of comprehension.

      Yes, people use language differently from you on a regular basis. That doesn't make their usage wrong or yours right.

  • We're not trying to replace Einstein with AI. We're trying to replace Carl the junior web developer with AI. It just needs to be able to do Carl's job.
    • Comment removed based on user account deletion
        • Ok, I agree with you mostly, but... Carl's not innovating squat. He might think he is... But he's not.
        • You'd be surprised.

          Beyond the nuts and bolts of how to do a thing, there is a fair bit of nuance and institutional knowledge that goes into any job, that isn't apparent from a set of directives.

          Sometimes it takes the form of best practices. Sometimes it is knowing what wheel to grease to get something done.

          Individually, they may not amount to much, but in totality they make the difference between something running smoothly and pulling your hair out.

          And even in the face of this context matters, which is why

      • It's pretty evident that you have never had a discussion with Carl.
  • AI = "Amalgamation of Information"

    AI just uses probability calculations to amalgamate an "average" of the information on a subject. It's not smart. It doesn't think. It's not self-aware. It's just a digital hamburger grinder that churns out a paste of whatever gets put into its hopper.
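The "averaging" described above can be sketched as a toy bigram model. This is a drastic simplification (real LLMs are neural networks with billions of parameters, not word-pair counters), and the corpus is invented, but it shows the basic idea of grinding ingested text into next-word probabilities:

```python
# Toy sketch of "amalgamating an average": count which word follows
# which in a tiny invented corpus, then normalize the counts into
# probabilities. No real LLM works this simply.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally each word's successors.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return the 'averaged' probability of each word that may follow."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Sampling from those probabilities, word after word, is the grinder churning: every output is an average of whatever the training text put into the hopper.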

    • by Keick ( 252453 )

      I've been calling it my "Artificial Intern". You still have to assume it doesn't know what it's doing without constant instruction; however, it's happy to do it over again without complaint.

  • Show me how your insights have enabled you to create more advanced functionality, and then I'll be interested.

    Much of the critique seems irrelevant to AI other than LLMs, such as self-driving cars which map visual input to actions.

  • "He developed it as thought experiment because he was dissatisfied with the existing metaphor."

    No. He was thinking about it because the Newtonian mathematics, and the ways some were trying to adapt Planck's maths to the observations, just weren't matching up. The mathematics didn't fit the observations to the degree, the level of detail, that was now feasible given the technology of observational accuracy.

    So he thought about what would FIT THE OBSERVATIONS. The data came first, the explaining

  • by ceoyoyo ( 59147 )

    Einstein's theory of relativity was not based on scientific research.

    Well, you can stop reading there. I don't necessarily agree with the thesis, but the supporting arguments seem to range from wrong to kind of dumb.

  • by oldgraybeard ( 2939809 ) on Tuesday November 25, 2025 @10:37PM (#65818313)
    Often the first page is the same regurgitated incorrect information over and over. Which just hides anything useful. It used to be that the correct answer was usually in the top 5. Today, the correct information doesn't even come up on the first page.
  • The author's obsession with the "biological vs. artificial" distinction is the exact same logical fallacy as arguing that airplanes can't actually fly because they don't have feathers or flap their wings like birds. The output (flight) is the same; the mechanism (Bernoulli's principle vs. biomechanics) is irrelevant. This piece tries to argue that because the brain separates language and reasoning, a machine that unifies them can't be intelligent. That is peak absurdity. It is effectively the sam
    • Okay, but you have to admit it's useful to distinguish between the intelligence that AI seems to display, and that humans seem to display.
  • Seems like today's AI has no problem making up any data it needs.
  • by MpVpRb ( 1423381 ) on Wednesday November 26, 2025 @02:09AM (#65818569)

    It's common knowledge among AI researchers
    The hypemongers spin a different tale
