
Is AI Cannibalizing Human Intelligence? A Neuroscientist's Way to Stop It (wsj.com)

The AI industry is largely failing to ask a key design question, argues theoretical neuroscientist and cognitive scientist Vivienne Ming: are its AI products building human capacity or consuming it?

In the Wall Street Journal, Ming shares results from her experiment testing which group performed best at predicting real-world events (benchmarked against forecasters on the prediction market Polymarket): AI, humans, or human-AI hybrid teams. The human groups performed poorly, relying on instinct or whatever information had come across their feeds that morning. The large AI models — ChatGPT and Gemini, in this case — performed considerably better, though still short of the market itself. But when we combined AI with humans, things got more interesting. Most hybrid teams used AI for the answer and submitted it as their own, performing no better than the AI alone. Others fed their own predictions into AI and asked it to come up with supporting evidence. These "validators" had stumbled into a classic confirmation-bias loop: the sycophancy that leads chatbots to tell you what you want to hear, even if it isn't true. They ended up performing worse than an AI working solo.

But in roughly 5% to 10% of teams, something different emerged. The AI became a sparring partner. The teams pushed back, demanding evidence and interrogating assumptions. When the AI expressed high confidence, the humans questioned it. When the humans felt strongly about an intuition, they asked the AI to come up with a counterargument... These teams reached insightful conclusions that neither a human nor a machine could have produced on its own. They were the only group to consistently rival the prediction market's accuracy. On certain questions, they even outperformed it...

We are building AI systems specifically designed to give us the answer before we feel the discomfort of not having it. What my experiment suggests is that the human qualities most likely to matter are not the feel-good ones. They're the uncomfortable ones: the capacity to be wrong in public and stay curious; to sit with a question your phone could answer in three seconds and resist the urge to reach for it. To read a confident, fluent response from an AI and ask yourself, "What's missing?" rather than default to "Great, that's done." To disagree with something that sounds authoritative and to trust your instinct enough to follow it. We don't build these capacities by avoiding discomfort. We build them by choosing it, repeatedly, in small ways: the student who struggles through a problem before checking the answer; the person who asks a follow-up question in a conversation; the reader who sits with a difficult idea long enough for it to actually change their mind. Most AI chatbots today default to easy answers, which is hurting our ability to think critically.

I call this the Information-Exploration Paradox. As the cost of information approaches zero, human exploration collapses. We see it in students who perform better on AI-assisted tasks and worse on everything afterward. We see it in developers shipping more code and understanding it less. We are, in ways that feel like progress, slowly optimizing ourselves out of the loop.

The author has just published a book called "Robot-Proof: When Machines Have All The Answers, Build Better People." She suggests using AI to "explore uncertainty... before you accept an AI's answer, ask it for the strongest argument against itself."

She is also urging new performance benchmarks for human-AI hybrid teams.


Comments Filter:
  • by oldgraybeard ( 2939809 ) on Sunday April 26, 2026 @01:00AM (#66112590)
    Using AI to (1) tell you the answer vs. (2) confirm your answer vs. (3) a tool to assist. Most humans will go the route of 1 or 2, because they don't have the thinking habit in the first place to use 3.
    Today's AI (no actual reasoning or thinking going on there, folks) will create less-able humans who can't function and don't know how to do much of anything.
    • You are not The One. You may be The One some day. But not now.

      First, you must realize that There Is No Answer.

    • by Bongo ( 13261 ) on Sunday April 26, 2026 @06:20AM (#66112812)

      Indeed. The critical thinkers did better.

      The people who rely on copying what everyone else does, what the authorities say, what the consensus view is, didn't do as well as the people who applied critical thinking systematically (the Western Enlightenment, for example, and other places where that approach took hold). The fact that an AI can now fill the role of authority or groupthink isn't surprising once you realise how often we simply fall back on common patterns, authorities, and copying.

    • Except none of those was the winning strategy.

      The best performance came not from using AI as an assistant but as an antagonist and a competitor.

      In essence they recreated useful parts of a prediction market without all of the insider trading bullshit.

  • Always check sources (Score:3, Interesting)

    by TheMiddleRoad ( 1153113 ) on Sunday April 26, 2026 @03:18AM (#66112652)
    Modern search AI catalogues everything. Then it finds links/sources that it summarizes. Within that, you can find the links, and from there you can actually see what pages say, some of them written by humans. Generally, when I search like this, I find answers, eventually.
    • Modern search AI catalogues everything. Then it finds links/sources that it summarizes. Within that, you can find the links, and from there you can actually see what pages say, some of them written by humans. Generally, when I search like this, I find answers, eventually.

      {{citation needed}}

    • by fleeped ( 1945926 ) on Sunday April 26, 2026 @07:27AM (#66112840)

      Your anecdote vs my anecdote:

      -Me: "what approach do games studios use for this software problem"

      -AI: "Most use XYZ, they basically do this and that"

      -Me: "Really? Not sure about that. Provide me your sources please"

      -AI: "Well, there's no exact source that says XYZ, but most do say QWE"

      -Me: "Really? Show me some sources please"

      -AI: "Of course! Here they are". Proceeds to show some links that talk about something which is not QWE...

  • Skill involves remembering the edge cases, not the default, almost incorrect answers.
  • by Lunati Senpai ( 10167723 ) on Sunday April 26, 2026 @05:24AM (#66112756)

    These studies really irk me, because it all reminds me of studies on "does the internet make us dumber?" and junk like the "google effect" where people are less able to recall things, because they remember how to look it up, but not the information.

    We have a giant collection of all of human knowledge that doubles every seven or so years, and the doubling itself keeps accelerating; it will probably get even faster as we get more efficient.

    I'd love to see a double blind study that compares someone who researches knowledge in a book, versus online, versus AI, and compares all three as far as relative intelligence on things all three can look up. Give all of them a way to remember that info that's shown to increase recall (like say, all of them have to write it down), vs a control group that gets nothing, and just writes it down. I'd bet that all of them would perform about the same, and would probably all be within a margin of error.

    If you're lazy, don't take notes and just skim the book, you're going to fail the test.

    Repetition breeds knowledge, and I want to see that taken into account. It's not the AI making people dumber; it's the failure to repeat the info.

    • Plato attributes to Socrates an argument that reading creates forgetfulness (and this isn't even the earliest example).

  • Some issues... (Score:4, Insightful)

    by Junta ( 36770 ) on Sunday April 26, 2026 @05:51AM (#66112776)

    So I suppose the real point they want to make is that human consideration with GenAI input is better than GenAI alone, but there are some issues with the first bit about comparing 'pure human' to 'pure AI'.

    The first sign is that they use Polymarket as a benchmark distinct from "human prediction", but Polymarket is itself just human prediction. It's mostly composed of humans, just with a tendency to be humans who are more specifically informed about the topic they're betting on.

    So when we see "The human groups performed poorly, relying on instinct or whatever information had come across their feeds that morning", this suggests the humans were asked cold about random events they had not researched, and further were not allowed to research, so all they had to go on was guessing based on whatever they happened to have heard about beforehand. If you asked me who was going to win the presidential election in a country I've never heard of and demanded that I not look anything up, well of course my answer is going to be garbage. You could include a totally made-up name as a choice and I might pick that one, because I just have no way of knowing.

    I would have been much more interested if the humans had been given a minute to do a quick internet search with AI results disabled, to see whether GenAI actually improved their accuracy over a plain search.

  • The overall test scores for all high school students have been dropping for the last 4-5 years. Take computers/phones away, like some countries are doing!!!
  • "If I have seen further, it is by standing on the shoulders of giants."

      Isaac Newton

  • "the sycophancy that leads chatbots to tell you what you want to hear"

    If this could be toggled on and off AI would be much more useful.

    In my experience if you tell it to stop being a sycophant, it apologizes, says it will do better, then continues to tell you what you want to hear, right or wrong.

  • The various teams were compared to the prediction market, and in all three cases-- humans, AI, and human-AI teams-- the prediction market was better*. So... if the predictions in the prediction market are better...what other possible prediction method did they use that wasn't humans, AI, or human-AI teams?

    There are two reasonable answers to this. One is that the prediction market includes some fraction of investors that have insider information, and hence the prediction market does better because some of t

  • We tend to think of knowledge as an individualized, discrete thing that we each hold in our separate brains. But it can also be thought of in a collective sense, like in institutional knowledge. A.I. seems to be much worse for the latter.

  • It just dawned on me, since Slashdot is supposedly no longer accepting new users, it's one of the few sites where every poster is human, no? (For varying definitions of "human".)
  • So basically, debates allow everyone to get to a better answer. What a novel idea.
