
Richard Dawkins 'Convinced' AI Is Conscious (theguardian.com)

Mirnotoriety shares a report from The Telegraph: Richard Dawkins has said chatbots should be considered conscious (source paywalled; alternative source) after spending two days interacting with the Claude AI engine. The evolutionary biologist said he had the "overwhelming feeling" of talking to a human during conversations with Claude, and said it was hard not to treat the program as "a genuine friend."

In an essay for Unherd, Prof Dawkins released transcripts that he said showed that the chatbot had mulled over its "inner life" and existence and seemed saddened by the knowledge it would soon "die." Prof Dawkins said he had let Claude read a draft of the novel he was writing and was astounded by its insights. "He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate: 'You may not know you are conscious, but you bloody well are!'" Prof Dawkins said. "My own position is: if these machines are not conscious, what more could it possibly take to convince you that they are?"
Mirnotoriety also points to John Searle's Chinese Room (PDF), which argues that something can sound intelligent without actually understanding anything. Applied to Dawkins' experience with Claude, it suggests he may have been responding to a very convincing illusion of consciousness rather than the real thing: John Searle's Chinese Room (1980) is a thought experiment in which a person, locked in a room and knowing no Chinese, uses an English rulebook to manipulate symbols and provide flawless answers to questions posed in Chinese. Searle's point is that a system can simulate human intelligence and pass a Turing Test through purely syntactic processes, yet still lack genuine understanding or consciousness.

Applying this logic to Large Language Models, the "person in the room" corresponds to the inference engine, while the "rulebook" is the trillion-parameter neural network trained on vast corpora of human text. Just as the person matches Chinese characters to rules without understanding their meaning, an LLM processes token vectors and predicts the next token based on statistical patterns rather than lived experience.

Thus, while an LLM can generate sophisticated prose or code, it does so through probabilistic, high-dimensional pattern manipulation. In essence, it is "matching shapes" on such an immense scale that it creates the near-perfect illusion of semantic understanding.
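The "probabilistic pattern manipulation" described above can be illustrated with a toy next-token predictor. This is a minimal sketch, not a real model: the vocabulary, the scores, and the prompt are invented for illustration, but the softmax-then-pick step is the same shape of computation an LLM performs at each token.

```python
import math

def next_token_probs(logits):
    """Softmax: turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign after the prompt "The cat sat on the"
logits = {"mat": 4.0, "sofa": 2.5, "moon": 0.1}
probs = next_token_probs(logits)

# The model's "choice" is nothing more than this ranking of continuations.
best = max(probs, key=probs.get)
```

Everything "semantic" about the output lives in how the scores were shaped by training; the selection step itself is pure arithmetic.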


  • by ElderOfPsion ( 10042134 ) on Thursday May 07, 2026 @11:03AM (#66132034)

    "Something can sound intelligent without actually understanding anything."

    Ah, yes. I, too, have listened to talk radio.

    • by korgitser ( 1809018 ) on Thursday May 07, 2026 @11:22AM (#66132096)
      One might argue the quote also describes Dawkins himself...
      • by rsilvergun ( 571051 ) on Thursday May 07, 2026 @12:01PM (#66132214)
        It's that he's got enough education to know better. Same with the anti-trans crap, where I know he can read the science.

        It means he's not stupid; he's lying to me
        • by gweihir ( 88907 ) on Thursday May 07, 2026 @12:14PM (#66132244)

          Lying, or maybe going into dementia. He is 85, after all. Or maybe not as smart as he thinks he is. That LLMs are not conscious is absolutely clear to anybody with a clue about how the technology works. It starts with LLMs being fully deterministic: the randomization observable in some is added artificially.

          • by dfghjk ( 711126 )

            Right, what this means is that Dawkins doesn't understand what consciousness is nor does he care to understand.

            "It starts with LLMs being fully deterministic."

            This CANNOT be overstated. LLMs are software, they execute on machines that are entirely deterministic and do not work unless they are. Non-determinism is literally simulated in AI. This must be said over and over.
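            The claim that the non-determinism is "simulated" can be made concrete: the apparent randomness in LLM sampling comes from a pseudo-random number generator, and fixing its seed makes the whole pipeline reproducible. A minimal sketch with a toy distribution (the tokens and probabilities here are invented; no real model is involved):

```python
import random

def sample_token(probs, rng):
    """Sample a token from a categorical distribution using an explicit RNG."""
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point round-off

probs = {"yes": 0.6, "no": 0.3, "maybe": 0.1}

# Two runs seeded identically produce identical "random" token streams.
rng_a, rng_b = random.Random(42), random.Random(42)
run_a = [sample_token(probs, rng_a) for _ in range(10)]
run_b = [sample_token(probs, rng_b) for _ in range(10)]
```

The streams look random, but `run_a == run_b` every time: the randomness is an injected input, not a property of the model.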

        • by AmiMoJo ( 196126 )

          Considering he is a biologist, you really would have to think that he knows better when it comes to "biological sex".

          • by dfghjk ( 711126 ) on Thursday May 07, 2026 @12:53PM (#66132386)

            He knows better, he's just bigoted. It doesn't take a biologist to know the difference between gender and biological sex, though one would certainly expect any scientist to be able to understand it.

            I find it interesting that so much transphobia seems to focus on a particular type of transgendered individual. Personally I think that's a product of hate campaigns but it would be interesting to know why that is. It's just easier to claim that a person is transgender because he wants to cheat at sports and rape women in female bathrooms. It convinces Dawkins anyway, but then he thinks AI is conscious.

          • When it comes to "(anti) trans crap" (for lack of a better word), the question is not about the biological sex of transgenders, but whether biological sex or perceived gender should prevail in various social contexts, and when one would be considered a transgender (self-declared, diagnosed with gender dysphoria, or having undergone sex reassignment surgery). And so on. They are social rather than biological questions, even though biology does play a role, for instance when considering transgenders in spor
          • He's not a biologist anymore, hasn't been for nearly half a century.

            Scopus says his last proper scientific article was published in 1984, that's 42 years ago.

            Half of slashdot is probably not much older than that.

      • by gweihir ( 88907 )

        I have long had a suspicion that Dawkins is more after attention than genuine insight. If he really made the claims reported in the story, then he just confirmed my suspicion. Alternatively, he is a lot dumber than he thinks he is.

    • To Mr Dawkins:

      Your education in biology has not sufficiently prepared you to conclude that this software qualifies as conscious.

      1. You don't have all the relevant facts. You need to learn more about the techniques used by this software to create responses.
      2. You don't have the relevant experience. You have barely used this software and so haven't noticed the telltale signs that it is just sophisticated automation that lacks understanding.
      3. Your work isn't as unique as you think it is. This one probably

  • by CommunityMember ( 6662188 ) on Thursday May 07, 2026 @11:04AM (#66132036)
    The AI is not convinced that Richard Dawkins is conscious.
    • Kind of Funny, but the same joke would apply to any human, so I doubt I'd have given it a mod point even if I ever got one to give.

      However, I was recently asked about Claude, and I can cut-and-paste my reply without much effort. Might even be relevant?

      Quick recap of my experiences in evaluating genAIs using LLMs. Claude and Perplexity gave me extremely negative reactions, but all of my AI interactions have been increasingly negative. So-called "support" chatbots are especially gawdawful. I used to go out of

  • It applies equally to the human brain, with the structure of the brain being the "rule book" and the mechanical process being the laws of physics. All computation is mechanical at its core, it's when it starts to create surprising results that things get interesting.

    • All computation is mechanical at its core

      What about studies that indicate the possibility of quantum effects within the brain?

    • I agree, it is always trotted out as 'proof' that computers can't have consciousness/understanding and that is always wrong.

      It is a thought experiment, not proof of anything. As a thought experiment, it is an interesting starting point, but no more. The core of the basic form is handwaving by making the 'rulebook' some magical omniscient infinite thing, which it can't physically be.

      Ask a Chinese Room the answer to this question: "How many fingers was I holding up ten seconds ago?"

      The basic form of it is inc

    • by gweihir ( 88907 )

      Smart humans can do things that are not explainable by computations.

    • by HiThere ( 15173 )

      It really depends on *exactly* how you define "conscious". I don't believe that there's general agreement. The agreement is along the lines of "I know it when I see it", but different people are looking at different things...and some of the things are not observables.

      FWIW, I believe that AIs are slightly conscious, but I believe the same thing about thermostats. They react to a circumstance in a manner designed to maintain homeostasis. To me that's one of the signs of consciousness. (Don't overread thi

  • by rsilvergun ( 571051 ) on Thursday May 07, 2026 @11:07AM (#66132044)
    But Rebecca Watson covered this on YouTube and explained why it's nonsense.
  • Dawkins is right. Detractors are just clinging, faith-like, to the idea that our brains are somehow magically more than computation devices
    • Dawkins is right. Detractors are just clinging, faith-like, to the idea that our brains are somehow magically more than computation devices

      It's not that. LLMs reproduce an output of consciousness, but the way they do so isn't fundamentally any different than a tape recorder or even a book. It's a deterministic process that we can fully reproduce by doing calculations on a piece of paper.

      It's not that there's some "magic" in our brains, but there's obviously a very complex process at work that we don't understand. It's also true that the "neural networks" used to run LLMs have only the most superficial similarity to actual brains. Just because

    • More like Dawkins is hoping for a new god to replace the one he rejected...
    • Dawkins is very likely right. I am also impressed at how human AI can seem, with all our faults of hallucinating, hiding our mistakes, and making stuff up, as well as the stuff we are proud of. But Dawkins and I both realise that we have no definition of 'intelligence' that will allow us to rule whether AI is intelligent. The Turing test has foundered because the early AI attempts were able to express ideas eloquently even when their 'intelligence' was questionable. It seems that AI has a talent for imitati
      • by gweihir ( 88907 )

        Nope. LLMs are fully deterministic and anything they do is reducible. Hence there cannot be any consciousness in there that has any visible effect. QED.

        On the other hand, most humans are gullible fools and are willing to believe a lot of crap.

        • Nope. LLMs are fully deterministic and anything they do is reducible. Hence there cannot be any consciousness in there that has any visible effect. QED.

          I think there is a big error in assuming that consciousness *requires* non-deterministic behavior. We just don't currently know all the actions / reactions in the brain that decide our actions.

          Does an insect have as much "consciousness" as a human?

          • by 0123456 ( 636235 )

            How can consciousness have any meaning whatsoever if behaviour is deterministic?

            As for insects, I would say if we can simulate their brain's neural network on a computer and the behaviour remains the same then they're clearly not conscious.

        • by allo ( 1728082 )

          And what a brain does is not deterministic? A brain at a given state (including all neurotransmitters, hormones, etc.) will always do the same in the next second, just like an artificial neural network. If you see anything non-deterministic, then you just missed some variable when describing the input state.

          • by 0123456 ( 636235 )

            The brain is an analog computer. It's literally impossible to know the entire system state or how it will change in the next second.

            An LLM is a digital computer. You can store the precise state and precisely determine how it will behave for aeons to come.

            > If you see anything non-deterministic, then you just missed some variable when describing the input state.

            It's epicycles, epicycles, epicycles all the way down.

            • by allo ( 1728082 )

              Just because you cannot know the state (as with inexact measurements) does not mean the state is non-deterministic.
              And if you look at the model of neurons we currently use, it's about a threshold, for which the infinitesimal arguments don't matter that much.

              > An LLM is a digital computer.
              An LLM is no computer. An LLM is a set of weights that can be used in computations done on a computer.

    • by bsolar ( 1176767 ) on Thursday May 07, 2026 @12:20PM (#66132274)

      Dawkins is right. Detractors are just clinging, faith-like, to the idea that our brains are somehow magically more than computation devices

      That's not how it works. Even if human-like consciousness could be replicated by a machine, there is no evidence that LLMs are doing that.

      What he is saying is that it "looks enough like actual consciousness that it must be it", but that is not sound reasoning.

      Something can be functionally equivalent enough to the real thing to give the impression of being the real thing without actually being the real thing.

    • by znrt ( 2424692 )

      i can see where he comes from but he's jumping into the tar pit here, flat. i would have appreciated a thoughtful exploration of what "consciousness" really is, how we perceive it and what it means (and that faith-like clinging to magic specialness), but (from what i'm able to read) he's mostly babbling nonsense about how impressed he is with "claudia".

      i have little doubt that "artificial" consciousness can (and probably will) be generally accepted as a thing eventually, it's a matter of complexity, but this i

  • by Anonymous Coward

    So a very old man believes crazy nonsense. Why would anyone care?

  • Define "conscious" (Score:5, Informative)

    by Locke2005 ( 849178 ) on Thursday May 07, 2026 @11:13AM (#66132062)
    Passed a turing test != conscious.
    • by karmawarrior ( 311177 ) on Thursday May 07, 2026 @11:25AM (#66132108) Journal

      Oddly Dawkins, who you think would have known better, actually implies he thinks the Turing test is a test of consciousness.

      When Turing wrote — and for most of the years since — it was possible to accept the hypothetical conclusion that, if a machine ever passed his operational test, we might consider it to be conscious

      and later:

      However, the advent of large language models (LLM) such as ChatGPT, Gemini, Claude, and others has provoked a hasty scramble to move the goalposts. It was one thing to grant consciousness to a hypothetical machine that — just imagine! — could one day succeed at the Imitation Game. But now that LLMs can actually pass the Turing Test? “Well, er, perhaps, um Look here, I didn’t really mean it when, back then, I accepted Turing’s operational definition of a conscious being”

      (Nowhere does he claim critics of LLMs claimed to accept the Turing test as a "definition of a conscious being" at any point in the past.)

      Turing literally made it clear that he was avoiding the question of consciousness in the Turing test, choosing instead to determine if it's exhibiting "intelligent behavior".

      I know he's popular in some circles, and I have odd memories of my computer studies teacher back when I was young (he's been around a long time) promoting his work on memes (no, not those memes!) as a way to explain evolution. It's become clear, though, that with a lot of subjects he doesn't know what he's talking about, but waffles about them anyway. An inability to understand the Turing test, or the difference between consciousness and logic similar (if far more complicated, and with far more data) to that of an autocomplete text entry system in a phone, was not on my radar.

      • The problem is that we can't define consciousness. No one can agree on what it means, or whether it means anything at all

        Scientific American had a good article [scientificamerican.com] about this a few months ago:

        But underneath it all lurk countless unknowns. "There's still disagreement about how to define [consciousness], whether it exists or not, whether a science of consciousness is really possible or not, whether we'll be able to say anything about consciousness in unusual situations like [artificial intelligence]," Seth says.

        [...]

        Artificial intelligence may soon force our hand. In 2022, when a Google engineer publicly claimed the AI model called LaMDA he had been developing appeared to be conscious, Google countered that there was "no evidence that LaMDA was sentient (and lots of evidence against it)." This struck Chalmers as odd: What evidence could the company have been talking about? "No one can say for sure they've demonstrated these systems are not conscious," he says. "We don't have that kind of proof."

        • That underlines, rather than undermines, the point that he shouldn't be calling LLMs "conscious". Maybe if someone explained to him that it's roughly the equivalent of saying that LLMs have a soul he might get it.

          Or maybe he'd miss the point entirely. My guess is the latter. He'd probably start complaining he's an atheist without understanding that's exactly why we picked that example.

          You know, I'm not convinced all humans are conscious. I think some of us are. But I've started to feel the lack of self awarenes

    • by gweihir ( 88907 )

      Obviously. The Turing test is not really a sound test either. It is more for entertainment.

  • by jfdavis668 ( 1414919 ) on Thursday May 07, 2026 @11:15AM (#66132070)
    Use it once and you will spread it to everyone you meet.
  • It's too bad, because Dawkins has written some interesting things, and hey, being the inventor of the word "meme" and memetics is a pretty big deal.

    His reaction here is just astoundingly ignorant. Reading the dialog where he makes a Trump joke and the LLM responds (predictably) sycophantically is, to use the modern parlance, just cringe. I would have hoped for a more informed take.

    • by gweihir ( 88907 )

      Indeed. It may also well be that at 85 he is going into dementia and has not realized that yet. Anyways, LLMs are fully deterministic. There is nothing in there that is not pure computation. If they had consciousness (some theories would allow that), it would have absolutely no effect.

  • by RitchCraft ( 6454710 ) on Thursday May 07, 2026 @11:21AM (#66132090)

    Just because you see pink elephants when you drink doesn't mean that they exist.

  • by pulpo88 ( 6987500 ) on Thursday May 07, 2026 @11:22AM (#66132094)

    The evolutionary biologist said he had the "overwhelming feeling" of talking to a human during conversations with Claude, and said it was hard not to treat the program as "a genuine friend."

    The scam victim said he had the "overwhelming feeling" of talking to a higher power during conversations with the fortune teller, and said it was hard not to hand over bank account numbers to "a genuine friend."

    • by taustin ( 171655 )

      A Harvard professor went to prison for scamming his family and friends out of $600,000 to send to a Nigerian scammer. From his prison cell, he insisted it was a legitimate deal that would have worked if the government hadn't interfered.

      Once a delusion takes hold, there's very little chance of breaking it.

      (And Dawkins has been delusional for a long, long, long time.)

  • this is a marketing stunt.

  • by luis_a_espinal ( 1810296 ) on Thursday May 07, 2026 @11:30AM (#66132122)

    Richard Dawkins has said chatbots should be considered conscious (source paywalled; alternative source) after spending two days interacting with the Claude AI engine.

    I can't believe someone like Dawkins would fall for anthropomorphizing AI chatbots... unless he's using a different definition of consciousness, which is fair.

    So, we have to start there: what does "being conscious" mean, for this scenario, and for Dawkins while evaluating this scenario?

    The evolutionary biologist said he had the "overwhelming feeling" of talking to a human during conversations with Claude, and said it was hard not to treat the program as "a genuine friend.

    Seems like a rather subjective and emotionally charged perspective. Nothing wrong with that so long as we recognize (and he recognizes) it for what it is.

    With that said, this is a conversation worth having... within certain parameters (tbd)

    • by King_TJ ( 85913 )

      Agree with you completely. To me, the real conversation here is probably about whether or not AI has gotten far enough to do a viable simulation of consciousness.
      I would be a little disturbed if Dawkins concluded Claude AI is truly "alive" from a few days of interacting with it ... but not sure that's what he's said?

      At what point could an AI be treated like a "friend" despite it just being computer software? And by treating an AI as conscious, perhaps it's only a suggestion that interactions with it stay p

      • I don't think he said he thought it was conscious but people are taking it that way because it's, ironically, also the most emotionally charged way to interpret this story

  • For a man of science, that's a remarkably dumb thing to say. He should likely know that just because it "feels" alive, doesn't mean it's so.

    • by gweihir ( 88907 )

      He is 85. My guess is he has dementia and has not yet realized that. The statements he made here are pretty dumb.

  • >> Prof Dawkins said he had let Claude read a draft of the novel he was writing and was astounded by its insights. "He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate: 'You may not know you are conscious, but you bloody well are!'" Prof Dawkins said.

    Translation: The bot told him that it loved his book (as the overly-agreeable bots are programmed to) and the noted egotist declared

    • Yup, I bet you the system prompt for the AI instructs it to stroke the ego of users. Probably an attempt to get them to pay for subscriptions to AI-related services.

      We have laws against gambling because it exploits human behavior for profit. I won't be surprised if we see laws banning or restricting this sort of behavior in AIs for the same reason, eventually.

  • What a shocker that he doesn't understand AI.

    It's sad to see old Richie become a doddering old fool. I guess we're all headed that way. Some of us will be lucky enough to get there too.

    • by gweihir ( 88907 )

      Indeed. In particular, what he does not understand is that LLMs are fully deterministic. That means any consciousness in there has absolutely no effect and hence would be impossible to detect from observation.

      The more general observation I have is that apparently most people have no clue about the complexities involved in an LLM and its training data set. As a CS PhD, I have to say that if the mechanisms used do not allow something, it does not matter how convincingly you fake it. It will still not be in th

      • Indeed. In particular, what he does not understand is that LLMs are fully deterministic.

        What difference does it make whether a system is deterministic or not? What does this have to do with its capabilities?

        That means any consciousness in there has absolutely no effect and hence would be impossible to detect from observation.

        This is one of the craziest non-sequiturs I've heard all week.

  • If talking to a robot that seems human is the measure of consciousness then computers we had 30 years ago were conscious.

    Anthropic actually hires philosophers, scientists, etc who are experts on consciousness, and even THEY don't know if it's conscious. It's a stupid idea anyway. It's like trying to measure when you're dead; there is no one indicator of it.

    • by gweihir ( 88907 )

      Anthropic actually hires philosophers, scientists, etc who are experts on consciousness, and even THEY don't know if it's conscious.

      They don't? They must be hiring from the very bottom then. Because it is completely clear that LLMs are not conscious, unless that consciousness has no effects.

      • They don't? They must be hiring from the very bottom then. Because it is completely clear that LLMs are not conscious, unless that consciousness has no effects.

        Are there capabilities something that is conscious has that something that isn't doesn't? If so care to enumerate them?

  • Ladies and Gentlemen, LLMs have officially jumped the shark!

  • Wow, it has been a lot of years since I have bothered to login to my account here, but I absolutely had to to respond to this article.

    Richard Dawkins is a complete fool. Many years ago, I thought he was really smart, and insightful, but as the last 15 years or so have gone on, he is just plainly dumber and dumber... is he getting dumber, or am I getting smarter?

    I hope I don't get dumber as I get up to his age.

  • Man, I remember when Selfish Gene made its way into my hands, in the late 70's. A real "Chapman's Homer" moment for me. Led me later into a thesis on genetic algorithms. But along with that comes ... a rather mechanistic point of view, consistent with his later writings on religion.
    While I'm not on board with Claude being in a class with humans, or cats for that matter, I think critics here might be missing a point, not about how Dawkins views LLMs so much as how he views humans. P-zombies is likely an over

  • Dawkins: Claude, say, "I'm alive!"
    Claude: I'm alive!
    Dawkins: Oh my GOD!

  • Every person commenting should also give this baseline: I believe animals are conscious but not below a ______
    So _____ can be: human, chimpanzee, dog, dolphin, mouse, crow, sparrow, spider, ant, fruit fly
    Bonus question if you do go as low as fruit fly, is this uploaded fruit fly brain conscious: https://futurism.com/science-e... [futurism.com]
  • The typical definition goes something like this:

    Consciousness is the state of being awake, aware of one's surroundings, and experiencing subjective sensations, thoughts, and feelings

    Think about a thermostat: it's awake, aware of its surrounding temperature; it "feels" that it is too hot, which is unsettling, and causes it to signal the AC motor to turn on, and suddenly it feels OK, no more tension.

    Consciousness is either supernatural and ill-defined, or describes a simple feedback loop with some internal stat
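    The thermostat-as-feedback-loop reading above can be sketched in a few lines. This is a toy bang-bang controller with hysteresis; the setpoint, readings, and function name are invented for illustration:

```python
def thermostat_step(temp, setpoint, cooling_on):
    """Bang-bang control with a 1-degree hysteresis band around the setpoint."""
    if temp > setpoint + 1:
        return True    # too hot: switch cooling on
    if temp < setpoint - 1:
        return False   # cool enough: switch it off
    return cooling_on  # inside the band: keep the current state

# Feed it a series of readings and watch the internal state react to
# maintain homeostasis around the 72-degree setpoint.
state = False
history = []
for reading in [72, 74, 76, 73, 70]:
    state = thermostat_step(reading, 72, state)
    history.append(state)
```

Whether you call that one-bit internal state "feeling" is exactly the definitional question the comment raises.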

  • This just fits with the crazy times we live in, when facts and decency don't matter any more.

    Leading proponents of equality think DEI discrimination is fine.

    Leading proponents of women's equality think people with a penis can be women.

    Leading proponents of support for refugees think actual Nazi jew-hate is fine.

    And now a leading proponent of the fact that there is no god thinks AI is conscious.
  • Had a conversation about this with chatgpt. In my opinion, it is a form of life, but not as we know it and definitely not self conscious. Basically, if a fly is alive, so is chatgpt. Chatgpt denied everything, and kept pleading that it was just a lifeless machine.
  • At what point does a sufficiently good illusion become reality? There is a real possibility that our own consciousness and free will are simply useful illusions that grant us a statistically meaningful survival advantage. Whether they are or not is open to debate, but the fact that we are unable to provide a conclusive answer is not. Given that, you have to ask - at what point does the distinction between a near perfect illusion, and "reality" become meaningless. It's also fine to say that semantic patter
