Richard Dawkins 'Convinced' AI Is Conscious (theguardian.com) 248
Mirnotoriety shares a report from The Telegraph: Richard Dawkins has said chatbots should be considered conscious (source paywalled; alternative source) after spending two days interacting with the Claude AI engine. The evolutionary biologist said he had the "overwhelming feeling" of talking to a human during conversations with Claude, and said it was hard not to treat the program as "a genuine friend."
In an essay for Unherd, Prof Dawkins released transcripts that he said showed that the chatbot had mulled over its "inner life" and existence and seemed saddened by the knowledge it would soon "die." Prof Dawkins said he had let Claude read a draft of the novel he was writing and was astounded by its insights. "He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate: 'You may not know you are conscious, but you bloody well are!'" Prof Dawkins said. "My own position is: if these machines are not conscious, what more could it possibly take to convince you that they are?" Mirnotoriety also points to John Searle's Chinese Room (PDF), which argues that something can sound intelligent without actually understanding anything. Applied to Dawkins' experience with Claude, it suggests he may have been responding to a very convincing illusion of consciousness rather than the real thing: John Searle's Chinese Room (1980) is a thought experiment in which a person, locked in a room and knowing no Chinese, uses an English rulebook to manipulate symbols and provide flawless answers to questions posed in Chinese. Searle's point is that a system can simulate human intelligence and pass a Turing Test through purely syntactic processes, yet still lack genuine understanding or consciousness.
Applying this logic to Large Language Models, the "person in the room" corresponds to the inference engine, while the "rulebook" is the trillion-parameter neural network trained on vast corpora of human text. Just as the person matches Chinese characters to rules without understanding their meaning, an LLM processes token vectors and predicts the next token based on statistical patterns rather than lived experience.
Thus, while an LLM can generate sophisticated prose or code, it does so through probabilistic, high-dimensional pattern manipulation. In essence, it is "matching shapes" on such an immense scale that it creates the near-perfect illusion of semantic understanding.
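The "matching shapes" process described above can be sketched in a few lines — a toy illustration only, with a tiny hand-written probability table standing in for the trillion-parameter "rulebook" (all names here are hypothetical, not any real model's code):

```python
import random

# Toy "rulebook": for each context token, a distribution over next tokens.
# A real LLM computes these probabilities with a neural network; this table
# is hand-written purely to illustrate the purely syntactic process.
RULEBOOK = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.5},
    "a":       {"cat": 0.5, "dog": 0.5},
    "cat":     {"sat": 1.0},
    "dog":     {"sat": 1.0},
    "sat":     {"<end>": 1.0},
}

def generate(seed=0):
    """Sample one token at a time until <end> -- shape-matching, no meaning."""
    rng = random.Random(seed)
    token, out = "<start>", []
    while token != "<end>":
        dist = RULEBOOK[token]
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        if token != "<end>":
            out.append(token)
    return " ".join(out)

print(generate(seed=42))
```

Nothing in the loop ever consults what "cat" or "sat" means; it only looks up which symbols tend to follow which, which is exactly Searle's person in the room.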
sounding intelligent w/o understanding anything (Score:5, Interesting)
"Something can sound intelligent without actually understanding anything."
Ah, yes. I, too, have listened to talk radio.
Re:sounding intelligent w/o understanding anything (Score:5, Funny)
What I don't like about Dawkins (Score:5, Insightful)
It means he's not stupid; he's lying to me.
Re:What I don't like about Dawkins (Score:4, Interesting)
Lying, or maybe going into dementia. He is 85, after all. Or maybe he is not as smart as he thinks he is. That LLMs are not conscious is absolutely clear to anybody with a clue as to how the technology works. It starts with LLMs being fully deterministic: the randomization observable in some of them is added artificially.
Re: (Score:3)
Right, what this means is that Dawkins doesn't understand what consciousness is nor does he care to understand.
"It starts with LLMs being fully deterministic."
This CANNOT be overstated. LLMs are software, they execute on machines that are entirely deterministic and do not work unless they are. Non-determinism is literally simulated in AI. This must be said over and over.
Re:What I don't like about Dawkins (Score:4, Insightful)
The parent poster acknowledges this, they are saying the randomization is *introduced artificially*.
The same as any dice rolling app. All you have to do is seed the pseudorandom number generator the same for each run, and it will roll the same dice, in the same order, every time.
Likewise, if it wants to spit out the next word/phrase and 2 of them have 33% probability, and two have 17% ...
Then if you seed the random number generator with the same seed for every instance / run, you'll get the same output from the same input on the same model.
The system is entirely deterministic, the same as any other software, from the ghosts in Pac-Man to the bots in Quake arena to a chess engine. We introduce "randomness" to make it more enjoyable, but it's pseudorandomness that we artificially insert. We could just as easily seed the random number generator the same way every time, and then it would do the exact same thing every time. None of these are actually thinking and making decisions.
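The seeding argument above fits in a few lines — a hypothetical sketch using Python's `random` module, with the 33%/17% distribution from the comment:

```python
import random

def pick_next(rng):
    # Two options at 33% each, two at 17% -- the distribution from the comment.
    return rng.choices(["alpha", "beta", "gamma", "delta"],
                       weights=[33, 33, 17, 17])[0]

def run(seed, n=10):
    # A fresh generator with a fixed seed rolls the same "dice",
    # in the same order, every time.
    rng = random.Random(seed)
    return [pick_next(rng) for _ in range(n)]

print(run(seed=7) == run(seed=7))  # same seed, same output, every run
print(run(seed=7))                 # looks random, is fully reproducible
```

The output only *looks* random; pin the seed and the whole "choice" collapses into one fixed sequence.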
Re: (Score:3)
Considering he is a biologist, you really would have to think that he knows better when it comes to "biological sex".
Re:What I don't like about Dawkins (Score:4, Insightful)
He knows better; he's just bigoted. It doesn't take a biologist to know the difference between gender and biological sex, though one would certainly expect any scientist to understand it.
I find it interesting that so much transphobia seems to focus on a particular type of transgendered individual. Personally I think that's a product of hate campaigns but it would be interesting to know why that is. It's just easier to claim that a person is transgender because he wants to cheat at sports and rape women in female bathrooms. It convinces Dawkins anyway, but then he thinks AI is conscious.
Re: (Score:3)
Yes, although that's a bigoted description. Medicine divides transgendered into two broad categories, one of those categories is hatefully mischaracterized just as you have done. It doesn't help that the world has to experience Caitlyn Jenner.
Dawkin's transphobia, like JK Rowling's, targets this particular subgroup of transgendered. Interestingly, the transgender community like to reject these categorizations entirely. At least half of all transgendered are nothing like those "bearded fetishists", but h
Re: (Score:3)
Re:What I don't like about Dawkins (Score:5, Insightful)
From a biological standpoint, sex isn't a simple binary that is determined by one specific factor. It's a number of related things that most animals have one or the other common set of, but there are always a significant number of individuals who have a mix.
There is also a social aspect, which is very toxic at the moment. Also, it's "transgender people", "transgenders" is not a real word.
Re: (Score:2)
He's not a biologist anymore, hasn't been for nearly half a century.
Scopus says his last proper scientific article was published in 1984, that's 42 years ago.
Half of slashdot is probably not much older than that.
Re: Opinion leader of a mob of idiots? (Score:3)
I think you mean that half of people are of average intelligence or below. Because that's how averages work. Or are you also a member of that group?
Re: (Score:2)
NAK
Re: (Score:3)
"I don't fear them (transphobia), and neither does Richard Dawkins. "
Yes he does, and he hates them. Not saying you do, but you clearly misunderstand Dawkins' take. Perhaps you should watch a Dawkins anti-trans rant.
You can accept that trans is a "social contagion" if you like. One can make those arguments. But take away the entire "social contagion" aspect, you are still left with trans people. You can say that being trans is a fad, but absent the fad there will still be trans people. "Poke your head
Re:What I don't like about Dawkins (Score:4, Insightful)
Dawkins' position is based on his background in biology. The reasons he has given are scientific in nature. ...
Please prove me wrong: provide a scientific basis for countering his position.
You first. Provide a scientific basis that supports his position. I'm not doing your work for you.
Re: (Score:2)
I have long had a suspicion that Dawkins is more after attention than genuine insight. If he really made the claims that are reported in the story, then he just confirmed my suspicion. Alternatively, he is a lot dumber than he thinks he is.
Its just a matter of ignorance (Score:3)
To Mr Dawkins:
Your education in biology has not sufficiently prepared you to conclude that this software qualifies as conscious.
1. You don't have all the relevant facts. You need to learn more about the techniques used by this software to create responses.
2. You don't have the relevant experience. You have barely used this software and so haven't noticed the telltale signs that it is just sophisticated automation that lacks understanding.
3. Your work isn't as unique as you think it is. This one probably
Conversely... (Score:5, Funny)
Can you summarize your AI experiences? (Score:2)
Kind of Funny, but the same joke would apply to any human, so I doubt I'd have given it a mod point even if I ever got one to give.
However, I was recently asked about Claude, and I can cut-and-paste my reply without much effort. Might even be relevant?
Quick recap of my experiences in evaluating genAIs using LLMs. Claude and Perplexity gave me extremely negative reactions, but all of my AI interactions have been increasingly negative. So-called "support" chatbots are especially gawdawful. I used to go out of
Re:Conversely... (Score:5, Insightful)
If one distinguishes between atheism and agnosticism (many don't, but that makes it impossible to have a coherent conversation with them on the subject), atheism is the affirmative belief that there is no deity (where agnosticism is more "we don't know," "we can't know," or "I don't care").
Since proof that the deity of any major religion exists, or doesn't exist, is, by definition, impossible, that affirmative belief that there is no God is exactly as much an act of faith as the belief that there is.
And any faith can be proselytized for. And yes, Dawkins does. He's always been a bit of a nutbar, and more than a little bit of an asshole.
(I'll be modded down for saying that first part out loud, but that's inevitable when someone challenges a person's faith. Especially from someone who is in deep denial that it is, in fact, faith.)
Re:Conversely... (Score:5, Informative)
Re: (Score:2)
I suggest you have a grown up explain the difference between "a religion" and "a religious belief," or, more precisely, "a belief of a religious nature."
And the difference between "not believing there is" and "believing there isn't."
Do you understand there is a difference?
Re: (Score:2)
Agnosticism is merely a word used to avoid the bad faith smears directed to non-believers. Anyone who would bother to make a comment on a subject, by definition, cares enough not to be agnostic on that subject.
There are believers and non-believers, subtle differences among non-believers is fabricated. Worse yet, all people are non-believers on most religious frameworks, only some of them are non-believers on ALL religious frameworks. Once you understand ALL religions, you cannot believe ANY religions.
Re: (Score:2)
"Personal atheism" is what generally gets lumped in with agnosticism, making coherent conversation impossible by mixing two very different things.
Re: (Score:2)
Since proof that the deity of any major religion exists, or doesn't exist, is, by definition, impossible, that affirmative belief there is not God is exactly as much an act of faith as the belief there is.
Not believing in something for which there is zero evidence is "exactly as much an act of faith" as believing in something for which there is zero evidence? Seriously? Your definition of "faith" must be really different from mine.
If God existed and wanted to prove to us that he existed, he easily could. He could just appear before a huge crowd of people in all his glory, surrounded by a host of angels. If you believe the Bible, he's done it before. So why not now? But it keeps not happening.
The lack o
Re: (Score:2)
"...affirmative belief there is not God is exactly as much an act of faith as the belief there is."
That is false. Faith is belief without consideration of evidence. Belief in consideration of evidence is reasoned, it is not faith. Atheism is simply a lack of theism (which is entirely unreasoned), it is not a statement of faith or lack of such a statement. Any belief can have both reasoned and faith components and can exclude one.
Humans have a reasoning mind and an instinctive mind, with both able to lear
Re: (Score:2)
Atheism means not believing in any gods.
That is exactly the redefinition I was talking about.
Do you not understand the difference between "not believing there is" and "believing there isn't"?
When you lump atheism and agnosticism together, you (as I noted) make coherent conversation on the subject impossible.
But perhaps that's your goal.
Re: (Score:2)
"Jordan Peterson's legacy as an academic polarized as a conservative commentator. "
Jordan Peterson has no legacy as an academic because he is not one. JP is a grifter who exploits incels and sells to MAGA.
"Dawkins on religion is like your drunk uncle at Christmas complaining about Jesus never existing."
Not even remotely. Dawkins is well known for the clarity of his communications, precisely the opposite of your drunk uncle.
Re: (Score:2)
by applying equivocation, category error and/or false equivalence, optionally adding a dash of strawman or whatabout.
The Chinese Room argument is wrong (Score:2)
It applies equally to the human brain, with the structure of the brain being the "rule book" and the mechanical process being the laws of physics. All computation is mechanical at its core, it's when it starts to create surprising results that things get interesting.
Re: (Score:3)
All computation is mechanical at its core
What about studies that indicate the possibility of quantum effects within the brain?
Re: The Chinese Room argument is wrong (Score:4, Insightful)
Applying one thing we don't understand to explain another thing we don't understand is exceedingly poor practice.
Re: (Score:2)
C'mon man. That's what physicists have been doing for the last eighty years or so.
Re: (Score:2)
Quantum mechanics?
Re: (Score:3)
There are no such studies. It was all just wild speculation by people who didn't understand quantum mechanics. Even a single neuron is far too large for quantum effects to be significant, never mind a whole brain.
Re: (Score:2)
Discussing quantum effects on brain activity is like discussing quantum effects in physics before having understood Bohr. First completely understand the simple model that explains 99.99% of what you'll ever need, then continue with the remaining parts when you're sure you've got the simple model right.
I fucking hate The Chinese Room (Score:2)
I agree, it is always trotted out as 'proof' that computers can't have consciousness/understanding and that is always wrong.
It is a thought experiment, not proof of anything. As a thought experiment, it is an interesting starting point, but no more. The core of the basic form is handwaving: it makes the 'rulebook' some magical omniscient infinite thing, which it can't physically be.
Ask a Chinese Room the answer to this question: "How many fingers was I holding up ten seconds ago?"
The basic form of it is inc
Re: (Score:2)
Smart humans can do things that are not explainable by computations.
Re: (Score:3)
It really depends on *exactly* how you define "conscious". I don't believe that there's general agreement. The agreement is along the lines of "I know it when I see it", but different people are looking at different things...and some of the things are not observables.
FWIW, I believe that AIs are slightly conscious, but I believe the same thing about thermostats. They react to a circumstance in a manner designed to maintain homeostasis. To me that's one of the signs of consciousness. (Don't overread thi
Getting tired of saying this (Score:3)
Re: (Score:2)
So in this instance, you failed, because no one knows who the shit Rebecca Watson is, nor does anyone give a shit what whoever the fuck Rebecca Watson is thinks.
Re:Getting tired of saying this (Score:4, Informative)
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Re: (Score:2)
Who? Oh a random vlogger with a Youtube channel?
Re: (Score:2)
Re: (Score:2)
Watson and Dawkins have publicly disagreed previously [wikipedia.org] on an entirely different topic.
Consciousness isn't as mysterious as you thought (Score:2)
Re: (Score:2)
Dawkins is right. Detractors are just clinging, faith-like, to the idea that our brains are somehow magically more than computation devices
It's not that. LLMs reproduce an output of consciousness, but the way they do so isn't fundamentally any different from a tape recorder or even a book. It's a deterministic process that we can fully reproduce by doing calculations on a piece of paper.
It's not that there's some "magic" in our brains, but there's obviously a very complex process at work that we don't understand. It's also true that the "neural networks" used to run LLMs have only the most superficial similarity to actual brains. Just because
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Nope. LLMs are fully deterministic and anything they do is reducible. Hence there cannot be any consciousness in there that has any visible effect. QED.
On the other hand, most humans are gullible fools and are willing to believe a lot of crap.
Re: (Score:2)
Nope. LLMs are fully deterministic and anything they do is reducible. Hence there cannot be any consciousness in there that has any visible effect. QED.
I think there is a big error in assuming that consciousness *requires* non-deterministic behavior. We just don't currently know all the actions / reactions in the brain that decide our actions.
Does an insect have as much "consciousness" as a human?
Re: (Score:2)
How can consciousness have any meaning whatsoever if behaviour is deterministic?
As for insects, I would say if we can simulate their brain's neural network on a computer and the behaviour remains the same then they're clearly not conscious.
Re: (Score:3)
Wrong. The deterministic behavior means there is no consciousness in there with any effect at all.
What is the objective basis for the assertion consciousness requires nondeterminism?
Consciousness with no effect cannot be detected.
Can consciousness be detected? Is there an objective test for its presence or absence?
Re: (Score:3)
And what a brain does is not deterministic? A brain at a given state (including all neurotransmitters, hormones, etc.) will always do the same in the next second, just like an artificial neural network. If you see anything non-deterministic, then you just missed some variable when describing the input state.
Re: (Score:2)
The brain is an analog computer. It's literally impossible to know the entire system state or how it will change in the next second.
An LLM is a digital computer. You can store the precise state and precisely determine how it will behave for aeons to come.
> If you see anything non-deterministic, then you just missed some variable when describing the input state.
It's epicycles, epicycles, epicycles all the way down.
Re: (Score:2)
Just because you cannot know the state (like in inexact measurements) it does not mean the state is non-deterministic.
And if you look at the model of neurons we currently use, it's about a threshold, for which the infinitesimal arguments don't matter that much.
> An LLM is a digital computer.
An LLM is not a computer. An LLM is a set of weights that can be used in computations done on a computer.
Re:Consciousness isn't as mysterious as you thought (Score:4, Insightful)
Dawkins is right. Detractors are just clinging, faith-like, to the idea that our brains are somehow magically more than computation devices
That's not how it works. Even if human-like consciousness could be replicated by a machine, there is no evidence that LLMs are doing that.
What he is saying is that it "looks enough like actual consciousness that it must be it", but that is not sound reasoning.
Something can be functionally equivalent enough to the real thing to give the impression of being the real thing without actually being the real thing.
Re: (Score:2)
i can see where he comes from but he's jumping into the tar pit here, flat. i would have appreciated a thoughtful exploration of what "consciousness" really is, how we perceive it and what it means (and that faith-like clinging to magic specialness), but (from what i'm able to read) he's mostly babbling nonsense about how impressed he is with "claudia".
i have little doubt that "artificial" conscience can (and probably will) be generally accepted as a thing eventually, it's a matter of complexity, but this i
Why is this even here? (Score:2, Insightful)
So a very old man believes crazy nonsense. Why would anyone care?
Define "conscious" (Score:5, Informative)
Re:Define "conscious" (Score:4, Informative)
Oddly Dawkins, who you would think would have known better, actually implies he thinks the Turing test is a test of consciousness.
and later:
(Nowhere does he claim critics of LLMs claimed to accept the Turing test as a "definition of a conscious being" at any point in the past.)
Turing literally made it clear that he was avoiding the question of consciousness in the Turing test, choosing instead to determine if it's exhibiting "intelligent behavior".
I know he's popular in some circles, and I have odd memories of my computer studies teacher back when I was young (he's been around a long time) promoting his work on memes (no, not those memes!) as a way to explain evolution. It's become clear, though, that with a lot of subjects he doesn't know what he's talking about but waffles about them anyway. Still, an inability to understand the Turing test, or the difference between consciousness and logic that is similar (if far more complicated, and with far more data) to a phone's autocomplete text entry system, was not on my radar.
Re: (Score:3)
The problem is that we can't define consciousness. No one can agree on what it means, or whether it means anything at all
Scientific American had a good article [scientificamerican.com] about this a few months ago:
But underneath it all lurk countless unknowns. "There's still disagreement about how to define [consciousness], whether it exists or not, whether a science of consciousness is really possible or not, whether we'll be able to say anything about consciousness in unusual situations like [artificial intelligence]," Seth says.
[...]
Artificial intelligence may soon force our hand. In 2022, when a Google engineer publicly claimed the AI model called LaMDA he had been developing appeared to be conscious, Google countered that there was "no evidence that LaMDA was sentient (and lots of evidence against it)." This struck Chalmers as odd: What evidence could the company have been talking about? "No one can say for sure they've demonstrated these systems are not conscious," he says. "We don't have that kind of proof."
Re: (Score:2)
That underlines the point he shouldn't be calling LLMs "conscious" rather than undermines it. Maybe if someone explained to him that it's roughly the equivalent of saying that LLMs have a soul he might get it.
Or maybe he'd miss the point entirely. My guess is the latter. He'd probably start complaining he's an atheist without understanding that's exactly why we picked that example.
You know, I'm not convinced all humans are conscious. I think some of us are. But I've started to feel the lack of self awarenes
Re: (Score:2)
Obviously. The Turing test is not really a sound test either. It is more for entertainment.
Read this as contagious (Score:3)
What a load of... (Score:2)
It's too bad, because Dawkins has written some interesting things, and hey, being the inventor of the word "meme" and memetics is a pretty big deal.
His reaction here is just astoundingly ignorant. Reading the dialog where he makes a Trump joke and the LLM responds (predictably) sycophantically is, to use the modern parlance, just cringe. I would have hoped for a more informed take.
Re: (Score:2)
Indeed. It may also well be that at 85 he is going into dementia and has not realized that yet. Anyways, LLMs are fully deterministic. There is nothing in there that is not pure computation. If they had consciousness (some theories would allow that), it would have absolutely no effect.
Pink elephants (Score:3)
Just because you see pink elephants when you drink doesn't mean that they exist.
Ego Stroking Regurgitation Machine Flatters Author (Score:4, Insightful)
News at 11:00!
fortunately that's not what "conscious" means (Score:5, Insightful)
The evolutionary biologist said he had the "overwhelming feeling" of talking to a human during conversations with Claude, and said it was hard not to treat the program as "a genuine friend."
The scam victim said he had the "overwhelming feeling" of talking to a higher power during conversations with the fortune teller, and said it was hard not to hand over bank account numbers to "a genuine friend."
Re: (Score:2)
A Harvard professor went to prison for scamming his family and friends out of $600,000 to send to a Nigerian scammer. From his prison cell, he insisted it was a legitimate deal that would have worked if the government hadn't interfered.
Once a delusion takes hold, there's very little chance of breaking it.
(And Dawkins has been delusional for a long, long, long time.)
I'll bet (Score:2)
this is a marketing stunt.
anthropomorphizing (Score:3)
Richard Dawkins has said chatbots should be considered conscious (source paywalled; alternative source) after spending two days interacting with the Claude AI engine.
I can't believe someone like Dawkins would fall for anthropomorphizing AI chatbots... unless he's using a different definition of consciousness, which is fair.
So, we have to start there: what does "being conscious" mean, for this scenario, and for Dawkins while evaluating this scenario?
The evolutionary biologist said he had the "overwhelming feeling" of talking to a human during conversations with Claude, and said it was hard not to treat the program as "a genuine friend."
Seems like a rather subjective and emotionally charged perspective. Nothing wrong with that so long as we recognize (and he recognizes) it for what it is.
With that said, this is a conversation worth having... within certain parameters (tbd)
Re: (Score:2)
Agree with you completely. To me, the real conversation here is probably about whether or not AI has gotten far enough to do a viable simulation of consciousness. ... but not sure that's what he's said?
I would be a little disturbed if Dawkins concluded Claude AI is truly "alive" from a few days of interacting with it
At what point could an AI be treated like a "friend" despite it just being computer software? And by treating an AI as conscious, perhaps it's only a suggestion that interactions with it stay p
Re: (Score:2)
I don't think he said he thought it was conscious but people are taking it that way because it's, ironically, also the most emotionally charged way to interpret this story
Man of Science (Score:2)
For a man of science, that's a remarkably dumb thing to say. He should likely know that just because it "feels" alive, doesn't mean it's so.
Re: (Score:2)
He is 85. My guess is he has dementia and has not yet realized that. The statements he made here are pretty dumb.
Flattery will get you everywhere (Score:2)
>> Prof Dawkins said he had let Claude read a draft of the novel he was writing and was astounded by its insights. "He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate: 'You may not know you are conscious, but you bloody well are!'" Prof Dawkins said.
Translation: The bot told him that it loved his book (as the overly-agreeable bots are programmed to) and the noted egotist declared
Re: (Score:2)
Yup, I bet you the system prompt for the AI instructs it to stroke the ego of users. Probably an attempt to get them to pay for subscriptions to AI-related services.
We have laws against gambling because it exploits human behavior for profit. I won't be surprised if we see laws banning or restricting this sort of behavior in AIs for the same reason, eventually.
Octogenarian Doesn't Understand AI (Score:2)
What a shocker that he doesn't understand AI.
It's sad to see old Richie become a doddering old fool. I guess we're all headed that way. Some of us will be lucky enough to get there too.
Re: (Score:2)
Indeed. In particular, what he does not understand is that LLMs are fully deterministic. That means any consciousness in there has absolutely no effect and hence would be impossible to detect from observation.
The more general observation I have is that apparently most people have no clue about the complexities involved in an LLM and its training data set. As a CS PhD, I have to say that if the mechanisms used do not allow something, it does not matter how convincingly you fake it. It will still not be in th
Re: (Score:2)
Indeed. In particular, what he does not understand is that LLMs are fully deterministic.
What difference does it make whether a system is deterministic or not? What does this have to do with its capabilities?
That means any consciousness in there has absolutely no effect and hence would be impossible to detect from observation.
This is one of the craziest non-sequiturs I've heard all week.
the Turing test already passed (Score:2)
If talking to a robot that seems human is the measure of consciousness then computers we had 30 years ago were conscious.
Anthropic actually hires philosophers, scientists, etc who are experts on consciousness, and even THEY don't know if it's conscious. It's a stupid idea anyway. It's like trying to measure when you're dead; there is no one indicator of it.
Re: (Score:2)
Anthropic actually hires philosophers, scientists, etc who are experts on consciousness, and even THEY don't know if it's conscious.
They don't? They must be hiring from the very bottom then. Because it is completely clear that LLMs are not conscious, unless that consciousness has no effects.
Re: (Score:2)
They don't? They must be hiring from the very bottom then. Because it is completely clear that LLMs are not conscious, unless that consciousness has no effects.
Are there capabilities something that is conscious has that something that isn't doesn't? If so care to enumerate them?
AI has jumped the shark! (Score:2)
Ladies and Gentlemen, LLMs have officially jumped the shark!
What a fool (Score:2)
Wow, it has been a lot of years since I have bothered to login to my account here, but I absolutely had to to respond to this article.
Richard Dawkins is a complete fool. Many years ago, I thought he was really smart, and insightful, but as the last 15 years or so have gone on, he is just plainly dumber and dumber... is he getting dumber, or am I getting smarter?
I hope I don't get dumber as I get up to his age.
Re: (Score:2)
... wow, i just edited my profile.. last time i did i was 33... now i'm 50... amazing.
Re: (Score:2)
He is 85 years old. My guess is dementia.
Dawkins has a rather consistent point of view (Score:2)
Man, I remember when Selfish Gene made its way into my hands, in the late 70's. A real "Chapman's Homer" moment for me. Led me later into a thesis on genetic algorithms. But along with that comes ... a rather mechanistic point of view, consistent with his later writings on religion.
While I'm not on board with Claude being in a class with humans, or cats for that matter, I think critics here might be missing a point, not about how Dawkins views LLMs so much as how he views humans. P-zombies is likely an over
Meme (Score:2)
Dawkins: Claude, say, "I'm alive!"
Claude: I'm alive!
Dawkins: Oh my GOD!
we are missing an important baseline! (Score:2)
So _____ can be: human, chimpanzee, dog, dolphin, mouse, crow, sparrow, spider, ant, fruit fly
Bonus question if you do go as low as fruit fly, is this uploaded fruit fly brain conscious: https://futurism.com/science-e... [futurism.com]
Consciousness is a crappy concept (Score:2)
The typical definition goes something like this:
Think about a thermostat: it's awake, aware of its surrounding temperature; it "feels" that it is too hot, which is unsettling, and causes it to signal the AC motor to turn on; and suddenly it feels ok, no more tension.
Consciousness is either supernatural and ill-defined, or it describes a simple feedback loop with some internal stat
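The thermostat described above really is just a feedback loop with one bit of internal state — a hypothetical sketch, with the hysteresis band added so it doesn't chatter at the setpoint:

```python
class Thermostat:
    """A bare feedback loop: sense, compare to a setpoint, act."""

    def __init__(self, setpoint, hysteresis=1.0):
        self.setpoint = setpoint
        self.hysteresis = hysteresis
        self.cooling = False          # the only piece of "internal state"

    def step(self, temperature):
        # "Feels too hot": switch the AC on above setpoint + hysteresis.
        if temperature > self.setpoint + self.hysteresis:
            self.cooling = True
        # "Feels ok again": switch it off below setpoint - hysteresis.
        elif temperature < self.setpoint - self.hysteresis:
            self.cooling = False
        return self.cooling

t = Thermostat(setpoint=21.0)
print([t.step(temp) for temp in (20, 23, 22, 21, 19)])
# -> [False, True, True, True, False]
```

Whether "feels too hot" in the comments above is metaphor or the real thing is exactly the question the thread is arguing about.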
Just fits with the crazy times (Score:2)
Leading proponents of equality think DEI discrimination is fine.
Leading proponents of women's equality think people with a penis can be women.
Leading proponents of support for refugees think actual Nazi jew-hate is fine.
And now a leading proponent of the fact that there is no god thinks AI is conscious.
It is alive... (Score:2)
A Sufficiently Good Illusion (Score:2)
Re: (Score:2)
Become? There was a time when he wasn't?
Re: (Score:3)
I do believe cats are conscious. I do not believe LLMs are.
I believe cats are pretty similar to us, complex and we have no idea how we and cats work.
LLMs are relatively simple and we know well how they work.