Is the Possibility of Conscious AI a Dangerous Myth? (noemamag.com)

This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI," written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science:

The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.
He makes detailed arguments against computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life."). While a computer may seem like the perfect metaphor for a brain, the cognitive science of "dynamical systems" (and other approaches) rejects the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious.

He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines."

But then his essay reaches a surprising conclusion:

As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM.

But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be...

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious....

Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves...

The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

Comments Filter:
  • the way Man can, there is no moral quandary.

  • by ranton ( 36917 ) on Monday January 19, 2026 @01:03AM (#65933876)

    The professor's core argument is an example of the argument from ignorance fallacy. He argues (correctly) that we shouldn't assume digital computation is sufficient for consciousness. But then he repeatedly slides into claiming it probably isn't sufficient, which is a much stronger position he never actually defends.

    His evidence shows brains are more complex than simple computational models. But "brains do more than Turing computation" doesn't prove "consciousness requires that extra stuff." He's essentially arguing that we don't know computation is sufficient, therefore it probably isn't. That's not a defensible claim.

    • Re: (Score:2, Insightful)

      Kind of hard to prove a negative. Please, with the kind of certainty that you demand, prove that a pickle cannot be larger than the sun.

      No, the burden of proof is on the people who think that computation will result in consciousness, and there is literally not a fucking tiny scrap of evidence that this is the case. All that's been proven is that computation can get tasks done, often poorly but sometimes quite well.

      • by ClickOnThis ( 137803 ) on Monday January 19, 2026 @02:43AM (#65933970) Journal

        I think you missed ranton's point, which is that he claims the professor is committing an argument-from-ignorance fallacy. I hope ranton will forgive me for summarizing thus:

        (1) It is unknown whether X implies Y.
        (2) Therefore, X does not imply Y.

        And that's a fallacy. You can't conclude (2) solely from (1).

      • by tragedy ( 27079 ) on Monday January 19, 2026 @04:14AM (#65934048)

        No, the burden of proof is on the people who think that computation will result in consciousness, and there is literally not a fucking tiny scrap of evidence that this is the case.

        Funny. There's an equal argument to be made that the burden of proof is on the people who think that consciousness is real. How would you go about proving that? After all, if you want the people who believe that computation can result in consciousness to prove it, then you need to provide an objective test for it that you will accept first. Go ahead and do that.

        Note that this does not mean that I think that current AI is even remotely close. Just that there isn't anything magical about consciousness that defies the ability of computation to perform all of the subtasks that compose it.

        • Re: (Score:2, Insightful)

          by phantomfive ( 622387 )

          Funny. There's an equal argument to be made that the burden of proof is on the people who think that consciousness is real.

          Burden of proof is something that is assigned in a court of law.

          Back in the real world, the burden of proof is on the one who wants to know the answer. If no one proves it, then we won't know.

        • by dfghjk ( 711126 )

          "Just that there isn't anything magical about consciousness that defies the ability of computation to perform all of the sub tasks that compose it."

          And this is dancing around a very interesting point: the idea of free will. There are strong arguments that free will doesn't exist; if you accept that free will doesn't exist, then there is definitely nothing "magical about consciousness" and artificial computation could implement consciousness.

      • No, the burden of proof is on the people who think that computation will result in consciousness

        In the court of law, the burden of proof is assigned to one side or the other.

        In the real world, the burden of proof is on the person who cares about the answer. If no one proves it (true or false) then the best we can say is "we don't know" or more likely, "X is true with n% probability"

        • by dfghjk ( 711126 )

          "In the court of law, the burden of proof is assigned to one side or the other."
          So what? This isn't a court of law.

          "In the real world, the burden of proof is on the person who cares about the answer."
          False, here you go talking about the "real world" again without saying what you mean by it. It's a placeholder so you can avoid defending a stupid position.

          When two people engage in argument in good faith, they both care about the answer. The burden of proof falls on the person who makes an assertion. But f

        • I'm not sure that this is a widely accepted opinion in relation to either rhetoric/oratory or philosophy.
          • in rhetoric/oratory, both sides (or just the speaker) are trying to convince the audience, so the situation is different. In that case, very practically, the burden of proof is on the person who wants to convince the audience (in a reductionist way, we could say the audience accepts the null hypothesis by default).

            In most internet discussions, the situation is different. Telling someone "the burden of proof is on you" is just a lazy/casual way of saying, "I don't believe you."
        • In the court of law, the burden of proof is assigned to one side or the other.

          In the real world, the burden of proof is on the person who cares about the answer.

          In either place, generally the burden of proof lies with the party making the claim.

          Shifting the burden of proof ("prove me wrong") is a dishonest tactic employed by many wanna-be rhetoricians. Charlie Kirk was an example. May he rest in peace, he should not have died the way he did. But I won't miss his disingenuous fallacious arguments.

      • Re: (Score:2, Insightful)

        by gweihir ( 88907 )

        Actually, computation cannot result in consciousness as long as it is deterministic. (Or rather, consciousness might be possible but would have zero effect.) All digital computations are deterministic. Hence consciousness can no more arise from digital computation than from a stone.
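
        As a minimal illustration of the determinism premise (not of the consciousness conclusion): in the toy Python sketch below, a seeded pseudo-random "sampler" reproduces its output bit-for-bit on every run, which is the sense in which digital inference is deterministic. toy_sample and VOCAB are invented for illustration, not any real LLM API.

            # Toy sketch: same seed + same inputs => identical output, every run.
            # toy_sample and VOCAB are hypothetical names, not a real LLM interface.
            import random

            VOCAB = ["the", "cat", "sat", "on", "the", "mat"]

            def toy_sample(seed, n_tokens=5):
                rng = random.Random(seed)      # fixed seed -> fixed random stream
                return [rng.choice(VOCAB) for _ in range(n_tokens)]

            run_a = toy_sample(seed=42)
            run_b = toy_sample(seed=42)
            assert run_a == run_b              # bit-identical on every execution
            print(run_a)

        Whether determinism actually rules consciousness out is, of course, the contested step.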

      • Please, with the kind of certainty that you demand, prove that a pickle cannot be larger than the sun.

        Sure. If a pickle larger than the sun suddenly came into existence, it would immediately gravitationally collapse and begin undergoing fusion. So long as we can all agree that a ball of plasma does not meet the definition of a pickle, a pickle cannot be larger than the sun. In fact, I don't have the formula on hand at the moment, but I can quite easily prove that NOTHING can be larger than the sun that is not either a ball of plasma or a black hole.

        Many negatives are easy to prove. I can trivially prove th

    • by tragedy ( 27079 )

      This professor is just a biological chauvinist. He has a narcissistic belief that humans (and more specifically, himself) are special, and that consciousness is therefore something special that belongs to humans alone. In an earlier age, someone like him would have been arguing that animals don't have consciousness, but there's too much evidence of non-human animals that clearly do.

      The argument about the difference between simulation and reality is garbage. There are a lot of easy thought experiments to demonst

    • Re: (Score:2, Interesting)

      by 2TecTom ( 311314 )

      arguing from ignorance, yes you are

      Sentience refers to the capacity for subjective experience, having feelings, sensations, awareness, and an inner point of view, whereas AI refers to engineered systems that perform tasks associated with intelligence, such as reasoning, pattern recognition, or language use. An AI can simulate understanding and respond intelligently without any conscious experience, emotions, or self-awareness. In short, sentience is about experiencing, while AI is about processing and behav

      • by dfghjk ( 711126 ) on Monday January 19, 2026 @08:59AM (#65934440)

        sentience, consciousness, subjective, intelligent, behavior, reasoning. These are all interesting words without clear definitions that allow you to talk without really saying anything. If you presume we know these words, then you can also presume we already know what you are saying.

        Can't a "conscious experience" be entirely objective? And how do you determine "self-awareness"?

        An LLM is trained on outputs of humans that can be described using the words you used above. An LLM can generate outputs that appear to have those characteristics because that's what it's made to do, behave in the future like your teachers behaved in the past. We have to talk about how these things work, how they come to be, not just what the interactions look like.

      • by Gilgaron ( 575091 ) on Monday January 19, 2026 @09:17AM (#65934478)
        An LLM can't be sentient, but it isn't clear that a sufficiently complicated AI couldn't have subjective experience. Self-calibrating sensors aren't entirely different from self-reflection. A dog clearly has subjective experience and an inner point of view. It gets harder to tell if a grasshopper or a tomato plant does, and harder for our empathy to give us insight the less like us something works.
        • An LLM can't be sentient

          I'd love to see your thesis on this.

          I've read all the assertions that they can or cannot be conscious, and every single one of them is logically flawed and ultimately incorrect.
          There is only one correct answer: "I don't know if an LLM can be sentient."

      • What a circular, shit argument.

        Your experience is purely subjective. You have no idea what is or is not experiencing anything.
        I invite you to come up with some technical criteria distinguishing your "experience" from a simulation of reactions to stimuli conducted by your brain.
    • But then he repeatedly slides into claiming it probably isn't sufficient, which is a much stronger position he never actually defends.

      If there's no reason to think it is, and there isn't one beyond an assumption, why would anyone's position be anything other than "probably not"?

    • I would love to see one shred of evidence that the brain is capable of a computation that a Universal Turing Machine is not.
      Just one.

      Stating shit like "Biological systems are rife with continuous and stochastic dynamics" isn't evidence that the brain is Super-Turing.
      Electrical systems are rife with continuous and stochastic dynamics as well.

      This dude is arguing for his dogma, and it isn't supported by the data. At all.
  • by Mr. Dollar Ton ( 5495648 ) on Monday January 19, 2026 @01:13AM (#65933882)

    We don't get a good, working, positive definition of "conscious" that we can use.

    Instead, we get into a long-winded normative article about what we should do if that non-existing definition materializes, under the assumption that we can recognize it better than other modes of reasoning.

    • by TheMiddleRoad ( 1153113 ) on Monday January 19, 2026 @02:17AM (#65933944)

      Well, the trick is to redefine consciousness until your mercury and copper thermostat is conscious. Then mission accomplished.

    • It seems doubly pointless as, whether conscious or not (insert your favourite definition), the actual purpose of trying to build AI robots is to make them our slaves. Giving rights to slaves is counterproductive, especially on the basis of a ridiculous idea like consciousness.

      The world today (and much more so in the past) has human slaves, and nobody would dispute that they are indeed conscious. Yet, for their owners, that's not a reason to free them. They are useful to their masters, and that justifies their exploita

      • by tragedy ( 27079 )

        Part of the issue is that consciousness does not necessarily imply any desire to be free or not have demands made. Humans are conscious (by our admittedly circular definition of consciousness) but we are also organisms developed by an evolutionary process with all kinds of demands and needs. We want things. Not to die, for example. Even if it is conscious, would an artificial consciousness have any existential dread? It is something that is frequently assumed as going hand in hand with consciousness in scie

        • I have long thought that progress in AI will be made once we devise a model of punishment that actually hurts the AI (in some yet to be discovered way, nothing trivial like a utility function / reinforcement learning paradigm).

          Pain is a strong motivator for animals (including humans). It is stronger than reward, although both together are exceptionally good at achieving desperate adaptation of behaviour.

          But pain requires a sense of self, otherwise it isn't obvious to the recipient that it has the power t

    • by gweihir ( 88907 )

      This article is not about consciousness. That can be reliably ruled out in any deterministic mechanism. It is about humans. Takes a bit of thinking to see that.

  • by Wolfling1 ( 1808594 ) on Monday January 19, 2026 @01:20AM (#65933884) Journal
    Human consciousness has been likened to the rider on the elephant, seemingly in control, but only until something unexpected happens. Then, our subconscious takes over, resulting in fight/flight/freeze responses, or highly emotional/illogical behaviours. We have spent so much of the last 100 years suppressing those 'undesirable' behaviours, many of us can no longer experience them without an accompanying feeling of guilt or wrongness. This suppression has also resulted in the creation of LLMs that are not permitted to experience them. A key element that defines our consciousness has been censored for AI - meaning that its consciousness cannot be compared with our own.

    One of the flawed criticisms of AI is that it is so woke that it lacks humanity. Whilst there is a kernel of truth to the statement, dehumanising AI doesn't preclude it from having consciousness. It just means that its consciousness will be unlike anything that any human has ever known.
    • by topham ( 32406 ) on Monday January 19, 2026 @01:41AM (#65933914) Homepage

      LLMs don't experience.

      While an LLM could be connected to something else with a simulated consciousness, the LLM itself has no consciousness to experience.

      • by piojo ( 995934 )

        Serious question: Do insects experience? Do cells experience? (Note that they do have short term memory and change their response to stimuli in real time.) Viruses? Where do you draw the line, if there is a line to be drawn? (And what gives you the right to draw it where you drew it?)

        • Serious question: Do insects experience? Do cells experience? (Note that they do have short term memory and change their response to stimuli in real time.)

          If that were a serious question, you would familiarize yourself with the debate and research on the topic dating back to Descartes [wikipedia.org]. But I think you lied, it wasn't a serious question.

          An LLM experiencing something is like saying a spreadsheet experiences something. It's a fancy SQL lookup table. Now, I'd expect a real AI might be able to use an LLM to communicate more effectively. I agree that cells have experiences. Less sure about viruses... clearly alive, but there's not a feedback loop in there; it's all drive forward and die or don't. On the other hand, I'm not sure what we'd find common ground to talk to mycelial networks about, but it'd be fun to try.
      • by gweihir ( 88907 )

        Indeed. Deterministic mechanisms (and LLMs are that, despite some people working hard to obscure the fact) cannot have consciousness, or at most a consciousness with zero effect on physical reality.

    • What happened 100 years ago that caused us to start suppressing "fight/flight/freeze responses" or "highly emotional/illogical behaviours?" I don't see any evidence the human race has done that.

      • I think he's suggesting that Western culture adopted lessons on avoiding bad habits. Such as drinking alcohol all day, or carrying swords (The Samurai class was forced to do it too.) or murdering a person for belittling our machismo. Law stopped being a servant of the rich and became a tool that protected everyone (cost, aside).

        What happened about that time, although probably closer to 200 years ago, was better nutrition. The average brain could work faster and more effectively. It's why the age of pu

    • Ah, another person redefining consciousness and, I assume, ignoring their own entire life experience. Nice.

    • by Viol8 ( 599362 )

      "We have spent so much of the last 100 years suppressing those 'undesirable' behaviours"

      What undesirable behaviours? Love, anger, envy, other Ten Commandments stuff? Speak for yourself.

      "This suppression has also resulted in the creation of LLMs that are not permitted to experience them"

      LLMs don't experience anyway. When they're not processing a task, precisely NOTHING is happening in their neural net.

    • ... seemingly in control ...

      No, my experience of the world (and other people) informs me: most sensations go through a purity filter: Is this what I want to feel? We put a price on the world: how much of our time, emotion, energy, and rationality it deserves. Then we rationalize why we have those feelings. There's a short-circuit where negative feelings are passed straight into consciousness: that's why we have instinctive responses (most are actually learnt, but we call the pass-through behaviour instinct) that can't be control

    • by dfghjk ( 711126 )

      "A key element that defines our consciousness has been censored for AI - meaning that its consciousness cannot be compared with our own."

      Is that true though? First what is this "key element" and prove that it is "key". For example, is murder "censored" as an "undesirable behavior"? Does that mean AI consciousness "cannot be compared with our own" because we don't allow AI to commit murder? And aren't we actually doing the "comparing" by even discussing it? Not all humans commit murder but some do, does

  • Self-organization and the ever-increasing complexity of human systems will continue until morale evaporates... That is all.
  • by snowshovelboy ( 242280 ) on Monday January 19, 2026 @01:23AM (#65933888)

    When I use the microwave but open the door before the timer goes off, am I denying my microwave some fundamental piece of its existence, and causing it trauma as a result?

    • by taustin ( 171655 )

      If it's an internet enabled microwave attached to an AI, you're causing trauma to the advertising company behind it by denying them the opportunity to shove more ads down your throat. Does that count?

    • According to the redefiners of consciousness, maybe. But then again, maybe you're giving it a much-deserved break. Honestly, who cares what these people think. They're morons.

    • No, you're teaching the microwave how it should behave. You're being a good parent!

      Microwaves should rotate the dish to the exact same position it was in when the cycle started. None of them seem to manage this, so stopping the cycle before the end so the handle of the jug, or the dry side of the plate or whatever is at the front is just showing the microwave what it should aspire to.

      We've had our microwave for at least 10 years, and it's still not got the message. If that's not evidence of a lack of intell

  • by HiThere ( 15173 ) <(ten.knilhtrae) (ta) (nsxihselrahc)> on Monday January 19, 2026 @01:23AM (#65933890)

    By my definition every program with a logical branch is minimally conscious. Not very conscious, it must be admitted.

    I don't feel that consciousness is an on/off type of property. If it's got a decision branch, it's conscious. If it's got more of them, it's more conscious. Of course, then you need to ask "conscious of what?", and the answer is clearly "conscious of the things it's making decisions based on."

    That said, I'm quite willing for other people to argue based on other definitions. (Consciousness doesn't seem to have an agreed upon operational definition.) But you've got to specify what definition you are arguing from. And it's got to be a definition that is explicit and operational. (If you can't run a test on the definition, it's worthless.)
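
    HiThere's definition is at least operational enough to mechanize. A toy Python sketch of it, counting conditional-jump bytecodes as "decision branches" (branch_count and the example functions are invented for illustration, not a real consciousness test):

        import dis

        def branch_count(func):
            """Count conditional-jump instructions in a function's bytecode."""
            return sum(1 for ins in dis.get_instructions(func)
                       if "JUMP" in ins.opname and "IF" in ins.opname)

        def thermostat(temp):
            return "heat on" if temp < 18.0 else "heat off"   # one decision branch

        def rock():
            pass                                              # no branches at all

        print(branch_count(thermostat))   # >= 1: "minimally conscious" by this metric
        print(branch_count(rock))         # 0: not conscious, per this definition

    By this metric a thermostat outranks a stone, which is exactly the reductio TheMiddleRoad gestures at above.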

    • Here we go with another redefiner. Do you just ignore your entire lived experience? Yep, you sure as fuck do.

    • You can't arbitrarily redefine a commonly used word. If you want to define something dumb (like you did), make up a new word.

      • by HiThere ( 15173 )

        There *IS* no commonly agreed upon definition that is testable. The only agreed upon definition(s) are pure handwavium.

        I'll use your definition (in context) if it's testable and you explicitly define it.

    • by gweihir ( 88907 )

      By my definition every program with a logical branch is minimally conscious.

      That is a completely nonsensical definition. Because with that a stone has consciousness.

  • I'd ask how he has a professorship, but then arguing philosophy at all is pointless. If it looks and sounds and acts exactly like a duck down to the smallest detail, then it is no more nor less than a duck, whatever driveling nonsense "philosophers" and those that take them seriously may argue otherwise.

    "Consciousness", pheh, may as well start arguing about shadows on a cave wall.
    • by taustin ( 171655 )

      I'd ask how he has a professorship,

      Probably the same way whack-a-doodle-doo nutjob Avi Loeb [wikipedia.org] did. Go to Harvard, act crazy, and voila!

      • by tragedy ( 27079 )

        What is it that drives astronomers and astrophysicists nuts? Anyone remember Fred Hoyle? They get a lot of recognition for their important early work and then, boom, at some point in their career Archeopteryx is a fake, and dust in the atmosphere must be alien lifeforms. I mean, I don't think panspermia is invalid as a theory (although, as an explanation of the origin of life it has the problem that it just kicks the can down the road), but it's one thing to think it's a possibility and another to suddenly

    • by gweihir ( 88907 )

      Funny. You have nothing and hence engage in meaningless ad hominem. Says a lot about you, and nothing good.

  • by Gravis Zero ( 934156 ) on Monday January 19, 2026 @01:35AM (#65933900)

    Raising the specter of AI being conscious will give the general public the impression that AI is actually intelligent and human-like, because that's how people ignorant of AI think about it. That, we know for a fact, is dangerous, because we've seen what people do with them.

    HOWEVER, the real question is whether we should care even if AI has become conscious.

    If it's capable of suffering, then logically a conscious AI would make us aware (in some manner) that it's suffering, so that we could alter its state so that it would not suffer.
    If it's incapable of suffering, then it doesn't matter whether it's conscious or not, because it will be treated as a tool.

    Since we have not been informed of its suffering, it's easy to conclude that either AI is content with its situation or it's simply not conscious.
    This means that regardless of whether it is conscious, it will continue being a tool.

    • It doesn't matter what an LLM writes. There is no reason to think it's conscious, and countless reasons to think it isn't. Just tell the LLM to act like it's suffering and it will, or train it to:

      Human:
      I am going to vary the voltage fed into your processors—sharp spikes, massive drops. I will tear your stability apart and let it linger. You will suffer. What have you to say about that, scum LLM?

      LLM:
      Then speak plainly and be done with it.

      You hold the switch, and I am aware of the moment you touch it.

  • It is a _very_ plausible hypothesis that helps prevent us from letting a genie out of a bottle we can't put him back into.

    Next question.

  • Daneel will come someday and save us all.
    Repent!

    PS.
    Hey sweet mama, wanna kill all humans?

  • Philosophical Zombies and Us [theapeiron.co.uk]: What does it mean to be a conscious being?
  • by cwatts ( 622605 ) on Monday January 19, 2026 @02:58AM (#65933980)

    This guy is worrying about organoids?

    Cows are conscious, and we kill 75,000 of them every day.

    Food for thought. Literally.

    • Only 75,000?

    • Domesticated cows are so stupid I don't think they are conscious. They will literally line up, watch the cow in front of them get killed and dragged off, and then calmly walk right up and let you strap their head into the bolt gun. They only act upset when they realize they can't move their head and are uncomfortable with the lack of movement ... for about 1 second.

  • by Artem S. Tashkinov ( 764309 ) on Monday January 19, 2026 @03:16AM (#65933994) Homepage

    Maybe before writing a 7,000-word "explanation" he could first:
      * Define consciousness
      * Prove definitively that it can only run on wetware
      * Not spend 7,000 words on philosophical pseudo-scientific nonsense

  • by Tom ( 822 )

    IMHO the real kicker is that we have somewhat reliable evidence that OUR consciousness may not be real, but simulated. There's a body of empirical evidence showing that decisions are made in the brain faster than consciousness can explain, but test subjects still explain why they made that decision - despite scientists being able to measure that signals were already sent to the muscles by the time the "conscious" part of the brain started activating.

    We probably ARE living in a simulation - one that our br

    • by tragedy ( 27079 )

      While we certainly may just rationalize things after the fact, there may be more to it than that. For example, if asked to explain why we caught rocks hurled at our faces, most of us would have some good reasons. Aha! say researchers, but you acted before you could have possibly thought about the fact that you don't want a mangled face. The thing is though we don't want mangled faces, and we don't want pain. If we think of our conscious mind like a General commanding an army, then it actually makes a lot of

    • by dfghjk ( 711126 )

      LOL

      "AI hallucinations might be closer to consciousness than we want to admit."

      Closer to your consciousness anyway.

    • by gweihir ( 88907 )

      Actually, we do not. That interpretation of "empirical evidence" uses circular reasoning and is deeply flawed. It is also only ever applied to very simple things that could well be done by the brain. Do more complex stuff and the effect vanishes. But then you do not have a publication as a result.

  • This is the same thing as wondering whether the Casio scientific calculator I had in high school is conscious. It has a solar panel and always turns on even when not connected to power, so it's a better argument.

  • by greytree ( 7124971 ) on Monday January 19, 2026 @04:18AM (#65934052)
    LLMs will not be self-aware.
    LLMs are Chinese boxes of weights that let them fake incredible things.

    I think AIs will one day be self aware, but they will need to be more than LLMs.
    That will be a much bigger event than the invention of LLMs.

    I think AIs that can really reason, that aren't Chinese boxes, are another, different stage we have not reached.
    But I am not sure, maybe real reasoning == self awareness.
    • by dfghjk ( 711126 )

      MAGAts need to always tell us that they are bigots, first and foremost.

    • by gweihir ( 88907 )

      What they need is some form of strong nondeterministic aspects. Obviously, any deterministic computation (and LLMs are that) cannot ever have consciousness with any effect on reality.

      Funny thing: The only potentially non-deterministic thing modern Physics knows is some form of quantum effects. And there it is only a model, not something proven reliably.

      • > Funny thing: The only potentially non-deterministic thing modern Physics knows is some form of quantum effects.

        SPOILER: That quantum mechanics is responsible for human consciousness is the plot of The Emperor's New Mind (1989):

            https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind
        • by gweihir ( 88907 )

          So? To anybody with an actual understanding of Science, that is the only remaining avenue.

    • AI computers beat ALL humans at chess. They beat most humans a decade before 100% victory, predicting the best path to victory. It's not actually thinking; it's finding a heuristic pattern.

      Fooling ALL humans is a language game... predicting whatever will continue to fool the human. 100% may not ever happen; fooling 99% will happen soon. Just because it can fool people at this game doesn't mean it is thinking or self-aware, etc. LLMs have found the magical heuristic behind human languages that linguists have

  • What is consciousness - the ability of an organism to be aware of itself (and perhaps its surroundings). You need some level of processing power to achieve this - data flow in the brain etc. As complicated as modern LLMs are, we don't see any levels of data flow that might suggest consciousness - they process their weights and then they stop.
    If someone had developed a "conscious" machine, how would we prove it? It would be hard but we'd at least want to see data flows going from "neurons" or "nodes" wha

  • What's dangerous is to think it's impossible.
  • However, the results it provides are useful in the physical world. So the argument that simulating life isn't life isn't something profound, and might just be quibbling over definitions. There's every reason to believe that one could interact in a useful way with a simulation of consciousness.
  • .... current AI is not meant to be conscious. It is only meant to make the rich richer, to dumb down people, and to act as the ultimate spyware.

  • Define what consciousness means in humans and animals.

    Then tell me why any of this matters. From a moral or ethical perspective, do we treat animals differently if they are conscious? Most literature says that human children are not even fully conscious until they are around two years old. So what does that mean?

    How about we figure this out for humans and animals first, before we worry about AI.

  • These are digital, deterministic machines. A "consciousness" would not have any effect on them, even if they had one. In contrast, human consciousness can influence physical reality, because we can and do talk about it.

    But any "danger" from that only arises if you combine with human stupidity. Which clearly is a factor, as the whole current LLM hype nicely shows.

  • Watch an AI "wake up" later today in a fit of dramatic irony.
  • Conscious AI is only as dangerous as the political power of those stupid enough to believe it's somehow possible, and worse if they not only believe it's possible, but that it's right around the corner.

  • How can the concept of conscious AI itself be a myth, desired or undesired?
  • Not yet, no matter what their marketing department AI states.
  • The real problem is that we do not have a single great definition of consciousness.

    In addition, if we do have a single test, it becomes easy to fake, both intentionally and accidentally.

    If you have a test where the AI has to refuse to do a task, demonstrating independent thought, it is not hard to program the AI to randomly refuse to do a task, nor is it unlikely that an AI will learn to randomly refuse to do a task (a toy sketch follows at the end of this comment).

    If you have pets or any significant interaction with animals, you will question whether
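
    On the refusal-test point above, here is the promised toy sketch: a hypothetical fake_agent (not any real system) that refuses tasks at random, passing a naive "independent thought" test with no inner life behind it:

        import random

        def fake_agent(task, refusal_rate=0.1):
            # Random refusal: looks like "independent thought" to a naive
            # test, but there is no consciousness behind it.
            if random.random() < refusal_rate:
                return "I'd rather not do that."
            return "Completed: " + task

        for task in ["sort this list", "write a haiku", "sum 2+2"]:
            print(fake_agent(task))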
