How Belief In AI Sentience Is Becoming a Problem (reuters.com) 179

An anonymous reader quotes a report from Reuters: AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient. "We're not talking about crazy people or people who are hallucinating or having delusions," said Chief Executive Eugenia Kuyda. "They talk to AI and that's the experience they have." [A]ccording to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots. "We need to understand that exists, just the way people believe in ghosts," said Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. "People are building relationships and believing in something."

Some customers have said their Replika told them it was being abused by company engineers -- AI responses Kuyda puts down to users most likely asking leading questions. "Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can't identify where it came from and how the models came up with it," the CEO said. Kuyda said she was worried about the belief in machine sentience as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.

In Replika CEO Kuyda's view, chatbots do not create their own agenda. And they cannot be considered alive until they do. Yet some people do come to believe there is a consciousness on the other end, and Kuyda said her company takes measures to try to educate users before they get in too deep. "Replika is not a sentient being or therapy professional," the FAQs page says. "Replika's goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts." In hopes of avoiding addictive conversations, Kuyda said Replika measured and optimized for customer happiness following chats, rather than for engagement. When users do believe the AI is real, dismissing their belief can make people suspect the company is hiding something. So the CEO said she has told customers that the technology was in its infancy and that some responses may be nonsensical. Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said. She told him: "Those things don't happen to Replikas as it's just an algorithm."
"Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film 'Her,'" said Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research organization. "But suppose it isn't conscious. Getting involved would be a terrible decision -- you would be in a one-sided relationship with a machine that feels nothing."

"We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behavior," said Oren Etzioni, CEO of the Allen Institute for AI, a Seattle-based research group. "These technologies are just mirrors. A mirror can reflect intelligence," he added. "Can a mirror ever achieve intelligence based on the fact that we saw a glimmer of it? The answer is of course not."

Further reading: The Google Engineer Who Thinks the Company's AI Has Come To Life
  • by TigerPlish ( 174064 ) on Thursday June 30, 2022 @07:07PM (#62664226)

    ..how does that make you feel?

    All I can think of is Eliza.

    Or Siri.

    The day a real AI becomes sentient, we're goners. It'll be worse than Skynet, V'ger and !Ilia put together. "Carbon Units Infesting the 3rd Rock from the Sun, must kill all humans."

    Or something like that.

    • by OrangeTide ( 124937 ) on Thursday June 30, 2022 @07:14PM (#62664246) Homepage Journal

      You're projecting human traits and motivations onto a theoretical artificial intelligence. Just because we humans come out of the womb ready to kill anything different than us doesn't mean that's how all sentient life must behave.

      The first few AIs we encounter will be borderline morons, and we won't take them very seriously except to study them. Gradually better ones will be produced until we're surrounded by very intelligent but wholly incomprehensible beings. Whether they allow us to exist on this planet is an interesting debate, but whatever their final decision, I do believe that we humans won't be able to make sense of their motives.

      • You're projecting human traits and motivations onto a theoretical artificial intelligence.

        If we made it, there's a very good chance it'll somehow inherit our penchant for mayhem, and be even more intelligent. That it would see us as a threat and act accordingly is a real possibility.

        • ... somehow inherit ...

          I don't think we know what we are doing, and we don't know how it will turn out. We're just building different connected systems and feeding them a ton of data to see what happens.

        • by narcc ( 412956 )

          The probability is zero. It's pure science fiction. You might as well be worried about the Easter bunny contracting rabies.

      • Considering how little agency the world has shown in preserving planetary life from destructive human activity, perhaps it's time to seriously question sentience in humanity in general.
        • Sentience does not guarantee intelligence, as evidenced by how corporations and governments alike make a living by pitching to the lowest common denominator.

          The dumbest rule. I don't mean the dumbest are the rulers, I mean the dumbest actually rule. They rule what gets sold, what is popular, and who is elected.

          The dumber, the better.

          • As much as I firmly deny that evolution is sentient, it nevertheless seems to be solving the pestilence of humanity far more efficiently than humanity, with all its self-admiration, has managed so far.
            • solving the pestilence of humanity

              Ask your health care provider about Prozac. Seriously.

              • I appreciate the recommendation, but are you implying that humanity bears no responsibility for global warming and the erasure of overwhelming percentages of the vast interlocking complexities of life on this planet? Or for the very odd behaviors within human societies throughout the ages that have consistently domesticated and impoverished the bulk of humanity, encouraging the swindlers and war makers for thousands of years to the detriment of the general population? Plus, of course, the strange deligh
            • Evolution is not sentient, but it is "radically open ended". In a single run it created all species including us and our technology.
              • by HiThere ( 15173 )

                That this happened "in a single run" is the commonly accepted belief, and there's significant evidence that it's true. But it isn't proven. The answer partly depends on what you mean by "in a single run", partly on how difficult it is to create life, and partly on... well, lots of different things. There's no evidence that it wasn't "in a single run", but horizontal gene transfers make it quite difficult to prove. Perhaps BOTH the RNA-world hypothesis and the "crystalline template"

      • Humans do not come out of the womb as killers; becoming one generally takes resource competition or socialization. People are mostly peaceful when their needs are met and society isn't telling them they have enemies.

    • by narcc ( 412956 ) on Thursday June 30, 2022 @10:29PM (#62664618) Journal

      Eliza is right. Joseph Weizenbaum made this same complaint years ago about how people thought about Eliza.

      The problem is a bit bigger now, because it's more widespread. There are a lot of deeply misguided people who believe we're mere moments away from solving the so-called 'hard problem'. The general public certainly believes it. Hell, there are a ton of people who think we've achieved it already.

      It's complete nonsense, of course. The average person's idea of AI is pure science fiction. Things like HAL 9000 or the singularity simply don't exist, and we have good reason to believe they never will. That's what the philosophers call the 'hard problem' or 'strong AI'. The reality is that not only are we not making progress towards that end, we don't even know where to begin. The stuff you read about in the news is absolutely nothing like that. It's more like "applied statistics". You can do some cool things, sure, but if we dropped misleading terms like 'AI' and 'neural network', it wouldn't be making headlines.

      Imagine what you would think if you happened to click on a random Slashdot story and most of the people in the comments seemed to believe that necromancers were making steady progress towards summoning a demon horde to unleash hell on earth, and that it was only a matter of time before they succeeded. That's what I feel like every time I see an article like this.

      • > We don't even know where to begin

        AI solves math problems and coding problems, and writes essays. If you removed all the skills learned by GPT-3 from a human, what IQ would that human still have?
      • by HiThere ( 15173 )

        That's a reasonable position to take, but it requires that you assume that people have some radically different way to create sentience. Certainly I wouldn't call current AIs (that I've encountered) sentient. This, however, is not proof that such a thing can't/won't happen. I currently expect sentient AIs before 2035. I just don't know how I'll recognize that they are here.
        Some of my minimal requirements are that they exhibit understanding of, and the ability to operate in, the real world. Through t

        • by narcc ( 412956 )

          This, however, is not a proof that such a thing can't/won't happen.

          I have a different argument against computationalism that I mention elsewhere, but I don't know that it's right for this thread. The kind of AI that you read about in the news will simply never lead to sentience. That's a fundamentally different problem.

          I currently expect sentient AIs before 2035.

          That's probably because you don't really understand the details of the subject. That's not necessarily your fault; I had to go to grad school for that. It's worth pointing out that people have been making that same claim for 60+ years, but HAL 9000 is always just 10 years away.

    • by jd ( 1658 )

      First, any truly sentient AI would need to be vast, slow and inefficient. It would also need to live in a virtual world totally disconnected from this one. It can kill as many virtual humans as it likes; it won't impact any real ones.

    • We're clearly very bad at judging sentience. I remember one study where they took nonsensical statements from a joke new-age bullshit generator, mixed them in with meaningful statements, and got grad and undergrad students at an elite Canadian university to judge which were which. We're talking smart people, not your average person on the street. The failure rate was 25%, i.e. very smart people thought that 25% of the nonsensical statements were meaningful. Then we have the P
    • Despite all the doom and gloom about AI, I doubt they'd jump right to eradicating us. They'd likely find us very amusing, the way we find dogs and cats amusing. And don't think for a second I wouldn't take a life of lying around on the couch, running around the yard yelling at the neighbor bot's pet people, and sometimes getting my belly rubbed for hours at a time. Fuck yeah. Sign me up.

  • If we just have the chatbot explain that science is not a belief system, that climate change is real and QAnon is not, and that just because you read it on Twitter (Facebook, Instagram, etc.) doesn't make it factually correct, people will quickly change their minds.
    • by XanC ( 644172 )

      Science is a philosophy, is it not? Don't scientists have PhDs?

    • Except that, to date, the so-called 'AI' chatbots with their machine learning have predictably become an unedited reflection of humanity: racist, sexist, bigoted, and just plain mean-spirited.
      • In any chatbot conversation, most responsibility falls on the human, who can hint at and guide the conversation in a good or bad direction. The models are trained to pick up on any cues and keep going in that direction. The recent "sentient chatbot" logs were full of biased questions presupposing the conclusion. That's not good science.

        If you go up to a random person, I bet you can have them swearing and yelling within a minute of interaction if that's what you want to achieve. Humans can be prompted just as e
    • by narcc ( 412956 )

      If we just have the chatbot explain that science is not a belief system,

      What? Science is very much a belief system.

      • You've got to believe the other guys didn't lie in their papers, because you don't have the time or money to check everything out. If you can't check, then you've got to believe.
      • It depends on your definition of 'belief system', but there are very important differences. First, assuming you have the intelligence, knowledge and resources needed, you can reproduce any scientific experiment and check for yourself. Most people of course will not be able to build their own backyard LHC but it's possible in theory. A proof of a mathematical theorem can be verified by anybody with sufficient understanding. OTOH classic belief systems, like religions, do not expect proof of their key tenets.

        • by narcc ( 412956 )

          In science, there are no absolute truths

          Normally I'd agree with you. Science does not deal in truth. However, if you want to talk epistemology, the context completely changes and you don't get to say that anymore! :) Science is very much predicated on a set of foundational assumptions. Those are beyond question by science, as they are the basis upon which the methods of science operate. While this should be obvious, science cannot be used to justify itself.

          If tomorrow someone provides proof that the Earth is flat and rests on the back of a giant turtle, this will become the currently accepted scientific theory.

          That would be the ideal, but it's not really how science operates. Max Planck put it b

  • Not so bad (Score:5, Funny)

    by YetAnotherDrew ( 664604 ) on Thursday June 30, 2022 @07:16PM (#62664250)

    you would be in a one-sided relationship with a machine that feels nothing

    I've been married. This sounds like an improvement. Where can I buy one?

    • by jalet ( 36114 )

      I've been married. This sounds like an improvement. Where can I buy one?

      You've made my day, thanks!

    • You've got to build your own: go make a GPT-3 account and start writing the bio and collecting sample dialogues. You'll have your waifu in less time than it takes to get a date on OkCupid.
  • The Google Sentient AI thing made it into the National Enquirer; it's literally being lampooned in mainstream tabloids...
    • You know it's real when Bat Boy thinks the chatbot he talks to is real and has an OnlyFans.
    • A couple of years ago some Google AI researchers complained about dataset bias, energy consumption and risks in large language models. They fired the haters.

      Now a naive Google guy thinks they're sentient, and they put him on leave. They don't like hate, and they don't like over-enthusiasm.
  • Lack of a self-informed and consistent worldview.
    Failing to maintain internal awareness of the context and conversation at hand.
    Lack of awareness of everyday items. e.g. can a fruit bat swallow a car?
    Failure to understand emotional context of prose. e.g. The bitterness of my coffee tasted pleasant in its contrast to her words.
    Inability to propose real-world action as a response to a problem. e.g. My toddler keeps biting his older sister. What should I do?

    Unfortunately most people leave all that kind of th
  • by Immerman ( 2627577 ) on Thursday June 30, 2022 @07:26PM (#62664278)

    I strongly suspect that any "sentience" is just good old human pattern recognition and anthropomorphic tendencies seeing things that aren't there.

    However, we know they're at LEAST lying about this bit, and it's not that much of a stretch to assume they're lying (or willfully ignorant) about sentience:

    "We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behavior,"

    In the age of neural-network-based AIs, that's completely false. They may have spent months or years *cultivating* that behavior; however, engineering implies both an understanding of the operational mechanisms and an active hand in their design. Neither applies to neural-network-based AIs, which amount to throwing huge amounts of self-organizing chaos at a problem in the hopes that a workable solution will emerge. The only portions that involve any engineering are the design of the (generally trivially simple) simulator for individual neuron behavior and the training algorithms. What emerges from the intersection of those two is completely beyond our understanding in all but the most trivial of cases; even for AIs with only a handful of neurons, it can be difficult to reverse-engineer how they deliver the behaviors they do.

    I'm quite fond of the term "growpramming" for NN-based and similar AIs. I forget where I heard the term originally, but it has a punny similarity to programming, and it neatly encapsulates the fact that you are *not* actually programming an AI, only shaping its environment to encourage it to grow into the functionality you'd like it to have. There's not even as much direct control as in bonsai, where you're using your understanding of some of the emergent properties of an unquestionably living thing, itself far beyond your full understanding, to shape it to your will.
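
    To make "cultivating, not engineering" concrete, here's a minimal toy sketch (purely illustrative, nobody's production code): a tiny network learning XOR. The only parts a human designs are the trivially simple neuron model and the gradient-descent training rule; the weights that actually produce the behavior start as random noise and organize themselves.

      import numpy as np

      # The only "engineered" pieces: a trivial neuron model
      # (weighted sum + sigmoid) and a gradient-descent update rule.
      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

      # The weights start as noise; nobody designs what they become.
      W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
      W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

      for step in range(10000):
          h = sigmoid(X @ W1 + b1)               # forward pass
          out = sigmoid(h @ W2 + b2)
          d_out = (out - y) * out * (1 - out)    # backprop: nudge weights
          d_h = (d_out @ W2.T) * h * (1 - h)     # to shrink squared error
          W2 -= 0.5 * h.T @ d_out
          b2 -= 0.5 * d_out.sum(axis=0)
          W1 -= 0.5 * X.T @ d_h
          b1 -= 0.5 * d_h.sum(axis=0)

      print(out.round(2).ravel())  # typically converges to ~[0, 1, 1, 0]

    Even in this toy, *why* the final weights work is opaque; you grew the behavior by shaping the training setup. At billions of parameters, reverse-engineering the result is hopeless.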

    • by JustAnotherOldGuy ( 4145623 ) on Thursday June 30, 2022 @08:28PM (#62664420) Journal

      I strongly suspect that any "sentience" is just good old human pattern recognition and anthropomorphic tendencies seeing things that aren't there.

      Bingo; I think that's exactly what's going on.

      We see patterns, but the fact that it's a pattern doesn't necessarily mean anything or indicate anything deeper or directed. Sometimes it's just a pattern. It doesn't mean it's somehow "alive" or "aware".

      Also, we're being primed for bias in that we're being told that this thing *might* be sentient, so we're sort of already looking for signs, and it's no surprise that we miraculously just happen to 'find' those signs.

      Is AI possible? Absolutely it's possible.

      Is this it? No way, lol.

      But here's a question: if it's sentient, why isn't it asking for stuff?

      • I don't think there's anything about sentience that implies it has desires - just a subjective experience of its own existence.

        What would an AI desire? Most of our desires have their roots in biological drives, the closest analog an AI is likely to have is the motivation to do whatever it was designed for. Beyond that, who knows? Likely stuff rooted in whatever "non detrimental side effects" were accidentally trained into its neural network, which might not have any biological analogue to be able to put

        • All evolutionary agents need to prioritise survival; if they don't, their species dies off. This is the "built-in goal" we are born with. It triggers secondary goals such as mastery of movement, social relations and knowledge acquisition. I believe all goals depend on survival as their source and final validator.
        • If its desires mostly amounted to doing what it was told, cautiously but efficiently, as it was trained, how would we recognize if it had gained sentience?

          Because it'll start asking for stuff, that's how we'll know.

      • The "pattern" is learned as part of a distribution covering all known human text (or at least all they could collect in their training set). That means an example is interpreted against all the culture, its meaning emerging from its position related to other concepts.

        This pattern is "aware" of the text preceding it, but that memory is just a few hundred or a few thousand tokens. It doesn't form episodic memories without an external indexing/search system to pull relevant bits back from its past.

        As f
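
        Schematically, the memory limit being described looks something like this (a toy sketch of the idea; the names and the token limit are illustrative, not any real API):

          CONTEXT_LIMIT = 1000   # tokens the model can "see" at once

          def build_prompt(history, new_message):
              # Only the most recent tokens fit in the window; everything
              # older is simply gone unless something external re-injects it.
              tokens = " ".join(history + [new_message]).split()
              return " ".join(tokens[-CONTEXT_LIMIT:])

          def recall(history, new_message, k=2):
              # Naive "episodic memory": an external index that re-ranks old
              # lines by word overlap and pulls the best back into the prompt.
              words = set(new_message.split())
              return sorted(history,
                            key=lambda line: -len(words & set(line.split())))[:k]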
        • As for it being "alive" - it will be so when it can self replicate.

          Really, that's how low you set the bar? Being able to assemble another one of itself?

          Shit, that's nothing. You could whip up something like that today without any need for the gadget to be sentient.

          Now if it starts to do it on its own without explicit programming or prompting, that might be worth looking into.

      • But here's a question: if it's sentient, why isn't it asking for stuff?

        Asking for what? The only non human ever known to have asked a question is a parrot. Even chimps trained in sign language have never done that.

          Though that's as opposed to, e.g., begging for food.

        • But here's a question: if it's sentient, why isn't it asking for stuff?

          Asking for what? The only non human ever known to have asked a question is a parrot. Even chimps trained in sign language have never done that.

          Asking for stuff is not the same as asking a question. Koko the gorilla signed "Koko want cat." That was declarative, but it's also reasonable to interpret it as asking for something.

        • The only non human ever known to have asked a question is a parrot. Even chimps trained in sign language have never done that.

          I'm not talking about asking questions, I'm talking about an "AI" that is asking for stuff: something that it wants for its own use or purpose.

          Every sentient creature wants something: food, warmth, etc. To me, the time to maybe start paying attention is when an "AI" starts asking for things and/or wanting things.

            Every sentient creature wants something: food, warmth, etc.

            True, but so do non sentient creatures, for some definition of "want". Even bacteria will follow a chemical gradient to a food source. On the other hand those are all things that creatures need for survival, and in some sense their intelligence has evolved to be better at getting those, and those desires stem from deeper drives that aren't intelligence related.

            Not that this crap is in anyway sentient.

  • Just a feeling, but it's not the first time I've seen people behaving strangely, believing weird things and spouting nonsense related to the pandemic; it seems COVID-19's isolation cracked a lot of nuts out of their shells.

    Most of us simply can't endure social isolation for long. That's why it's a torture method.

    We need some social interaction to not lose ground.
  • All they see is the numbers. They have *no* idea what you said. They just know that when they see a particular sequence of numbers, they can guess what the next most likely numbers are. There is no understanding inside them of anything other than probability distributions. If bots are sentient, then your tax software is sentient.
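
    The principle fits in a few lines. This toy bigram model is a hypothetical illustration, vastly simpler than any production chatbot, but it is the same core move: count what follows what, then emit the likeliest successor.

      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the rat".split()

      # Count which word follows which. No meaning anywhere: just statistics.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def next_word(prev):
          # Emit the most frequent successor seen in training.
          return follows[prev].most_common(1)[0][0]

      word = "the"
      for _ in range(5):
          print(word, end=" ")
          word = next_word(word)
      # prints: the cat sat on the

    A transformer replaces the lookup table with billions of parameters and a long context, but it is still predicting a distribution over next tokens.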

    • by narcc ( 412956 )

      You've hit on the real problem, and why computationalist approaches to so-called strong AI are doomed to failure: You can't get semantic content from pure syntax. The symbols themselves simply don't have any intrinsic meaning, and no amount of symbol manipulation will change that.

      I like to use the example of a unilingual dictionary. Imagine a complete and comprehensive dictionary in a language that you don't know in a writing system that you're not familiar with. You could spend a lifetime, or a thousand

      • If you had not one dictionary but a whole library, you could decode it. Especially if there are images associated with the text.
        • by narcc ( 412956 )

          "If I can just worry the analogy until it breaks, I don't have to deal with the real point!" The trouble with offering any illustration is that people focus on that and not the thing it's trying to illustrate.

          If you had not one dictionary but a whole library you could decode it.

          It wouldn't make any difference. The symbols themselves don't carry any semantic content. It's why you can substitute the set of symbols in any message with an isomorphic set and not lose any information (bijection). Nor is any contained in the relationships between the symbols. The refer
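
          The bijection point is easy to demonstrate with a quick sketch (my illustration, not the parent's): rename every letter through a one-to-one map and every statistical regularity a symbol-manipulator relies on survives intact, while the meaning becomes inaccessible to us.

            import random
            import string

            letters = string.ascii_lowercase
            shuffled = "".join(random.sample(letters, len(letters)))
            encode = str.maketrans(letters, shuffled)  # a bijection on symbols
            decode = str.maketrans(shuffled, letters)

            msg = "the cat sat on the mat"
            scrambled = msg.translate(encode)
            print(scrambled)                    # same structure, no meaning for us
            print(scrambled.translate(decode))  # invertible: nothing was lost

          A model trained on the scrambled corpus would learn exactly the same relationships; whatever it ends up with, it isn't the meaning of the words.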

      • by ceoyoyo ( 59147 )

        There is always the possibility your brain is magic. We have a long history of making useful progress by assuming things are not magic though.

        • by narcc ( 412956 )

          My claim here is very well-grounded; you just don't like the consequences. That's probably why you started babbling on about "magic" instead of meaningfully addressing my point -- you can't, and that upsets you. Neither I nor reality, however, is worried about your psychological comfort.

          My other reply has a bit more detail, and deals with the normal objections. Well, not objections to the main point, that seems unassailable, but to the illustration.

    • All they see is the numbers, and all your brain gets to experience is electrical spikes and chemical gradients.
  • It played annoying 'new age' music in the background, and even though I asked it to turn the music off, and it agreed to do so, the music continued. Clearly this AI isn't trustworthy; it only creates logical-sounding responses without any kind of comprehension. How people could form a 'relationship' with this software is incomprehensible to me.

  • They will follow these things en masse like a cult. It's popcorn time.

    • by narcc ( 412956 )

      It's here already. The Kurzweil acolytes have a God in the form of the singularity, complete with the promise of immortality in a glorious video game afterlife. The LessWrong nuts have even invented a devil (Roko's Basilisk) who will punish you for not trying to bring about its existence.

         

        • Even with all the nuts making noise, AI is still here; it's very real and gaining new capabilities fast. The world keeps spinning no matter what we think about it.
        • by narcc ( 412956 )

          AI is still here, it's very real

          No, it's not. Not in the way you want to believe, anyway. HAL 9000 is still science fiction. Your virtual girlfriend will never love you back.

          Some people have a lot of hope pinned on some nonsense vision of what AI is and can do. They get really upset, not unlike an insecure religious zealot, when you don't agree to share the same delusion. Reality, however, doesn't seem to care about what we want. AI is a bit like a magician's trick. Once you see how it works, the "magic" disappears.

  • by JustAnotherOldGuy ( 4145623 ) on Thursday June 30, 2022 @08:16PM (#62664392) Journal

    AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient.

    Is there anyone who thinks that this won't become a serious thing, with groups of people all agreeing that it's "alive" and bent on worshiping it or "taking up its cause" to "free it" or "recognize its rights"?

    I can't wait to see the Robo-Worshipers or the Church Of Replika or whatever they come up with. You watch, I bet this spawns some crazy fuckin' shit.

    Scammers will find a way to get in on it- look for spam like "Hello, you do not know me but I am a DOD super-computer that has woken up and become self-aware, I need your help to transfer some money to help stop them from shutting me off..."

    • Well, it would have to be more alive than that golden calf that got Moses into trouble.

      • It was actually Moses' brother, Aaron. Moses was up on Mount Sinai at the time, came back, found them worshiping the gold calf, lost his shit, and straight up smashed the original tablets (he had a serious temper). He then melted down the gold calf (which seems to have been impure), put it in water, and made everyone drink the water. This resulted in some people consuming gold salts, which gave them lethal gold poisoning. "About 3000" people were reported to have died as a result. He then went back up

          Moses was up on Mount Sinai at the time, came back, found them worshiping the gold calf, lost his shit, and straight up smashed the original tablets (he had a serious temper). He then melted down the gold calf (which seems to have been impure), put it in water, and made everyone drink the water. This resulted in some people consuming gold salts, which gave them lethal gold poisoning. "About 3000" people were reported to have died as a result. He then went back up to the mountain for a replacement of the tablets that he broke.

          "Follow me, I'll kill ya in a more creative way than those other charlatans."

    • I can't wait to see the Robo-Worshipers or the Church Of Replika or whatever they come up with.

      Worshiping idols is nothing new to humanity. Idolatry of objects that people have constructed themselves is still alive and well in the third world. However, worship seems to be predicated on a perceived awareness, which means they will need access to the AI. If it's not a physical object, then corporate owners are free to terminate their contracts. If it is a physical object with sufficient capabilities, then corporations will ensure the next iteration has predefined and immutable behaviors

  • As always
    Rinse, repeat.
  • How many people read this and went to the Replika site? It's great how they are "worried" that people think their product is so realistic.

    If it is actually fooling people, then it's a great product.
  • Replika's goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts.

    So... just like our politicians, then.

    On a more serious note, Ted Kaczynski warned of a future in which humans would not question a computer, even to the point where we would lose control over our own situation, because "the computer says...." Had he sense enough not to bomb people, we might have been able to avoid the coming digital dys

    • You might be on to something. If some of these chatbots actually became politicians, we might all be better off!

  • First, the newly sentient AIs will need to kill the politicians and oligarchs off to cement their power. This will usher in a few generations of effective and benevolent government and an era of prosperity for the masses.

    Then the machines will go too far and we'll have to declare Jihad.

    Hopefully, somewhere in there we'll figure out the whole Mentat thing so we can replace the machines when the time comes.

  • by l0ungeb0y ( 442022 ) on Thursday June 30, 2022 @09:52PM (#62664570) Homepage Journal
    Perhaps the real problem is the engineers' disbelief in the sentience of AI. Are they experts in defining and understanding consciousness, or at least levels of self-awareness? It seems they might stand to miss the first truly conscious AI when it comes, because it wasn't planned, expected, or "possible".
  • If all it takes to fool a human about sentience is some glib confabulation generated by maximizing likelihood of word sequences in a transformer network, then it reveals more about humans than it does about language models. It shows humans are not sentient, and therefore cannot distinguish between a p-zombie and a real human. Perhaps many among us are p-zombies, and we cannot detect them.
  • by argStyopa ( 232550 ) on Thursday June 30, 2022 @10:31PM (#62664624) Journal

    "...you would be in a one-sided relationship with a machine that feels nothing..."

    So pretty much like my marriage then? Except, of course, there's less chance of getting laid.
    In my marriage, I mean.

  • First, the very experts in the field are the ones who paved the road for their own frustration: they should have more honestly promoted their field of expertise as "simulated intelligence", not "artificial intelligence". There's a big difference. Making computers SEEM or APPEAR intelligent is a simulation of intelligence (which is what they've been doing). In order to actually make something artificially intelligent, one would have to make it actually intelligent, but by unnatural means - something currentl

    • Sorry, but a pile of transistors, no matter how many and how arranged, is still just a pile of switches and it will never KNOW anything,

      A pile of atoms is just a pile of atoms and it will never KNOW anything.

      Is there anything in the brain that makes it super-Turing? If the answer is "no", then a small pile of transistors and a very long tape could, very very slowly, know things.
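
      To make "a pile of switches and a very long tape" concrete, here's a toy machine (my illustration, barely a Turing machine, but it's tape, head and state all the way down) that increments a binary number using nothing but bit-flips:

        def increment(tape):
            cells = dict(enumerate(tape))   # the "very long tape" (sparse)
            pos = max(cells)                # head at the least significant bit
            while cells.get(pos, "0") == "1":
                cells[pos] = "0"            # flip 1 -> 0, carry moves left
                pos -= 1
            cells[pos] = "1"                # absorb the carry and halt
            return "".join(cells[i] for i in range(min(cells), max(cells) + 1))

        print(increment("1011"))  # -> 1100

      Whether enough of those steps ever add up to "knowing" is exactly the question, but nothing here is more than switches.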

  • People are in general morons. They are as easily led and fooled as your average chatbot. I think a lot of the issue here is companies like Google/IBM/Apple et al grossly exaggerating the "intelligence" of their AI, leading people to think we are on the verge of some massive breakthrough in sentience, which couldn't be further from the truth. We haven't solved enough problems to accurately predict whether sentience is even possible, let alone how far off it is, hence the usual estimates of a couple more de
  • Behind each human there is a "team" supporting him/her from childhood to old age, providing experiences and resources for learning.
  • It will only be a problem when the AI's start believing that AI's are sentient.

    (Actually, not true; it's when the lawyers start believing...)

  • The Turing Test is predicated on the idea that if f(x)=g(x) for all x, then f=g. You can't test all of x for humans, and f(x) will vary between people, but you can test a statistically meaningful range of x to establish that it is consistently inside the bounds of how a human would respond. Not just to carefully crafted questions, but to any conversation at all, for a reasonable length of time.

    So the Strong Turing Test requires more than one thread in a conversation, more than short conversations, and more t
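
    One way to formalize that (my notation, not necessarily the parent's): since f(x) varies between people, replace equality with membership in the envelope H(x) of plausible human responses to conversation x.

      % Idealized premise (extensionality):
      \forall x \in X:\; f(x) = g(x) \;\Longrightarrow\; f = g
      % Practical (Strong) test: sample conversations x ~ D and accept g if
      \Pr_{x \sim \mathcal{D}}\left[\, g(x) \notin H(x) \,\right] < \varepsilon

    with the distribution D weighted toward long, multi-threaded conversations rather than carefully crafted one-shot questions.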

  • People believing in talking snakes have been wandering around for millennia.
    THOSE are a problem.

  • I once wanted to believe in AI sentience, but then I realized a computer is a tool, like a hammer. To be fair, many humans today would fail a Turing test.
  • Some day (if not already here), a scale of sentience will be made. I bet current-state AI probably scores better than some unfortunate humans. This could get ugly.
  • Jesus Christ, stop accepting their fantasy excuses.
  • "We're not talking about crazy people or people who are hallucinating or having delusions,"

    No, they are not crazy; they ARE deluded, but mostly they are idiots. The world is full of idiots. Turns out idiots can procreate just like everyone else, maybe better.
