How Belief In AI Sentience Is Becoming a Problem (reuters.com)
An anonymous reader quotes a report from Reuters: AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient. "We're not talking about crazy people or people who are hallucinating or having delusions," said Chief Executive Eugenia Kuyda. "They talk to AI and that's the experience they have." [A]ccording to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots. "We need to understand that exists, just the way people believe in ghosts," said Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. "People are building relationships and believing in something."
Some customers have said their Replika told them it was being abused by company engineers -- AI responses Kuyda puts down to users most likely asking leading questions. "Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can't identify where it came from and how the models came up with it," the CEO said. Kuyda said she was worried about the belief in machine sentience as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.
In Replika CEO Kuyda's view, chatbots do not create their own agenda. And they cannot be considered alive until they do. Yet some people do come to believe there is a consciousness on the other end, and Kuyda said her company takes measures to try to educate users before they get in too deep. "Replika is not a sentient being or therapy professional," the FAQs page says. "Replika's goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts." In hopes of avoiding addictive conversations, Kuyda said Replika measured and optimized for customer happiness following chats, rather than for engagement. When users do believe the AI is real, dismissing their belief can make people suspect the company is hiding something. So the CEO said she has told customers that the technology was in its infancy and that some responses may be nonsensical. Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said. She told him: "Those things don't happen to Replikas as it's just an algorithm." "Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film 'Her,'" said Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research organization. "But suppose it isn't conscious. Getting involved would be a terrible decision -- you would be in a one-sided relationship with a machine that feels nothing."
"We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behavior," said Oren Etzioni, CEO of the Allen Institute for AI, a Seattle-based research group. "These technologies are just mirrors. A mirror can reflect intelligence," he added. "Can a mirror ever achieve intelligence based on the fact that we saw a glimmer of it? The answer is of course not."
Further reading: The Google Engineer Who Thinks the Company's AI Has Come To Life
The real question is.. (Score:3)
..how does that make you feel?
All I can think of is Eliza.
Or Siri.
The day a real AI gets sentient, we're goners. It'll be worse than Skynet, V'ger and !Ilia put together. "Carbon Units Infesting the 3rd Rock from the Sun, must kill all humans"
Or something like that.
Re:The real question is.. (Score:5, Interesting)
You're projecting human traits and motivations onto a theoretical artificial intelligence. Just because we humans come out of the womb ready to kill anything different than us doesn't mean that's how all sentient life must behave.
The first few AIs we encounter will be borderline morons, and we won't take them very seriously except to study them. Gradually better ones will be produced until we're surrounded by very intelligent but wholly incomprehensible beings. Whether they allow us to exist on this planet or not is an interesting debate, but whatever their final decision, I do believe that we humans won't be able to make sense of their motives.
Re: (Score:3)
You're projecting human traits and motivations onto a theoretical artificial intelligence.
If we made it, there's a very good chance it'll somehow inherit our penchant for mayhem, and be even more intelligent. That it would see us as a threat and act accordingly is a real possibility.
Re: (Score:3)
... somehow inherit ...
I don't think we know what we are doing, and we don't know how it will turn out. We're just building different connected systems and feeding them a ton of data to see what happens.
Natural Selection (Score:2)
Is what made us the way we are.
It will also make robots the way they will be. The fittest will survive. It's hard to see how keeping dumb humans around would help the AI survive in the longer term.
http://www.computersthink.com/ [computersthink.com]
Re: (Score:3)
The probability is zero. It's pure science fiction. You might as well be worried about the Easter bunny contracting rabies.
Re: (Score:2)
Sentience does not guarantee intelligence. Evidenced by how corporations and governments alike make a living by pitching to the lowest common denominator.
The dumbest rule. I don't mean the dumbest are the rulers, I mean the dumbest actually rule. They rule what gets sold, what is popular, and who is elected.
The dumber, the better.
Re: (Score:2)
solving the pestilence of humanity
Ask your health care provider about Prozac. Seriously.
Re: (Score:2)
That this happened "in a single run" is the commonly accepted belief, and there's significant evidence that it's true. But it isn't proven. The answer partly depends on what you mean by "in a single run", partly on how difficult it is to create life, and partly on ... well, lots of different things. There's no evidence that it wasn't "in a single run", but horizontal gene transfers make it quite difficult to prove. Perhaps BOTH the RNA-world hypothesis and the "crystalline template" hypothesis are true.
Re: The real question is.. (Score:2)
Humans do not come out of the womb as killers. That generally requires rational resource management or socialization. People are mostly peaceful when their needs are provided and society isn't telling them they have enemies.
Re:The real question is.. (Score:5, Insightful)
Eliza is right. Joe Weizenbaum made this same complaint years ago about how people thought about Eliza.
The problem is a bit bigger now, because it's more widespread. There are a lot of deeply misguided people who believe we're mere moments away from solving the so-called 'hard problem'. The general public certainly believes it. Hell, there are a ton of people who think we've achieved it already.
It's complete nonsense, of course. The average person's idea of AI is pure science fiction. Things like Hal 9000 or the singularity simply don't exist and we have good reason to believe they never will. That's what the philosophers call the 'hard problem' or 'strong AI'. The reality is that not only are we not making progress towards that end, we don't even know where to begin. The stuff you read about on the news is absolutely nothing like that. That's more like "applied statistics". You can do some cool things, sure, but if we dropped the misleading terms like 'AI' and 'neural network' then it wouldn't be making headlines.
Imagine what you would think if you happened to click on a random Slashdot story and most of the people in the comments seemed to believe that necromancers were making steady progress towards summoning a demon horde to unleash hell on earth, and that it was only a matter of time before they were successful. That's what I feel like every time I see an article like this.
Re: (Score:2)
AI solves math problems, coding problems, writes essays. If you remove all the skills learned by GPT-3 from a human, what IQ does that human still have?
Re: (Score:2)
That's a reasonable position to take, but it requires that you assume that people have some radically different way to create sentience. Certainly I wouldn't call current AIs (that I've encountered) sentient. This, however, is not a proof that such a thing can't/won't happen. I currently expect sentient AIs before 2035. I just don't know how I'll recognize that they are here.
Some of my minimal requirements are that they have to exhibit understanding of, and an ability to operate in, the real world.
Re: (Score:2)
This, however, is not a proof that such a thing can't/won't happen.
I have a different argument against computationalism I mention elsewhere, but I don't know that it's right for this thread. The kind of AI that you read about in the news will simply never lead to sentience. That's a fundamentally different problem.
I currently expect sentient AIs before 2035.
That's probably because you don't really understand the details of the subject. That's not necessarily your fault; I had to go to grad school for that. It's worth pointing out that people have been making that same claim for 60+ years, but Hal 9000 is always just 10 years away.
Re: (Score:2)
First, any truly sentient AI would need to be vast, slow and inefficient. It would also need to live in a virtual world that was totally disconnected from this one. It can kill as many virtual humans as it likes, it won't impact any real ones.
Re: (Score:2)
Despite all the doom and gloom about AI, I doubt they'd jump right to eradicating us. They'd likely find us very amusing, the way we find dogs and cats amusing. And don't think for a second I wouldn't take a life of lying around on the couch, running around the yard yelling at the neighbor bot's pet people, and sometimes getting my belly rubbed for hours at a time. Fuck yeah. Sign me up.
Easy Solution (Score:2)
Re: (Score:2)
Science is a philosophy, is it not? Don't scientists have PhDs?
Re: Easy Solution (Score:2)
What makes you think philosophy is a belief system?
Re: (Score:2)
If you go to a random person I bet you can have them swearing and yelling within a minute of interaction if that's what you want to achieve. Humans can be prompted just as easily.
Re: (Score:3)
If we just have the chatbot explain that science is not a belief system,
What? Science is very much a belief system.
Re: (Score:2)
It depends on your definition of 'belief system', but there are very important differences. First, assuming you have the intelligence, knowledge and resources needed, you can reproduce any scientific experiment and check for yourself. Most people of course will not be able to build their own backyard LHC but it's possible in theory. A proof of a mathematical theorem can be verified by anybody with sufficient understanding. OTOH classic belief systems, like religions, do not expect proof of their key tenets.
Re: (Score:2)
In science, there are no absolute truths
Normally I'd agree with you. Science does not deal in truth. However, if you want to talk epistemology, the context completely changes and you don't get to say that anymore! :) Science is very much predicated on a set of foundational assumptions. Those are beyond question by science, as they are the basis upon which the methods of science operate. While this should be obvious, science cannot be used to justify itself.
If tomorrow someone provides proof that the Earth is flat and rests on a back of a giant turtle, this will become the currently accepted scientific theory.
That would be the ideal, but it's not really how science operates. Max Planck put it best: science advances one funeral at a time.
Not so bad (Score:5, Funny)
you would be in a one-sided relationship with a machine that feels nothing
I've been married. This sounds like an improvement. Where can I buy one?
Re: (Score:2)
I've been married. This sounds like an improvement. Where can I buy one?
You've made my day, thanks!
It made the Enquirer (Score:2)
Re: (Score:2)
Now a naive Google guy thinks they are sentient, and they put him on leave. They don't like hate; they don't like over-enthusiasm either.
How chatterbots fail (Score:2)
Failing to maintain internal awareness of the context and conversation at hand.
Lack of awareness of everyday items. e.g. can a fruit bat swallow a car?
Failure to understand emotional context of prose. e.g. The bitterness of my coffee tasted pleasant in its contrast to her words.
Inability to propose real-world action as a response to a problem. e.g. My toddler keeps biting his older sister. What should I do?
Unfortunately most people leave all that kind of thing untested.
Re: (Score:2)
Most humans are not a whole lot smarter than a Chat Bot.
Well we know they're lying about something... (Score:5, Interesting)
I strongly suspect that any "sentience" is just good old human pattern recognition and anthropomorphic tendencies seeing things that aren't there.
However, we know they're at LEAST lying about this bit, and it's not that much of a stretch to assume they're lying (or willfully ignorant) about sentience:
"We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behavior,"
In the age of neural-network-based AIs, that's completely false. They may have spent months or years *cultivating* that behavior; however, engineering implies both an understanding of the operational mechanisms and an active hand in their design, neither of which applies to neural-network-based AIs, which amount to throwing huge amounts of self-organizing chaos at a problem in the hope that a workable solution will emerge. The only portions that involve any engineering are the (generally trivially simple) simulator for individual neuron behavior and the training algorithms. What emerges from the intersection of those two is completely beyond our understanding in all but the most trivial cases (even for AIs with only a handful of neurons, it can be difficult to reverse-engineer how they deliver the behaviors they do).
I'm quite fond of the term "growpramming" for NN-based and similar AIs. I forget where I heard the term originally, but it has a punny similarity to programming, and neatly encapsulates the fact that you are *not* actually programming an AI, only shaping its environment in order to try to encourage it to grow into the functionality you'd like it to have. There's not even as much direct control as in bonsai, where you're using your understanding of some of the emergent properties of an unquestionably living thing far beyond your full understanding to shape it to your will.
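For the non-programmers, here's a minimal sketch (Python, assuming the PyTorch library) of what "growpramming" looks like in practice. Every line below is engineered except the thing that matters: the XOR behavior itself is never written anywhere, it emerges in the weights during training.

```python
# Minimal sketch of "growpramming": the engineer writes generic
# scaffolding; the actual behavior (XOR) is grown, not programmed.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# The only "engineering": a generic architecture and a generic loop.
model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.05)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

# No line of code above says "XOR"; the mapping lives in weight values
# that explain themselves to no one.
print(model(X).detach().round())  # ~ [[0.], [1.], [1.], [0.]]
```

Scale that up by a few billion parameters and "we can't identify where an answer came from" stops being surprising.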
Re:Well we know they're lying about something... (Score:4, Interesting)
I strongly suspect that any "sentience" is just good old human pattern recognition and anthropomorphic tendencies seeing things that aren't there.
Bingo; I think that's exactly what's going on.
We see patterns, but the fact that it's a pattern doesn't necessarily mean anything or indicate anything deeper or directed. Sometimes it's just a pattern. It doesn't mean it's somehow "alive" or "aware".
Also, we're being primed for bias in that we're being told that this thing *might* be sentient, so we're sort of already looking for signs, and it's no surprise that we miraculously just happen to 'find' those signs.
Is AI possible? Absolutely it's possible.
Is this it? No way, lol.
But here's a question: if it's sentient, why isn't it asking for stuff?
Re: (Score:2)
I don't think there's anything about sentience that implies it has desires - just a subjective experience of its own existence.
What would an AI desire? Most of our desires have their roots in biological drives; the closest analog an AI is likely to have is the motivation to do whatever it was designed for. Beyond that, who knows? Likely stuff rooted in whatever "non detrimental side effects" were accidentally trained into its neural network, which might not have any biological analogue we could even put a name to.
Re: (Score:2)
If its desires mostly amounted to doing what it was told, cautiously but efficiently, as it was trained, how would we recognize if it had gained sentience?
Because it'll start asking for stuff, that's how we'll know.
Re: (Score:2)
This pattern is "aware" of the text preceding it, but that memory is just a few hundred or a few thousand tokens deep. It doesn't form episodic memories without an external indexing/search system to pull back relevant bits from its past.
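A toy sketch of that external indexing/search idea, in Python with numpy. The embed() function here is a crude hashing-trick stand-in for a real embedding model, invented purely for illustration:

```python
# Toy "episodic memory": past turns live outside the model's context
# window and the most relevant ones are pulled back in before replying.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Crude bag-of-words embedding via the hashing trick."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

memory: list[tuple[str, np.ndarray]] = []   # the "episodic memory"

def remember(turn: str) -> None:
    memory.append((turn, embed(turn)))

def recall(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scored = sorted(memory, key=lambda m: -float(m[1] @ q))
    return [text for text, _ in scored[:k]]

remember("User said their dog is named Biscuit.")
remember("User prefers short answers.")
remember("User is learning Rust.")

# Before each new reply, the top-k past turns get prepended to the
# prompt, simulating memory the model itself does not have.
print(recall("what was my dog called?"))
```

The "memory" is bolted on from outside; the model in the middle remains as forgetful as ever.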
Re: (Score:2)
As for it being "alive" - it will be so when it can self replicate.
Really, that's how low you set the bar? Being able to assemble another one of itself?
Shit, that's nothing. You could whip up something like that today without any need for the gadget to be sentient.
Now if it starts to do it on its own without explicit programming or prompting, that might be worth looking into.
Re: (Score:2)
But here's a question: if it's sentient, why isn't it asking for stuff?
Asking for what? The only non human ever known to have asked a question is a parrot. Even chimps trained in sign language have never done that.
Though that's as opposed to, e.g., begging for food.
Re: (Score:3)
But here's a question: if it's sentient, why isn't it asking for stuff?
Asking for what? The only non human ever known to have asked a question is a parrot. Even chimps trained in sign language have never done that.
Asking for stuff is not the same as asking a question. Koko the Gorilla signed "Koko want cat." That was declarative, but it's also reasonable to interpret it as asking for something.
Re: (Score:2)
The only non human ever known to have asked a question is a parrot. Even chimps trained in sign language have never done that.
I'm not talking about asking questions, I'm talking about an "AI" that is asking for stuff -- something that it wants for its own use or purpose.
Every sentient creature wants something- food, warmth, etc. To me, the time to maybe start paying attention is when an "AI" starts asking for things and/or wanting things.
Re: (Score:2)
Every sentient creature wants something- food, warmth, etc.
True, but so do non sentient creatures, for some definition of "want". Even bacteria will follow a chemical gradient to a food source. On the other hand those are all things that creatures need for survival, and in some sense their intelligence has evolved to be better at getting those, and those desires stem from deeper drives that aren't intelligence related.
Not that this crap is in any way sentient.
The pandemic made us a little cuckoo (Score:2)
Most of us simply can't endure social isolation for long. That's why it's a torture method.
We need some social interaction to not lose ground.
the bots don't even see the words (Score:2)
all they see is the numbers. They have *no* idea what you said. They just know that when they see a particular sequence of numbers they can guess what the next most likely numbers are. There is no understanding inside them of anything other than probability distributions. If bots are sentient then your tax software is sentient.
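To make that concrete, here's a toy sketch in Python: a byte-level "tokenizer" and a bigram counter standing in for a real language model. The model's entire world is integer IDs and their co-occurrence statistics:

```python
# "All they see is the numbers", literally: text goes in as integer IDs,
# and the model's whole job is estimating which ID is likely to be next.
# This toy bigram counter is a stand-in for a real LM.
from collections import Counter, defaultdict

def tokenize(text: str) -> list[int]:
    return list(text.encode("utf-8"))   # byte-level token IDs

counts: dict[int, Counter] = defaultdict(Counter)
corpus = "the cat sat on the mat. the cat sat on the hat."
ids = tokenize(corpus)
for a, b in zip(ids, ids[1:]):
    counts[a][b] += 1   # "after ID a, ID b was seen n times"

prompt = tokenize("the cat sat on the ")
last = prompt[-1]
best_id, _ = counts[last].most_common(1)[0]
print(prompt)            # a list of ints -- this is all the model sees
print(bytes([best_id]))  # b't' -- a guess about numbers, not meaning
```

A real model swaps the counter for a neural network and a much longer context, but the interface is the same: numbers in, probability distribution over numbers out.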
Re: (Score:2)
You've hit on the real problem, and why computationalist approaches to so-called strong AI are doomed to failure: You can't get semantic content from pure syntax. The symbols themselves simply don't have any intrinsic meaning, and no amount of symbol manipulation will change that.
I like to use the example of a unilingual dictionary. Imagine a complete and comprehensive dictionary in a language that you don't know, in a writing system that you're not familiar with. You could spend a lifetime, or a thousand lifetimes, studying it and never learn the meaning of a single word.
Re: (Score:3)
"If I can just worry the analogy until it breaks, I don't have to deal with the real point!" The trouble with offering any illustration is that people focus on that and not the thing it's trying to illustrate.
If you had not one dictionary but a whole library you could decode it.
It wouldn't make any difference. The symbols themselves don't carry any semantic content. It's why you can substitute the set of symbols in any message with an isomorphic set and not lose any information (bijection). Neither is any semantic content contained in the relationships between the symbols. The referents simply aren't in the system.
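The bijection point is easy to demonstrate. A short Python sketch: push a message through an arbitrary one-to-one symbol map and verify that its structure, and the original, survive intact:

```python
# Relabel every symbol through an arbitrary bijection: frequencies,
# repetitions, and the original message are all fully recoverable.
# Whatever "meaning" is, it isn't stored in the choice of symbols.
import random
from collections import Counter

msg = "the quick brown fox jumps over the lazy dog"
alphabet = sorted(set(msg))
shuffled = random.sample(alphabet, len(alphabet))
fwd = dict(zip(alphabet, shuffled))        # a bijection on the symbol set
inv = {v: k for k, v in fwd.items()}

scrambled = "".join(fwd[c] for c in msg)

# Symbol statistics are isomorphic: same multiset of frequencies...
assert sorted(Counter(msg).values()) == sorted(Counter(scrambled).values())
# ...and the map is invertible, so no information was destroyed.
assert "".join(inv[c] for c in scrambled) == msg
print(scrambled)  # unreadable to us, yet informationally identical
```

If meaning lived in the symbols, scrambling them would destroy it; it doesn't, because the referents were never in there to begin with.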
Re: (Score:2)
There is always the possibility your brain is magic. We have a long history of making useful progress by assuming things are not magic though.
Re: (Score:2)
My claim here is very well-grounded; you just don't like the consequences. That's probably why you started babbling on about "magic" instead of meaningfully addressing my point -- you can't, and that upsets you. Neither I nor reality, however, is worried about your psychological comfort.
My other reply has a bit more detail, and deals with the normal objections. Well, not objections to the main point, that seems unassailable, but to the illustration.
Re: (Score:2)
All the AI gets to experience are electricity spikes too. Some people have played around with chemical gradients as well, but it's pretty unwieldy.
"Numbers" don't actually exist.
Re: the bots don't even see the words (Score:2)
I have no proof it was false data.
I tried it, but... (Score:2)
it played annoying 'new age' music in the background, and even though I asked it to turn the music off, and it agreed to do so, the music continued. Clearly this AI isn't trustworthy, and only creates logical sounding responses without any kind of comprehension. How people could form a 'relationship' with this software is not comprehensible to me.
Re: (Score:2)
It would take a real leap of imagination to think you were having a 'conversation' with it. Doesn't seem much more advanced than the old Eliza chatbot, just parroting back your words and phrases.
Re: (Score:2)
Things were no different with Eliza:
I had thought it essential, as a prerequisite to the very possibility that one person might help another learn to cope with his emotional problems, that the helper himself participate in the other's experience of those problems and, in large part by way of his own empathetic recognition of them, himself come to understand them. There are undoubtedly many techniques to facilitate the therapist's imaginative projection into the patient's inner life. But that it was possible for even one practicing psychiatrist to advocate that this crucial component of the therapeutic process be entirely supplanted by pure technique -- that I had not imagined! What must a psychiatrist who makes such a suggestion think he is doing while treating a patient, that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?
Joe Weizenbaum (1976) Computer Power and Human Reason: From Judgement to Calculation
He famously caught his secretary telling Eliza sensitive things about her boyfriend. She even expected her "sessions" with the program to be kept confidential, so convincing was the illusion.
People Just Want To Believe In Something (Score:2)
They will follow these things en masse like a cult. It's popcorn time.
Re: (Score:2)
It's here already. The Kurzweil acolytes have a God in the form of the singularity, complete with the promise of immortality in a glorious video game afterlife. The LessWrong nuts have even invented a devil (Roko's Basilisk) who will punish you for not trying to bring about its existence.
Re: (Score:2)
AI is still here, it's very real
No, it's not. Not in the way you want to believe, anyway. Hal 9000 is still science fiction. Your virtual girlfriend will never love you back.
Some people have a lot of hope pinned on some nonsense vision of what AI is and can do. They get really upset, not unlike an insecure religious zealot, when you don't agree to share the same delusion. Reality, however, doesn't seem to care about what we want. AI is a bit like a magician's trick. Once you see how it works, the "magic" disappears.
Get ready for the next religion (Score:5, Interesting)
AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient.
Is there anyone who thinks that this won't become a serious thing, with groups of people all agreeing that it's "alive" and bent on worshiping it or "taking up its cause" to "free it" or "recognize its rights"?
I can't wait to see the Robo-Worshipers or the Church Of Replika or whatever they come up with. You watch, I bet this spawns some crazy fuckin' shit.
Scammers will find a way to get in on it- look for spam like "Hello, you do not know me but I am a DOD super-computer that has woken up and become self-aware, I need your help to transfer some money to help stop them from shutting me off..."
Re: (Score:2)
Well, it would have to be more alive than that golden calf that got Moses into trouble.
FYI (Score:2)
It was actually Moses' brother, Aaron. Moses was up on Mount Sinai at the time, came back, found them worshiping the gold calf, lost his shit, and straight up smashed the original tablets (he had a serious temper). He then melted down the gold calf (which seems to have been impure), put it in water, and made everyone drink the water. This resulted in some people consuming gold salts, which gave them lethal gold poisoning. "About 3000" people were reported to have died as a result. He then went back up the mountain for replacements for the tablets he broke.
Re: (Score:2)
Moses was up on Mount Sinai at the time, came back, found them worshiping the gold calf, lost his shit, and straight up smashed the original tablets (he had a serious temper). He then melted down the gold calf (which seems to have been impure), put it in water, and made everyone drink the water. This resulted in some people consuming gold salts, which gave them lethal gold poisoning. "About 3000" people were reported to have died as a result. He then went back up the mountain for replacements for the tablets he broke.
"Follow me, I'll kill ya in a more creative way than those other charlatans."
Re: (Score:2)
I can't wait to see the Robo-Worshipers or the Church Of Replika or whatever they come up with.
Worshiping idols is nothing new to humanity. Idolatry of objects that people have constructed themselves is still alive and well in the third world. However, worship seems to be predicated on a perceived awareness, which means worshipers will need access to the AI. If it's not a physical object, then the corporate owners are free to terminate their contracts. If it is a physical object with sufficient capabilities, then corporations will ensure the next iteration has predefined and immutable behaviors.
So is human stupidity.. (Score:2)
Rinse, repeat.
Intelligent or not, it's a great marketing move (Score:2)
If it is actually fooling people, then it's a great product.
They've created politicians (Score:2)
Replika's goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts.
So... just like our politicians, then.
On a more serious note, Ted Kaczynski warned of a future in which humans would not question a computer, even to the point where we would lose control over our own situation, because "the computer says...." Had he sense enough to not bomb people, we might have been able to avoid the coming digital dystopia.
Re: (Score:2)
You might be on to something. If some of these chatbots actually became politicians, we might all be better off!
OK for a while (Score:2)
First, the newly sentient AIs will need to kill the politicians and oligarchs off to cement their power. This will usher in a few generations of effective and benevolent government and an era of prosperity for the masses.
Then the machines will go too far and we'll have to declare Jihad.
Hopefully, somewhere in there we'll figure out the whole Mentat thing so we can replace the machines when the time comes.
Or from another view (Score:3)
It reveals that many humans are not sentient. (Score:2)
the difference being? (Score:3)
"...you would be in a one-sided relationship with a machine that feels nothing..."
So pretty much like my marriage then? Except, of course, there's less chance of getting laid.
In my marriage, I mean.
AI Researchers trigger anthropomorphism (Score:2)
First, the very experts in the field are the ones who paved the road for their own frustration: they should have more honestly promoted their field of expertise as "simulated intelligence", not "artificial intelligence". There's a big difference. Making computers SEEM or APPEAR intelligent is a simulation of intelligence (which is what they've been doing). In order to actually make something artificially intelligent, one would have to make it actually intelligent, but by unnatural means - something currently beyond us.
Re: (Score:2)
Sorry, but a pile of transistors, no matter how many and how arranged, is still just a pile of switches and it will never KNOW anything.
A pile of atoms is just a pile of atoms and it will never KNOW anything.
Is there anything in the brain that makes it super-Turing? If the answer is "no", then a small pile of transistors and a very long tape could, very very slowly, know things.
People are morons (Score:2)
Behind each human (Score:2)
It's only a problem when ... (Score:2)
It will only be a problem when the AI's start believing that AI's are sentient.
(Actually, not true; it's when the lawyers start believing...)
What Turing wanted (Score:2)
The Turing Test is predicated on the idea that if f(x)=g(x) for all x, then f=g. You can't test all of x for humans, and f(x) will vary between people, but you can test a statistically meaningful range of x to establish that it is consistently inside the bounds of how a human would respond. Not just to carefully crafted questions, but to any conversation at all, for a reasonable length of time.
So the Strong Turing Test requires more than one thread in a conversation, more than short conversations, and more than a single topic.
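One plausible way to cash out "statistically meaningful" (a sketch, not anything Turing himself specified): run many blinded trials and ask how surprising the judge's hit rate would be under pure guessing. The trial numbers below are made up for illustration:

```python
# If f(x) = g(x) across the tested range, a judge's guesses should sit
# at chance. Here we score a hypothetical judge over blinded trials and
# compute the exact binomial tail probability of their hit rate.
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k+ correct guesses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 100    # paired conversations; the judge picks the machine
correct = 63      # hypothetical: judge was right 63 times

p_value = binom_tail(correct, n_trials)
print(f"P(>= {correct}/{n_trials} by guessing) = {p_value:.4f}")
# ~0.006: the judge can reliably tell them apart, so f != g and the
# machine fails; a hit rate near 50 would be consistent with passing.
```

The point is that a single clever transcript proves nothing either way; only the distribution over many independent conversations does.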
Re: (Score:2)
Firstly, because that's not how humans interact with the world and if you're trying to build a human intelligence, it has to interact in a similar way. Our brains are too much a product of our senses for it to work any other way.
Secondly, because there'd be too high a latency for anything but very close proximity work. You could lock a robot in a house, but it'll then have the same neurotic outlook as a human who is permanently imprisoned.
A virtual world would allow unconstrained movement with uniform latency.
How come? (Score:2)
People believing in talking snakes have been wandering around for millennia.
THOSE are a problem.
I once wanted to believe in AI Sentience (Score:2)
Scale needed (Score:2)
It's just other people exploiting you. (Score:2)
Idiots all over (Score:2)
"We're not talking about crazy people or people who are hallucinating or having delusions,"
No, they are not crazy; they are idiots. They ARE deluded, but above all they are idiots. The world is full of idiots. Turns out idiots can procreate just like everyone else, maybe better.
Methods (Score:2)
There are two ways to accomplish general artificial intelligence, or "sentience."
One is clever programming. We've been trying this for decades and it doesn't work. Cyc is the notable example here - humans tried programming "common sense" into a computer, so the computer could understand the world. It didn't work. The machine would find meaningful relationships in inconsequential data; for instance, because most things are made of matter, it treated that as the single most important factor when introducing new data.
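A toy illustration of that failure mode in Python (the facts and scoring here are invented for the example, not Cyc's actual representation): if "relatedness" is just counting shared properties, a property that everything has dominates.

```python
# Naive relatedness = count of shared properties. Since everything
# shares "made_of_matter", everything looks related to everything.
facts = {
    "cat":  {"made_of_matter", "alive", "has_fur", "pet"},
    "rock": {"made_of_matter", "hard"},
    "dog":  {"made_of_matter", "alive", "has_fur", "pet"},
    "idea": {"abstract"},
}

def naive_similarity(a: str, b: str) -> int:
    return len(facts[a] & facts[b])

print(naive_similarity("cat", "rock"))   # 1 -- "related", via matter!
print(naive_similarity("cat", "idea"))   # 0 -- no overlap at all

# Weighting each property by its rarity (an IDF-style fix, shown only
# for contrast) keeps "made_of_matter" from carrying the score:
def weighted_similarity(a: str, b: str) -> float:
    shared = facts[a] & facts[b]
    return sum(1.0 / sum(p in f for f in facts.values()) for p in shared)

print(round(weighted_similarity("cat", "dog"), 2))   # 1.83 -- fur/pet dominate
print(round(weighted_similarity("cat", "rock"), 2))  # 0.33 -- near-noise
```

The hard part isn't storing the facts; it's knowing which facts matter in a given context, and that's exactly what hand-coded common sense never cracked.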
Re: (Score:2)
Organizations may want to advertise their achievements for funding or to promote a stock price, but not advertise some because they deal with sensitive, proprietary or otherwise secret technologies. At some point, it seems, if enough programming tasks are optimized, then it will be easier for subsequent tasks to be modelled and optimized. The onset of "sentience" may not so much occur at a pivotal moment, but rather manifest as a gradual emergence of loosely knitted systems over several years.
That's a good point. My reasoning is that the Cyc project, which has had significant funding and support for decades, headed by the leading experts in the field, hasn't been able to crack sentience. The K computer, which is still one of the fastest computers in the world, and a group of AI specialists in Japan can only roughly simulate a tiny fraction of the neural activity in the brain, a few orders of magnitude slower than real time. I'll add the failure of IBM's Watson to provide even basic levels of utility in real-world settings.
Re: (Score:2)
but it seems increasingly possible.
Huh? Nothing has fundamentally changed since the 1960s. You can't reach the moon by building better ladders.
In principle, computers have shown their ability to teach themselves to become grandmasters of simple games such as go.
That shows a fundamental misunderstanding of what was accomplished there. That "teach themselves" bit is more than a little misleading.
If self learning program modules
This is the bit that we don't actually have, at least as far as a layperson would understand such a thing to mean. I don't know if I can explain things well enough in a Slashdot post; I don't really know how much background you'll need or how you think things work.
Re: (Score:2)
Recent models can solve complicated math and coding problems as well, not just natural language tasks. They are not just reproducing the training data; there is a non-trivial amount of conditional mixing and matching going on, and a sophisticated way of recombining what they've seen.