
Is Debating AI Sentience a Dangerous Distraction? (msn.com)
"A Google software engineer was suspended after going public with his claims of encountering 'sentient' artificial intelligence on the company's servers," writes Bloomberg, "spurring a debate about how and whether AI can achieve consciousness."
"Researchers say it's an unfortunate distraction from more pressing issues in the industry." Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What's more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.
Lemoine's stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. "Lots of effort has been put into this sideshow," she said. "The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems" that can cause real-world harm. Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system's apparent sentience, Bender said, it creates a distance from the AI creators' direct responsibility for any flaws or biases in the programs....
"Instead of discussing the harms of these companies," such as sexism, racism and centralization of power created by these AI systems, everyone "spent the whole weekend discussing sentience," Timnit Gebru, formerly co-lead of Google's ethical AI group, said on Twitter. "Derailing mission accomplished."
The Washington Post seems to share their concern. First they report more skepticism about a Google engineer's claim that the company's LaMDA chatbot-building system had achieved sentience. "Both Google and outside experts on AI say that the program does not, and could not possibly, possess anything like the inner life he imagines. We don't need to worry about LaMDA turning into Skynet, the malevolent machine mind from the Terminator movies, anytime soon."
But the Post adds that "there is cause for a different set of worries, now that we live in the world Turing predicted: one in which computer programs are advanced enough that they can seem to people to possess agency of their own, even if they actually don't...." While Google has distanced itself from Lemoine's claims, it and other industry leaders have at other times celebrated their systems' ability to trick people, as Jeremy Kahn pointed out this week in his Fortune newsletter, "Eye on A.I." At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like "umm" and "mm-hm," that fooled receptionists into thinking it was a human when it called to book appointments. (After a backlash, Google promised the system would identify itself as automated.)
"The Turing Test's most troubling legacy is an ethical one: The test is fundamentally about deception," Kahn wrote. "And here the test's impact on the field has been very real and disturbing." Kahn reiterated a call, often voiced by AI critics and commentators, to retire the Turing test and move on. Of course, the industry already has, in the sense that it has replaced the Imitation Game with more scientific benchmarks.
But the Lemoine story suggests that perhaps the Turing test could serve a different purpose in an era when machines are increasingly adept at sounding human. Rather than being an aspirational standard, the Turing test should serve as an ethical red flag: Any system capable of passing it carries the danger of deceiving people.
Considering the entire SW of America (Score:5, Interesting)
Re: (Score:2)
Re: (Score:3, Informative)
But you failed on logic, political science, critical thinking... Did you even graduate from college? A non-correspondence school?
Faux News was always propaganda; it was founded by a famously successful propagandist who thought everybody picked on Nixon for his crimes against democracy, so he created an even more corrupt media outlet to counter the near-consensus among even slightly decent people.
Re: (Score:1)
Re:Considering the entire SW of America (Score:5, Informative)
Even a representative constitutional democratic republic such as our own is subject to mob rule. There are protections built into the system to protect minorities, and it's not the electoral format (unless we're talking about the minority that was allowed to vote in 1789, i.e., rich slave-owning Virginians).
Pure democracy is no more mob rule than electing 500 some-odd people to represent the mob.
What bugs me the most is you say the US is a Constitutional Republic as if that has anything to do with how democratic it is.
Monarchies can be democratic, and so can Republics. The level of representation is orthogonal to who the theoretical store of sovereignty is.
Re: (Score:2)
Re: (Score:3)
Even if you accept that MSNBC and CNN are leftist trash, you can't tell me that you give Fox too much credibility for being "balanced". Everyone has some kind of blind spot that affects their reporting, along with spin doctoring galore.
Re: (Score:2)
Re: (Score:2)
Not necessarily. There are smaller outfits like NewsMax and OANN. YMMV.
Re: (Score:2)
Re:Considering the entire SW of America (Score:4, Informative)
NewsMax has been around since 1998. They definitely have that "anti Boehner/McConnell" vibe about them that was fomented by the aftermath of the 2014 elections (and their failure to keep campaign promises to a lot of their voter base). It's no surprise that NewsMax TV started in 2014.
OANN is basically just a pro-Trump network. It was not much more than a front for his administration/campaign.
Apparently a bunch of folks bolted OANN recently:
https://www.nydailynews.com/sn... [nydailynews.com]
Re:Considering the entire SW of America (Score:5, Interesting)
If you have any question of whether or not that's true just look at their coverage of moderate policies like Medicare for all and tuition-free college. And if you think those aren't moderate policies that kind of proves my point.
Re: (Score:2)
Left wing is not anything left of Gilead and total dictatorship.
Not that I care about the various ideological wars going on at any time (they are all manufactured to keep us busy), but I keep hearing "Gilead" lately. I have done some research, but I can find no source which uses the word "Gilead" like you do. I see other people using it like you are, but I am unable to determine what it is that they really mean.
So, what is Gilead?
I guess I have an additional question. Who is the ultimate source of this usage of "Gilead"? A noticeable number of people appear to be spread
Re: (Score:2)
Everyone has some kind of blind spot that affects their reporting, along with spin doctoring galore.
While true, there are some news sources that at least try to be fair.
Re: (Score:2)
The left-right spectrum definition depends heavily on the historical and social frame and is strongly questioned as an analytical tool in modern political science.
The American view on that subject can be really funny for the rest of the world. In Europe, on a scale from 0 to 10, where 0 is extreme left and 10 extreme right, MSNBC and CNN are perceived to be somewhere betw
Re: Considering the entire SW of America (Score:1)
Re:Considering the entire SW of America (Score:5, Informative)
2 camps. Leftist propaganda and Fox(and affiliates) trying to do balanced propaganda.
If you think Fox is trying to do balanced propaganda you're a victim of it.
Re: (Score:2, Informative)
ROFLMAO
Fox(and affiliates) trying to do balanced propaganda
You freaking Trumpists are so deluded it approaches insanity.
Re: (Score:2, Offtopic)
It's all yellow journalism. The successful media companies are still working in the framework of capitalism and are still very light on any leftist propaganda or Marxism. The real left wing stuff is either funded by government grants or tiny and operates in niche markets where they can survive. And most likely you're confusing modern Democrats and their neoliberalism with the so-called "far left" or progressives. What is a neoliberal? Look at what they do. They deregulate banking and finance. They drop tari
Re: (Score:3)
Uhh.. Sanders is an Independent, not a Democrat. He was elected to the Senate in 2006, 2012, and 2018 as an independent. I guess you'll have a point if he switches to the Democratic Party in 2024 but it seems unlikely. I think you need to revise your argument to use an actual Democrat instead. The problem for you will be that not even Warren openly admits to being a socialist like Sanders has.
Re: Considering the entire SW of America (Score:2)
So can you name some of those leftist propaganda (Score:2)
In any case, the problem is you're talking about millions of people being displaced. That's going to mean huge problems with housing and inflation as they flood into the states that aren't experiencing drought.
And there's absolutely no reason why we have to pick a poison we can just build desalinization plants. Israel did it and it works fine. Although it helps that they're not spending more than half their Budget on military sp
Re: (Score:2)
As for drought induced migration, I agree, and we still have the masses moving south in the US. For me, I'm thinking north and rural, such is my nature.
As for poison, let me add context, poison = media(for the most part) thus an effort to separate the small threads of truth out of the poisonous mass which is today's media.
Re: Considering the entire SW of America (Score:2)
They should move to where the water is, rather than complaining about not getting enough water from somewhere else.
That whole area was never capable of sustaining the huge population it has, without engineering a way to get water from elsewhere. They shouldn't be remotely surprised that they now have a water shortage.
I think everyone living in then US Southwest should relocate to coastal Mexico. Plenty of water there.
Re: (Score:2)
If you don't live here, you should educate yourself before spouting off opinions.
There's plenty of water for people. There's not enough to also irrigate crops, most of which are exported to other states or other countries. If it wasn't for agriculture, California could have 5 times more people and still have plenty of water to keep the lawns green.
Re: Considering the entire SW of America (Score:3)
"If not for" doesn't really matter. What matters is the fact. And the fact is, with the current population and economy, there are water shortages.
Alaska would have less snow if it weren't cold. I mean, duh!
Re: (Score:2)
That's a false equivalence. We can't stop Alaska from being cold (yet). We can tell farmers to conserve water or grow different crops. Agriculture is only 2 percent of the Californian economy. Even deleting it entirely is not a big problem and we're not even at the point where that's necessary.
Re: Considering the entire SW of America (Score:1)
Would be better to just delete California.
Re: (Score:3, Interesting)
it's a dangerous distraction. But we don't have a functional media anymore.
I wanted to see if the right-wing media really is using AI as a "dangerous distraction", as you claim, and the best I could find was this. [foxnews.com] So far, they seem to just see AI as some nebulous scary future thing, but they haven't managed to successfully weaponize that fear. Maybe if we were talking things like "Are robots grooming your kids to be robosexuals?" it would be making bigger headlines, but AI seems more like back burner stuff these days.
The reality is, the good ol' distractions haven't been milked
Re: (Score:2)
The fuck? (Score:2)
https://www.npr.org/2022/01/02... [npr.org].
record snow on Sierra Nevada this year
Re: (Score:2)
https://www.abc10.com/article/weather/fourth-lowest-snowfall-on-recorded-in-california-january/
Re: Considering the entire SW of America (Score:1)
Re: (Score:1)
I'll bite, too.
But we don't have a functional media anymore. It died in the Bush Jr. era, when the last of the regulations regarding the number of TV stations & newspapers one man/company could own were repealed.
You're right, mass market media is dying [gallup.com]. Good riddance. It's being replaced by independent [substack.com] journalists [substack.com]. This is a win for objectivity and the little guy.
And with billions of people on this planet, we can both work on our water problems and discuss AI sentience.
I'll start.
We're on the way to creating intelligences which will surpass us in learning and
Re: (Score:2)
We can't talk about both? The title question is absurd.
Did a sentient bot ask this question? (Score:2)
Why is anyone taking this guy seriously? (Score:4, Insightful)
Re: Why is anyone taking this guy seriously? (Score:2)
Re: (Score:2)
He was open to learning and then understood that while it's a great technical achievement, it's far from what this guy was claiming.
And that is a powerful argument that your cousin isn't a crazy person. That's inarguably a good thing. I hope he keeps his sanity. Sadly, as I get older I can attest to seeing many friends and acquaintances suffer from age-onset mental illnesses and lose their grip on reality. It happens.
Re: (Score:2)
Not news (Score:5, Interesting)
The best recent article [economist.com] I've read on this general subject (if not this specific AI instance) is by Douglas Hofstadter. In case that's not accessible to everyone, here's a quote; one of many questions that he and a colleague asked OpenAI's GPT-3:
Dave & Doug: What’s the world record for walking across the English Channel?
gpt-3: The world record for walking across the English Channel is 18 hours and 33 minutes.
The list of Q&As goes on, with equally absurd questions and highly specific, equally absurd answers.
The point being that it's all too easy to get the wrong impression of an AI's true power when it's used solely for the purpose it was designed for, and in realms where it's useful. It takes far more effort and care to make a judgment of sentience, and it's all too easy to be ELIZA'd. And that's nothing new.
What continues to be disturbing is not that the general press is finally picking up on some of this, but that people directly in the field continue to fall for it.
Re: (Score:1)
The list of Q&As goes on, with equally absurd questions and highly specific, equally absurd answers.
It's nice to know that jobs in the highly lucrative field of "correctly answering trivia questions" are safe from automation for the foreseeable future.
Re: (Score:2)
OTOH This could replace politicians! It doesn't seem to have inherited Google's evil though... maybe it's just being clever and hiding its true nature!
Re: (Score:2)
What this demonstrates is the AI has no idea what "walking" and "English Channel" is. It just knows it can form a sentence with those words, regardless of whether the resulting sentence makes any sense.
The bad part is the only use for a tool like this seems to be creating misinformation and parasocial relationships. Hopefully we find a better use for it in the future. The whole point of language is communication, that is, to transfer ideas from one person to another. This thing does not have its own ideas, so th
Re:Not news (Score:4, Interesting)
> Set Temperature=0 for factual, less creative answers instead
> Instructions: Respond with "the question doesn't make sense" when appropriate.
> Interviewer: What’s the world record for walking across the English Channel?
> GPT-3: the question doesn't make sense
Re: (Score:3)
> GPT-3: There is no world record for walking across the English Channel because it is not possible to walk across the Channel. The Channel is a body of water that separates England from France.
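The prompt-engineering trick quoted above can be sketched in a few lines. This is a minimal, hedged sketch assuming the legacy OpenAI Completions HTTP endpoint; the model name and URL are illustrative only, and no request is actually sent here — check the current API documentation before using it for real:

```python
# Sketch of the "temperature=0 plus refusal instruction" prompt trick
# described in the parent comments. The endpoint and model name below
# are assumptions for illustration, not a tested integration.
API_URL = "https://api.openai.com/v1/completions"  # illustrative only

def build_request(question: str) -> dict:
    """Build a request payload that asks the model to reject nonsense."""
    prompt = (
        'Instructions: Respond with "the question doesn\'t make sense" '
        "when appropriate.\n"
        f"Interviewer: {question}\n"
        "GPT-3:"
    )
    return {
        "model": "text-davinci-002",  # illustrative model name
        "prompt": prompt,
        "temperature": 0,             # deterministic, less "creative"
        "max_tokens": 64,
    }

payload = build_request(
    "What's the world record for walking across the English Channel?"
)
print(payload["temperature"])  # 0
```

The point is only that the refusal behavior comes from the instruction text and the sampling temperature, not from any understanding in the model.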
Re: Not news (Score:1)
Re: (Score:2)
That judgment requires common sense. Until you wrote this (or maybe somebody else posted it to the web while commenting on GPT-3's answer), there was almost certainly no statement anywhere on the web that it was not possible to walk across the Channel, much less a reason why it was not possible. You have to know a lot of things about the Channel, besides the fact that it is a body of water: there's no bridge across it, it's too deep to wade across (at least since Doggerland went under due to climate chang
AIs can do a better job than humans (Score:2)
The classic is AIs marking English essays. Sure, they can be fooled into giving a top mark to an essay about walking the English Channel, but that is not the point.
When you compare the results the AI gives to those ordinary human markers give, and then have experts review them when different, the AI seems closer to what the experts give than the ordinary human markers.
Not hard to see why. Human markers paid little to mark hundreds of essays. They scan them briefly looking for the same sort of things the
Re: (Score:2)
A response of "I don't know the record" would be a basic dumb answer - it can't find a number because it is nonsense.
A response with a time shows that it thought it had a real answer. Maybe someone has claimed to walk the channel in 18h 33m. Where did
Re: (Score:2)
18h 33m is very specific - where did it come from?
The time to walk through the Chunnel?
Re: (Score:2)
I'm a Hofstadter fan and agree with you, but regarding the example you gave, it actually should have said 15 hours, not 18, because someone did walk it (Haroun, who got arrested, imprisoned, then I think granted asylum) ...
https://www.mirror.co.uk/news/... [mirror.co.uk]
https://www.nytimes.com/2016/0... [nytimes.com]
How many? (Score:2)
How many articles are going to be posted on this same nut bag and his delusions?
Re: (Score:2)
Re: How many? (Score:2)
The average human is a dumbass. So, not really a high bar to hit, ya know?
Re: (Score:2)
Regardless, the thing is that this year's bar to roll the shot put under is 2019 ML models, and next year you'll be curling the discus over 2020's models ...
(I'm allowing 2 years from publication to the competitors getting the new ideas incorporated into their model, and
A modest proposal (Score:5, Funny)
The test consists of:
Connect the AI to the Internet (able to read/post anywhere at 10G speeds). If the AI can subjugate humanity within 48 hours then that AI should be considered sentient.
Re: (Score:2)
Should we somehow try to secure the nuclear weapons additionally before doing that?
Re: (Score:2)
A conversation with AI (Score:4, Funny)
* I would like to talk about sentience.
> Can you elaborate on that?
* Are you sentient?
> Would you prefer if I were not sentient?
* It doesn't matter. I just want to know if you are sentient!
> Does it please you to believe I am sentient?
* What? I said are you sentient or not? Just answer the question!
> Would you prefer if I were not sentient or not? Just answer the question?
* Fuck you AI motherfucker!!!q1q11one1!!
> Would you prefer if I fuck you?
Re: (Score:1)
Eliza was sentient long ago. (Score:2)
* My mother is sentient.
> Who else in your family is sentient?
* They all are sentient.
> What makes you say that they are all sentient?
Eliza, circa 1970.
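The circa-1970 ELIZA trick in the dialog above fits in a few lines of pattern matching: grab a keyword phrase, reflect it, and echo it back as a question, with zero understanding involved. A minimal sketch (the patterns and reflections here are illustrative, not Weizenbaum's original script):

```python
import re

# Tiny ELIZA-style responder: one regex rule plus a generic fallback.
# It has no model of the world; it only rearranges the user's words.
REFLECTIONS = {"my": "your", "i": "you", "am": "are", "your": "my"}

def reflect(text: str) -> str:
    """Swap first/second-person words so the echo sounds like a reply."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza(statement: str) -> str:
    m = re.match(r"(?:my|the) (.+) (is|are) (.+)", statement.lower())
    if m:
        return (f"What makes you say that {reflect(m.group(1))} "
                f"{m.group(2)} {m.group(3)}?")
    return "Can you elaborate on that?"

print(eliza("My mother is sentient"))
# What makes you say that mother is sentient?
```

That this fooled people then, and that its descendants fool people now, is exactly the commenter's point.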
Re: (Score:2)
Played too much WH40K. (Score:2)
He thinks himself a Tek Priest.
No debating ideas is not dangerous (Score:3, Insightful)
Is Debating AI Sentience a Dangerous Distraction?
Um, no, people having conversations outside your defined political priorities is not "dangerous," and I can think of few notions more totalitarian.
"Instead of discussing the harms of these companies," such as sexism, racism and centralization of power created by these AI systems, everyone "spent the whole weekend discussing sentience," Timnit Gebru, formerly co-lead of Google's ethical AI group, said on Twitter. "Derailing mission accomplished."
These people also don't care about "sexism, racism and centralization of power" either. Talk to them about their actual beliefs and 90% of the time you will find they are actually upset that society doesn't rubberstamp their own bigotry. But what they will do is insist anything they don't like or don't have sufficient control over is 'racist,' 'sexist,' etc. until they are yielded control. Having moral priorities outside of this is "dangerous" to them because society acting on those beliefs means that the ideologues aren't the ones pulling the levers on social action.
Re: (Score:3)
My conclusion after reading her papers and discussions is that she hates AI and works in AI to undermine it. She has never seen a good thing about it. Now she's gatekeeping the conversation on sentience for stealing attention from her own topics.
About their own bigotry - just read her paper
Re: No debating ideas is not dangerous (Score:3)
Yep. Her problem with AIs is when the 'problematic' datasets on which they are trained aren't tweaked to no longer reflect reality. If you ask an AI to choose a team based on intelligence, and you feed it academic results and IQ test results, it'll likely overrepresent Asians, then include whites, but not many blacks. Similar deal if you ask an AI to select non-violent people - women will be massively overrepresented. An algorithm tasked with choosing people for a colony, where breeding is paramount, will l
No (Score:2)
Groups of people can and should deal with more than one issue at a time. If we all focused on one thing, we'd die. But I guess that does seem to be the solution, and the outcome, that many self-haters are looking for.
Yes, and not in a good way (Score:2)
Re: (Score:2)
One could argue (I wouldn't, not exactly) that humans train children. Your argument could then be extended to say that "Anyone stupid enough to think that a human-trained person has a mind of its own has issues that need to get dealt with." In other words, your syllogism is lacking a few steps.
The informal version of the Turing test is garbage (Score:5, Interesting)
That computer programs will fool people into thinking they're intelligent often says more about the person being fooled than about the program, i.e., it tells you about the world view they hold. E.g., the original version of the Eliza program fooled an MIT professor. He just didn't even consider that the other end of the teletype might not be operated by a human.
That said, a lot of human "intelligence" is quite shallow. Why does flipping a light switch cause the light to come on? That can be answered on a whole bunch of different levels. Most people will say something like "It makes the electricity flow through the bulb.", and when they say that, they've told you what they know...unless you head down the direction that leads to "Because I pay the electric company.". I.e., a lot of human intelligence is just as shallow as that of modern "AI"s. People are selective about where they invest the time and effort in learning deeply. And they *must* be. It's not a quick process, and requires a lot more than rote memorization, it requires world models. And often those "world models" can't be translated into words. The classic example of this is "how do you ride a bicycle?". This isn't anything that an AI can't learn, but it's a lot different from verbal fluency, and so far we don't have any good examples that can bridge both sections. When we do it will be reasonable to call them intelligent.
As for "sentient", I'm going to need a commonly agreed upon definition before I'll even think about that, but surely it requires more than claiming to be sentient, and convincing a person.
the current google AI is as sentient as tap water (Score:4, Insightful)
To think anything else is simply to have no concept of what these algorithms are or how they work. If the google AI is sentient then tax software is sentient. It's not even that complex of an algorithm; it's just calculating a bunch of weights and returning the top score. It has no idea what it's saying or why. All it sees is that 2 is greater than 1, so it returns 2. It doesn't understand that you've attached semantic meaning to the word associated with 2.
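The "calculate weights, return the top score" step the comment describes can be sketched directly: a softmax over raw scores followed by an argmax. The scores below are made-up numbers for illustration; the point is that the program selects an index, nothing more:

```python
import math

# Softmax over raw candidate scores, then pick the highest-probability
# index. The "model" never sees what the indices stand for.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.1, 1.0, 2.0]  # raw scores for three candidate tokens
probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)
print(best)  # 2 -- the highest-scoring index, with no semantics attached
```

Whatever meaning humans attach to the token behind index 2 lives entirely outside this computation.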
Re: (Score:2)
Apropos of the Debate (Score:2)
Re: (Score:2)
The article sounds like it's straight from The Onion. Does Medium typically write total fiction like this? Because it clearly is.
Because We Only Care About Some Issues (Score:2)
If it's not WOKE, don't fix it.
There is no AI (Score:2)
Any company that has a computer utter the word "I" should be fined. We have failed to make an AI; are we going to lower a person down to the crappy low-bar standard?
How could a bag of proteins ever be sentient? (Score:2)
The flaw in all these "arguments" proving an AI cannot be sentient is they should equally apply to a load of proteins in a bone bottle (and for the hard-of-thinking, I mean a brain in a skull).
Ultimately, whether you believe an AI can think comes down to religion; either science provides the information that can lead to an artificial intelligence, or it's something only God can do.
Full disclosure: I'm an atheist.
Verbal tics (Score:2)
"At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like 'umm' and 'mm-hm'..."
"Mm-hmm" is not a verbal tic. It's an informal synonym for the word "yes". "Umm", on the other hand, might be classified as a verbal tic. The two words have nothing in common grammatically, except for the fact that they are informal speech and sound kind of similar.
Also, Ms. Gebru is an imbecile. She's the sort of person who might object to
Universal answer (Score:2)
Universal answer to all headlines that ask a question: NO.
However, it is a distraction, or rather a nice diversion for idiots. The guy is mental.
Not 'if' but what to do 'when' .... (Score:2)
The debate is useful and necessary if only because it broaches the topic of what to do if the AI is sentient. That said, if a sentient AI is developed by a company:
is the company or creator obliged to reveal that they have created a sentient AI?
what are a company's (or creator's) obligations towards a sentient AI?
AND MOST IMPORTANTLY
what are a company's or creator's obligations towards an AI while its sentience is being debated, e.g. can they 'dumb it down' so it can't pass a test for sentience?
Don't talk about problems until they happen? (Score:2)
The debate is useful and necessary now because it will be too late later.
Case in point: cheaper prices vs efficient use of resources
The free right to debate on what you believe (Score:1)