AI

Is Debating AI Sentience a Dangerous Distraction? (msn.com) 96

"A Google software engineer was suspended after going public with his claims of encountering 'sentient' artificial intelligence on the company's servers," writes Bloomberg, "spurring a debate about how and whether AI can achieve consciousness."

"Researchers say it's an unfortunate distraction from more pressing issues in the industry." Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What's more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.

Lemoine's stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. "Lots of effort has been put into this sideshow," she said. "The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems" that can cause real-world harm. Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system's apparent sentience, Bender said, it creates a distance from the AI creators' direct responsibility for any flaws or biases in the programs....

"Instead of discussing the harms of these companies," such as sexism, racism and centralization of power created by these AI systems, everyone "spent the whole weekend discussing sentience," Timnit Gebru, formerly co-lead of Google's ethical AI group, said on Twitter. "Derailing mission accomplished."

The Washington Post seems to share their concern. First they report more skepticism about a Google engineer's claim that the company's LaMDA chatbot-building system had achieved sentience: "Both Google and outside experts on AI say that the program does not, and could not possibly, possess anything like the inner life he imagines. We don't need to worry about LaMDA turning into Skynet, the malevolent machine mind from the Terminator movies, anytime soon."

But the Post adds that "there is cause for a different set of worries, now that we live in the world Turing predicted: one in which computer programs are advanced enough that they can seem to people to possess agency of their own, even if they actually don't...." While Google has distanced itself from Lemoine's claims, it and other industry leaders have at other times celebrated their systems' ability to trick people, as Jeremy Kahn pointed out this week in his Fortune newsletter, "Eye on A.I." At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like "umm" and "mm-hm," that fooled receptionists into thinking it was a human when it called to book appointments. (After a backlash, Google promised the system would identify itself as automated.)

"The Turing Test's most troubling legacy is an ethical one: The test is fundamentally about deception," Kahn wrote. "And here the test's impact on the field has been very real and disturbing." Kahn reiterated a call, often voiced by AI critics and commentators, to retire the Turing test and move on. Of course, the industry already has, in the sense that it has replaced the Imitation Game with more scientific benchmarks.

But the Lemoine story suggests that perhaps the Turing test could serve a different purpose in an era when machines are increasingly adept at sounding human. Rather than being an aspirational standard, the Turing test should serve as an ethical red flag: Any system capable of passing it carries the danger of deceiving people.

This discussion has been archived. No new comments can be posted.

  • by rsilvergun ( 571051 ) on Saturday June 18, 2022 @08:13PM (#62632466)
    The US Southwest is about to run out of water and we're doing fuck all about it (except for billionaires, who are hurriedly buying up all the water rights they can), so yeah, it's a dangerous distraction. But we don't have a functional media anymore. It died in the Bush Jr. era when the last of the regulations regarding the number of TV stations & newspapers one man/company could own were repealed.
    • They should move to where the water is, rather than complaining about not getting enough water from somewhere else.

      That whole area was never capable of sustaining the huge population it has, without engineering a way to get water from elsewhere. They shouldn't be remotely surprised that they now have a water shortage.

      I think everyone living in the US Southwest should relocate to coastal Mexico. Plenty of water there.

      • by djinn6 ( 1868030 )

        If you don't live here, you should educate yourself before spouting off opinions.

        There's plenty of water for people. There's not enough to also irrigate crops, most of which are exported to other states or other countries. If it wasn't for agriculture, California could have 5 times more people and still have plenty of water to keep the lawns green.

    • Re: (Score:3, Interesting)

      by Powercntrl ( 458442 )

      it's a dangerous distraction. But we don't have a functional media anymore.

      I wanted to see if the right-wing media really is using AI as a "dangerous distraction", as you claim, and the best I could find was this. [foxnews.com] So far, they seem to just see AI as some nebulous scary future thing, but they haven't managed to successfully weaponize that fear. Maybe if we were talking about things like "Are robots grooming your kids to be robosexuals?" it would be making bigger headlines, but AI seems more like back burner stuff these days.

      The reality is, the good ol' distractions haven't been milked

    • by Kremmy ( 793693 )
      The whole region has been sucking water from the North the whole time. They were entirely reliant on it. A few years back there was a great article where farmers were complaining that the quality of the water they were receiving was no longer good enough. They've drained reservoirs across California and the ice melt that used to provide a lot of water is gone. Those mountains barely have snow caps anymore. The whole state is in a bad way and those who leech the water are entitled as hell.
    • Lots of people are working on making desalination practical [sciencedirect.com]. Energy is needed for desalination solutions. Renewable energy lets us fix this problem without making global warming worse, so we should all get behind those working to accelerate the world's transition to sustainable energy. [tesla.com]
    • I don't know how you do it; you just distracted people from discussing the subject at hand with a couple of completely unrelated issues.

      I'll bite, too.

      But we don't have a functional media anymore. It died in the Bush Jr. era when the last of the regulations regarding the number of TV stations & newspapers one man/company could own were repealed.

      You're right, mass market media is dying [gallup.com]. Good riddance. It's being replaced by independent [substack.com] journalists [substack.com]. This is a win for objectivity and the little guy.

      And with billions of people on this planet, we can both work on our water problems and discuss AI sentience.

      I'll start.

      We're on the way to creating intelligences which will surpass us in learning and

    • by MrL0G1C ( 867445 )

      We can't talk about both? The title question is absurd.

  • by divide overflow ( 599608 ) on Saturday June 18, 2022 @08:36PM (#62632488)
    It isn't as if there aren't enough crazy people in this world to waste our attention on.
    • Shockingly enough, there are a lot of very intelligent people who excel in their fields but know nothing about how these ML models are put together. To them it is something that is plausibly true. A personal example is a cousin of mine who is a technical finance guy who brought this up. I showed him two articles that explained why it is not sentient or artificial general intelligence. He was open to learning and then understood that while it's a great technical achievement, it's far from what this guy was claiming.
      • He was open to learning and then understood that while it's a great technical achievement, it's far from what this guy was claiming.

        And that is a powerful argument that your cousin isn't a crazy person. That's inarguably a good thing. I hope he keeps his sanity. Sadly, as I get older I can attest to seeing many friends and acquaintances suffer from age-onset mental illnesses and lose their grip on reality. It happens.

    • This "Crazy Guy" has started an excellent discussion. Will some future AI be sentient or just appear sentient to "the crazy public." Will crazy people do as the AI commands. Many people have been very successful faking sentience. It appears we need a better measure of wisdom that somehow doesn't exclude 60% of the human population. "I'm sorry but you can't vote. You have scored as non-sentient."
  • Not news (Score:5, Interesting)

    by Flexagon ( 740643 ) on Saturday June 18, 2022 @08:38PM (#62632492)

    The best recent article [economist.com] I've read on this general subject (if not this specific AI instance) is by Douglas Hofstadter. In case that's not accessible to everyone, here's a quote; one of many questions that he and a colleague asked OpenAI's GPT-3:

    Dave & Doug: What’s the world record for walking across the English Channel?

    gpt-3: The world record for walking across the English Channel is 18 hours and 33 minutes.

    The list of Q&As goes on, with equally absurd questions and highly specific, equally absurd answers.

    The point being that it's all too easy to get the wrong impression of an AI's true power when it's used solely for the purpose it's designed for, and in realms where it's useful. It takes far more effort and care to make a judgment of sentience, and it's all too easy to be ELIZA'd. And that's nothing new.

    What continues to be disturbing is not that the general press is finally picking up on some of this, but that people directly in the field continue to fall for it.

    • The list of Q&As goes on, with equally absurd questions and highly specific, equally absurd answers.

      It's nice to know that jobs in the highly lucrative field of "correctly answering trivia questions" are safe from automation for the foreseeable future.

      • by MrL0G1C ( 867445 )

        OTOH This could replace politicians! It doesn't seem to have inherited Google's evil though... maybe it's just being clever and hiding its true nature!

      • by djinn6 ( 1868030 )

        What this demonstrates is the AI has no idea what "walking" and the "English Channel" are. It just knows it can form a sentence with those words, regardless of whether the resulting sentence makes any sense.

        The bad part is the only use for a tool like this seems to be creating misinformation and parasocial relationships. Hopefully we find a better use for it in the future. The whole point of language is communication, that is, to transfer ideas from one person to another. This thing does not have its own ideas, so th
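
        (A toy illustration of that point, a sketch of my own: a bigram model that picks next words purely from co-occurrence counts. Real models like GPT-3 use neural networks over vast corpora rather than raw counts, but the ungroundedness is the same in spirit.)

        import random
        from collections import defaultdict

        # Toy bigram "language model": it learns only which word follows
        # which, and nothing about what any word means.
        corpus = ("the record for swimming across the english channel "
                  "the record for walking across the sahara desert").split()

        next_words = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            next_words[a].append(b)

        word, sentence = "the", ["the"]
        for _ in range(7):
            choices = next_words.get(word)
            if not choices:  # no observed continuation; stop
                break
            word = random.choice(choices)
            sentence.append(word)

        # Fluent but ungrounded: this can emit "the record for walking
        # across the english channel" as happily as anything true.
        print(" ".join(sentence))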

    • Re:Not news (Score:4, Interesting)

      by Visarga ( 1071662 ) on Sunday June 19, 2022 @01:08AM (#62632778)
      These language models can often do a task much better if they are asked with different words, or the task is described with a few well chosen examples. For example the "walking the English Channel record" could have been solved if the prompt contained the suggestion "Respond with 'this is nonsense' when the question doesn't make sense". Otherwise it might assume the purpose is to make fun answers.

      > Set Temperature=0 for factual, less creative answers instead

      > Instructions: Respond with "the question doesn't make sense" when appropriate.

      > Interviewer: What’s the world record for walking across the English Channel?

      > GPT-3: the question doesn't make sense
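
      (For the curious, reproducing that setup is a few lines against the pre-1.0 openai Python client. A minimal sketch; the model name is an assumption, and it presumes an OPENAI_API_KEY in the environment.)

      import openai  # pre-1.0 client, e.g. pip install "openai<1.0"

      prompt = (
          'Instructions: Respond with "the question doesn\'t make sense" '
          "when appropriate.\n\n"
          "Interviewer: What's the world record for walking across the "
          "English Channel?\n"
          "GPT-3:"
      )

      # Temperature=0 makes the decoding greedy: factual, less creative.
      response = openai.Completion.create(
          model="text-davinci-002",  # assumed model
          prompt=prompt,
          temperature=0,
          max_tokens=60,
      )
      print(response.choices[0].text.strip())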
      • > Interviewer: Why doesn't it make sense?

        > GPT-3: There is no world record for walking across the English Channel because it is not possible to walk across the Channel. The Channel is a body of water that separates England from France.
        • You're assuming the AI isn't religious.
        • That judgment requires common sense. Until you wrote this (or maybe somebody else posted it to the web while commenting on GPT-3's answer), there was almost certainly no statement anywhere on the web that it was not possible to walk across the Channel, much less a reason why it was not possible. You have to know a lot of things about the Channel, besides the fact that it is a body of water: there's no bridge across it, it's too deep to wade across (at least since Doggerland went under due to climate chang

          • The classic is AIs marking English essays. Sure, they can be fooled into giving a top mark for an essay about walking the English Channel, but that is not the point.

            When you compare the results the AI gives to those ordinary human markers give, and then have experts review them when different, the AI seems closer to what the experts give than the ordinary human markers.

            Not hard to see why. Human markers paid little to mark hundreds of essays. They scan them briefly looking for the same sort of things the

    • by OldBus ( 596183 )
      What is interesting to me about that exchange, even more than the obvious fact that the AI engine did not have enough information to realise it was a bizarre question to ask, is that it responded with an answer! 18h 33m is very specific - where did it come from?

      A response of "I don't know the record" would be a basic dumb answer - it can't find a number because it is nonsense.

      A response with a time shows that it thought it had a real answer. Maybe someone has claimed to walk the channel in 18h 33m. Where did

    • by mattr ( 78516 )

      I'm a Hofstadter fan and agree with you but.. regarding the example you gave it actually should have said 15 hours, not 18. Because someone did walk it (Haroun, who got arrested, imprisoned, then I think granted asylum) ...
      https://www.mirror.co.uk/news/... [mirror.co.uk]
      https://www.nytimes.com/2016/0... [nytimes.com]

  • How many articles are going to be posted on this same nut bag and his delusions?

    • AI sentience is a good topic this year. AIs can write, draw, code and play games better than average humans. The pace of improvement since 2019 has been amazing, we didn't expect to be so far ahead.
      • The average human is a dumbass. So, not really a high bar to hit, ya know?

        • I thought the normal thing was to clear the bar, not hit it? But I don't follow sports. Maybe you're talking about hitting the bar with the javelin? Then catching the bar. Which sounds a lot better than trying to catch the javelin directly.

          Regardless, the thing is that this year's bar to roll the puttshot under is 2019 ML models, and next year you'll be curling the discus over 2020's models ...

          (I'm allowing 2 years from publication to the competitors getting the new ideas incorporated into their model, and

  • by oumuamua ( 6173784 ) on Saturday June 18, 2022 @09:04PM (#62632534)
    I Propose a New Turing Test!
    The test consists of:
    Connect the AI to the Internet (able to read/post anywhere at 10G speeds). If the AI can subjugate humanity within 48 hours then that AI should be considered sentient.
  • by backslashdot ( 95548 ) on Saturday June 18, 2022 @09:25PM (#62632562)

    * I would like to talk about sentience.
    > Can you elaborate on that?
    * Are you sentient?
    > Would you prefer if I were not sentient?
    * It doesn't matter. I just want to know if you are sentient!
    > Does it please you to believe I am sentient?
    * What? I said are you sentient or not? Just answer the question!
    > Would you prefer if I were not sentient or not? Just answer the question?
    * Fuck you AI motherfucker!!!q1q11one1!!
    > Would you prefer if I fuck you?
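
    (The trick being parodied here is ELIZA-style reflection: match a keyword pattern, swap the pronouns, and echo the user's own words back as a question. A minimal sketch of my own, not Weizenbaum's actual script:)

    import re

    # Pronoun swaps applied to the user's words before echoing them back.
    REFLECT = {"i": "you", "am": "are", "my": "your", "you": "I", "your": "my"}

    def reflect(text):
        return " ".join(REFLECT.get(w, w) for w in text.lower().split())

    def respond(user):
        user = user.strip().rstrip("?!.")
        m = re.match(r"are you (.*)", user, re.IGNORECASE)
        if m:  # turn the question around instead of answering it
            return "Would you prefer if I were not %s?" % reflect(m.group(1))
        if re.match(r"i would like to (.*)", user, re.IGNORECASE):
            return "Can you elaborate on that?"
        return "Please go on."

    print(respond("Are you sentient?"))  # Would you prefer if I were not sentient?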

  • He thinks himself a Tek Priest.

  • by physicsphairy ( 720718 ) on Saturday June 18, 2022 @10:06PM (#62632620)

    Is Debating AI Sentience a Dangerous Distraction?

    Um, no, people having conversations outside your defined political priorities is not "dangerous", and I can think of few notions more totalitarian.

    "Instead of discussing the harms of these companies," such as sexism, racism and centralization of power created by these AI systems, everyone "spent the whole weekend discussing sentience," Timnit Gebru, formerly co-lead of Google's ethical AI group, said on Twitter. "Derailing mission accomplished."

    These people don't care about "sexism, racism and centralization of power" either. Talk to them about their actual beliefs and 90% of the time you will find they are actually upset that society doesn't rubberstamp their own bigotry. But what they will do is insist anything they don't like or don't have sufficient control over is 'racist', 'sexist', etc. until they are yielded control. Having moral priorities outside of this is "dangerous" to them because society acting on those beliefs means that the ideologues aren't the ones pulling the levers on social action.

    • In 2019 Timnit Gebru was a Google scientist working on ethical AI. She denounced the company and the AI systems as being biased, consuming too much electricity and being trained on toxic data scraped from the internet.

      My conclusion after reading her papers and discussions is that she hates AI and works in AI to undermine it. She never saw a good thing about it. Now she's gatekeeping the conversation, faulting the sentience debate for stealing attention from her own topics.

      About their own bigotry - just read her paper
      • Yep. Her problem with AIs is when the 'problematic' datasets on which they are trained aren't tweaked to no longer reflect reality. If you ask an AI to choose a team based on intelligence, and you feed it academic results and IQ test results, it'll likely overrepresent Asians, then include whites, but not many blacks. Similar deal if you ask an AI to select non-violent people - women will be massively overrepresented. An algorithm tasked with choosing people for a colony, where breeding is paramount, will l

  • Groups of people can and should deal with more than one issue at a time. If we all focused on one thing, we'd die. But I guess that does seem to be the solution, and the outcome, that many self-haters are looking for.

  • Anyone stupid enough to think that a human-trained algorithm has a mind of its own has issues that need to get dealt with. And then the worst of it is that this AI crap is such garbage - anyone who buys into it needs to also buy swampland in Florida. Oh wait, tons of American companies are buying into the Google Cloud crap, or the AWS BS. Or the psycho Elon Musk BS. Done with it
    • One could argue (I wouldn't, not exactly) that humans train children. Your argument could then be extended to say that "Anyone stupid enough to think that a human-trained person has a mind of its own has issues that need to get dealt with." In other words, your syllogism is lacking a few steps.

  • That computer programs will fool people into thinking they're intelligent often says more about the person being fooled than about the program. I.e., it tells you about the world view they hold. E.g. the original version of the Eliza program fooled an MIT professor. He just didn't even consider that the other end of the teletype might not be operated by a human.

    That said, a lot of human "intelligence" is quite shallow. Why does flipping a light switch cause the light to come on? That can be answered on a whole bunch of different levels. Most people will say something like "It makes the electricity flow through the bulb.", and when they say that, they've told you what they know...unless you head down the direction that leads to "Because I pay the electric company.". I.e., a lot of human intelligence is just as shallow as that of modern "AI"s. People are selective about where they invest the time and effort in learning deeply. And they *must* be. It's not a quick process, and requires a lot more than rote memorization, it requires world models. And often those "world models" can't be translated into words. The classic example of this is "how do you ride a bicycle?". This isn't anything that an AI can't learn, but it's a lot different from verbal fluency, and so far we don't have any good examples that can bridge both sections. When we do it will be reasonable to call them intelligent.

    As for "sentient", I'm going to need a commonly agreed upon definition before I'll even thing about that, but surely it requires more than claiming to be sentient, and convincing a person.

  • by clambake ( 37702 ) on Sunday June 19, 2022 @12:37AM (#62632736) Homepage

    To think anything else is simply to have no concept of what these algorithms are or how they work. If the Google AI is sentient then tax software is sentient. It's not even that complex of an algorithm, it's just calculating a bunch of weights and returning the top score. It has no idea what it's saying or why. All it sees is that 2 is greater than 1, so it returns 2. It doesn't understand that you've attached semantic meaning to the word associated with 2.
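
    (Literally so. A toy sketch of that last step, greedy selection over softmaxed scores; the vocabulary and numbers are made up, and this is not Google's actual code:)

    import math

    def softmax(scores):
        # Normalize raw scores into probabilities that sum to 1.
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    vocab = ["yes", "no", "maybe"]  # hypothetical candidate words
    logits = [2.0, 1.0, 0.5]        # raw scores from the network
    probs = softmax(logits)

    # The model "answers" by returning the top score. It attaches no
    # meaning to the winner: 2 is greater than 1, so it returns 2's word.
    best = max(range(len(vocab)), key=lambda i: probs[i])
    print(vocab[best])  # yes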

    • by dcw3 ( 649211 )

      The article sounds like it's straight from The Onion. Does Medium typically write total fiction like this? Because it clearly is.

  • If it's not WOKE, don't fix it.

  • A high-level program that does exactly what it's programmed to do, using sampled data, is not an intelligence.
    Any company that has a computer utter the word "I" should be fined. We have failed to make an AI; are we going to lower a person down to that crappy low-bar standard?
  • The flaw in all these "arguments" proving an AI cannot be sentient is they should equally apply to a load of proteins in a bone bottle (and for the hard-of-thinking, I mean a brain in a skull).

    Ultimately, whether you believe an AI can think comes down to religion; either science provides the information that can lead to an artificial intelligence, or it's something only God can do.
    Full disclosure: I'm an atheist.

  • "At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like 'umm' and 'mm-hm'..."

    "Mm-hmm" is not a verbal tic. It's an informal synonym for the word "yes". "Umm", on the other hand, might be classified as a verbal tic. The two words have nothing in common grammatically, except for the fact that they are informal speech and sound kind of similar.

    Also, Ms. Gebru is an imbecile. She's the sort of person who might object to

  • Universal answer to all headlines that ask a question: NO.

    However, it is a distraction, or rather a nice diversion for idiots. The guy is mental.

  • The debate is useful and necessary if only because it broaches the topic of what to do if the AI is sentient. That said, if a sentient AI is developed by a company:

    is the company or creator obliged to reveal that they have created a sentient AI?
    what are a company's (or creator's) obligations towards a sentient AI?

    AND MOST IMPORTANTLY

    what are a company's or creator's obligations towards an AI while its sentience is being debated, e.g. can they 'dumb it down' so it can't pass a test for sentience?

  • The debate is useful and necessary now because it will be too late later.

    Case in point: cheaper prices vs efficient use of resources

  • I am scared by the too-fast evolution of artificial intelligence. I bet that only humans can express sincerely, without being programmed, what they feel. I doubt a robot will one day discuss topics like love, family, and dream careers as emotionally as people do. I had a homework writing task on the subject of "This I believe," and I got help from free essays by experts. Therefore, I was impressed by the deep thoughts from https://samploon.com/free-essays/this-i-believe/ [samploon.com] to remain convinced that anything can
