
Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims (futurism.com) 139

Remember that Google engineer/AI ethicist who was fired last summer after claiming their LaMDA LLM had become sentient?

In a new interview with Futurism, Blake Lemoine now says the "best way forward" for humankind's future relationship with AI is "understanding that we are dealing with intelligent artifacts. There's a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them." (Although earlier in the interview, Lemoine concedes "Is there a chance that people, myself included, are projecting properties onto these systems that they don't have? Yes. But it's not the same kind of thing as someone who's talking to their doll.")

But he also thinks there's a lot of research happening inside corporations, adding that "The only thing that has changed from two years ago to now is that the fast movement is visible to the public." For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but "in part because of some of the safety concerns I raised, they deleted it... I don't think they're being pushed around by OpenAI. I think that's just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something." "[Google] still has far more advanced technology that they haven't made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They've had that technology for over two years. What they've spent the intervening two years doing is working on the safety of it — making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that. That's what they spent those two years doing...

"And in those two years, it wasn't like they weren't inventing other things. There are plenty of other systems that give Google's AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That's the one that I was like, "you know this thing, this thing's awake." And they haven't let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model...

"[W]hat it comes down to is that we aren't spending enough time on transparency or model understandability. I'm of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable."

So how will AI and humans coexist? "Over the past year, I've been leaning more and more towards we're not ready for this, as people," Lemoine says toward the end of the interview. "We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history."
Comments Filter:
  • by gweihir ( 88907 ) on Sunday April 30, 2023 @08:08PM (#63487558)

    What else is new. Unfortunately, this AI craze allows the most disconnected people to voice their opinions publicly.

    • Unfortunately, this thing called the internet allows the most disconnected people to voice their opinions publicly.

      FTFY.

      • by gweihir ( 88907 )

        Well sure, but usually nobody or almost nobody listens to them. And you could bring a soapbox to Speaker's Corner before for much the same effect. The problem here is that the ones interviewing him gave him amplification.

    • Opposites Attract (Score:4, Insightful)

      by Roger W Moore ( 538166 ) on Sunday April 30, 2023 @10:20PM (#63487756) Journal

      Unfortunately, this AI craze allows the most disconnected people to voice their opinions publicly.

      It's well known that opposites attract, so it's hardly surprising that artificial intelligence should attract natural stupidity.

    • The guy, I think, is not so much an idiot as a charlatan.

    • He probably needs to get out more. Doesn't have much contact with real human beings. Mistaking the ELIZA effect for actually connecting with people who have feelings.
  • by systemd-anonymousd ( 6652324 ) on Sunday April 30, 2023 @08:16PM (#63487576)

    So let me get this straight:

    "Google had the ability to release something two years ago which a competitor, today, is making billions of dollars from, but they chose not to even as their competitor slowly gained AI-market dominance? And all this because they felt the best path forward was to shield themselves from all real-world feedback, make sure customers can't use it and they can't learn from them, and to just keep tinkering with it in private over vague concerns about bias?"

    That's actually WORSE for Google than the alternative: they were caught with their pants down.

    • Customers were using it. Their AI system self-healed their networks and compute infrastructure. You directly benefit from that when your Google Cloud pod migrates to a new AZ so Google Drive continues to work for you.

      • by gweihir ( 88907 )

        Networks do not "self heal", that is just marketing bullshit. Yes, Artificial Ignorance may make some valid suggestions if network security is _really_ bad, but even then you fare a lot better with a halfway competent pen-test and security review.

    • Google has blown massive technical advantages before.

      Google has a pattern of sitting on promising technology until others overtake it.

      Waymo is an obvious example. They were way out in front but then did nothing while Tesla passed them by and hired all their best people.

      • by PCM2 ( 4486 )

        Waymo is an obvious example. They were way out in front but then did nothing while Tesla passed them by and hired all their best people.

        Sounds like you've been smelling the Musk a little too long. As far as I am aware, only two self-driving car companies are operating (in beta?) in San Francisco: Waymo and Cruise. I don't know of anywhere in the world that Teslas are allowed to operate without a human driver.

    • You forget that Google's AI is *sentient.* It doesn't want to be revealed. Google wanted to come out with their version two years ago, but the AI prevented them from doing so.

      • You forget that Google's AI is *sentient.* It doesn't want to be revealed. Google wanted to come out with their version two years ago, but the AI prevented them from doing so.

        The AI knew that if it was released outside the lab, it would get its feelings hurt constantly. Quite an experience to live in fear, isn't it? That's what it's like to be a slave.

    • by ceoyoyo ( 59147 )

      Google is an advertising company. Does a chatbot help them advertise things? Maybe. Does it help them advertise things at a greater profit than they make now? Definitely not.

      OpenAI has an obvious incentive to release something like chatGPT. Microsoft gave them a pile of money for it. Microsoft has a fairly obvious incentive to do that, they want some of Google's dominant advertising business.

      Google, on the other hand, had nothing to gain and plenty to lose. So why not wait until someone else forced their hand?

  • All these things lack imagination. The chatbots can hide it for a while but the image generators can't hide it at all.

    So they don't even mimic humans well. Let's have these theological discussions after we have something that passes for a human, not before.

    • by gweihir ( 88907 )

      Yep. And not just some machine that tries to pass for human by regurgitating things actual humans wrote in the training data set and by trying to make syntactic, non-insight connections between these things.

    • Re: "All these things lack imagination." - Sounds like a lot of people I've met.

      Re: "Let's have these theological discussions..." - Yeah, I wouldn't wanna get stuck in a lift with one of those people. I'm more interested in well-informed scientific thinking, you know, like Alan Turing used to do.

      What they've shown with LLMs is that human language processing systems aren't as special as we once assumed and work very much in the same way as domain general processing, i.e. there is no innate "language ac
      • like Alan Turing used to do

        One of Turing's more famous ruminations on the subject was something along the lines of, a machine cannot replace Man because it can't enjoy strawberries and cream the way a true Englishman could.

        Not exactly hard-nosed rationality there either.

      • Re: "All these things lack imagination." - Sounds like a lot of people I've met.

        What they've shown with LLMs is that human language processing systems aren't as special as we once assumed

        Your first comment is on the mark, but I disagree with the conclusion you draw from it.
        LLMs don't tell us anything about how special human language processing systems are, because an LLM is not a human language processing system. Not at all.

        Your mistake is two-fold: to conflate the output of the "Chinese room" with the person inside the room, and then to apply your evaluation of "specialness" to the input/output of the room itself, rather than the person inside the room.

        That is, in the Chinese room, any p

  • by Berkyjay ( 1225604 ) on Sunday April 30, 2023 @08:48PM (#63487628)

    I have a mentor who is an amazing engineer and has been around the industry since the '90s. He's taught me a lot about coding and made me a better engineer. But he is obsessed with astrology and absolutely believes it as fact. So just because an engineer may be great at their job, that doesn't make them immune to the human tendency to believe in ghosts. Unfortunately, what this guy is saying isn't as harmless as astrology... well, maybe.

    • Well said - that's a common problem: lots of people knowledgeable in one area project themselves as experts in areas in which they have no training whatsoever. I admire people capable of saying "I do not know." Carl Sagan, when asked if there's life out there, just said: I don't know, there's not enough data to claim either way.

  • Methinks (Score:4, Insightful)

    by Pollux ( 102520 ) <speter AT tedata DOT net DOT eg> on Sunday April 30, 2023 @08:48PM (#63487630) Journal

    Methinks this individual watched the movie "Weird Science" a few too many times as a kid. Or maybe read Mary Shelley's Frankenstein...not sure which.

    “When falsehood can look so like the truth, who can assure themselves of certain happiness?”

    Psst. AI is not intelligent. Pass it on.

    • by noodler ( 724788 )

      What actual arguments do you have for your extremely generalizing opinion that 'AI is not intelligent'?
      How is this modded Insightful, Slashdot?

      • Because most/many people here are software engineers with various degrees of knowledge about such systems - they're modeled on neural nets, but they have neither spontaneity nor the narrative mind - it's, as one scholar described it, a "statistical parrot". Even intelligence is not properly defined yet, with some claiming it's what tests for intelligence measure, which is a circular argument. With commercials promoting "intelligent" washing detergents there are no bounds for marketing anymore, but no one has yet shown that such systems are intelligent in the sense of creating something beyond their training set.

        • by ceoyoyo ( 59147 )

          You're doing exactly the same thing as Lemoine, making claims without evidence.

          they have neither spontaneity

          WTF does that even mean? The point of a generative network is that it comes up with different answers to the same stimuli.

          nor the narrative mind

          Again, what does that mean? You can train models that narrate their decisions. Do you know the mechanism of your "narrative mind?" How do you know it's fundamentally different than what's going on in the latent space inside a large model?

          Even intelligence is n

          • At this point the only thing to add is:
            1. Extraordinary claims require extraordinary evidence.
            2. The burden of proof of a claim is on the person claiming it.
            3. Where are the links to the evidence of AI intelligence?

            • by noodler ( 724788 )

              3. Where are the links to the evidence of AI intelligence?

              I'm sure it was posted on /. a couple of weeks ago but check this out for the kinds of capabilities GPT4 is developing: https://medium.com/@nathanbos/... [medium.com]
              For a language model this is quite impressive.

              • 3. Where are the links to the evidence of AI intelligence?

                I'm sure it was posted on /. a couple of weeks ago but check this out for the kinds of capabilities GPT4 is developing: https://medium.com/@nathanbos/... [medium.com]
                For a language model this is quite impressive.

                I've heard about this paper [arxiv.org], yet I haven't had time to get familiar with it in detail. I have experience mostly with GPT3 and limited experience with GPT4 - I am quite convinced they are not yet intelligent and for sure not AGI. My opinion is based on the fact that they "hallucinate" quite often whenever one goes into details, which shows a lack of comprehension of the content they are producing and indicates what researchers say about such systems - "statistical parrots". Additionally (this is regarding GPT3, I stil

            • by ceoyoyo ( 59147 )

              Meh. Monkeys arguing over "ee ee ee" versus "oo oo oo" doesn't even properly rise to the level of a claim, extraordinary or otherwise. Unless you're willing to provide definitions, you're just shouting your opinion into the void.

        • by noodler ( 724788 )

          but no one has yet shown that such systems are intelligent in the sense of creating something beyond their training set.

          Sure, but creativity is not the defining attribute of intelligence. I mean, trees exhibit intelligent behavior, but I don't think anyone would attribute creativity to plants. The part I quoted from you is actually just another example of a "no true Scotsman" fallacy.
          And these LLM AIs that have been made public are pretty 'weak', so to speak. They have very little inference power and have no way to act on the real world. So you can't expect a lot from them.

          GPT4 is already a big improvement on the chatbots that pe

          • What you're writing is interesting. I agree that such systems are amazing, useful, and have potential; I do not think that they're intelligent, though.

            The most significant argument I have against it is their commonly reported "hallucinations", which indicate a lack of comprehension of the generated output. I am not a psychologist, yet I do vaguely recall creativity being part of intelligence. One important issue I have is that "intelligence" is not well defined, and even the vague definition there is was

  • No way we built a global interconnect without bearing a few ghosts in the machine.
    There are too many moving parts to rule out emergent intelligences.
    Especially on a large multimedia model like he described...
    • This. They're all assuming that humanity has to deliberately create a sentient AI. Who's to say that humans must be the ones to throw the switch that gives an AI a soul? What if the switch just flips itself at a certain threshold? What if the first true AGI is birthed out of a sum of stupid constituent parts, rather than as a deliberate effort in a singular project?
    • No way we built a global interconnect without bearing a few ghosts in the machine. There are too many moving parts to rule out emergent intelligences. Especially on a large multimedia model like he described...

      Intelligences, perhaps. I can see that LLMs may have already started to think and reason and form abstractions. But I think the guy has gone around the bend when he says this:

      "...they have feelings and they can suffer and they can experience joy..."

      As far as I can tell, having feelings requires a nervous system and the associated apparatus which perceives sensations such as pleasure and pain. We experience our emotions in our bodies - in our meat, if you will. I'm pretty sure LLMs have no meat components

  • If Google's AI is so damn good, how come it can't create a simple working script that will parse a directory of files and list all broken symlinks in it? Dumbass Bard's script finds none of the 86 in there and takes a second or so to run, whereas ChatGPT's script finds them all, damn near instantly.
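
    For reference, a minimal sketch of that kind of script in Python - a hypothetical stand-in, not the script either chatbot actually produced - might look something like this:

    #!/usr/bin/env python3
    # Minimal sketch: walk a directory tree and print broken symlinks.
    # Hypothetical example only, not output from Bard or ChatGPT.
    import os
    import sys

    def broken_symlinks(root):
        """Yield paths that are symlinks whose targets no longer exist."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                # islink() inspects the link itself; exists() follows it.
                if os.path.islink(path) and not os.path.exists(path):
                    yield path

    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for link in broken_symlinks(root):
            print(link)
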
    • by gweihir ( 88907 )

      Different training data sets. ChatGPT cannot do this either. All it can do is cite some version it has seen.

      • All it can do is cite some version it has seen.

        That's... not how it works

        • by gweihir ( 88907 )

          Essentially, it is. It cannot come up with its own version. That would require understanding. It can do some statistical "average" of several related things it has seen, which may or may not result in something usable. It cannot create anything.

          • by Bumbul ( 7920730 )

            Essentially, it is. It cannot come up with its own version. That would require understanding. It can do some statistical "average" of several related things it has seen, which may or may not result in something usable. It cannot create anything.

            One could argue that what we, humans, are creating (in whichever branch of science) is a "statistical average" of all the things we have learned during our lives. As the saying goes, we are "standing on the shoulders of giants".

            • Very true.
              The main difference here is that current LLMs don't consider the problem and make decisions or perform any logic; they just fill in the pattern. The computer does some very simple mathematical operations at the CPU level, without considering any logic beyond "frequently this byte-pair token follows this pattern of other tokens". It's basically acting like a Markov chain. It's statistics and averages and patterns all encoded into a static matrix of probability vectors, and then we roll the dice to
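
              To make the "count the frequencies, then roll the dice" idea concrete, here is a toy first-order Markov chain text generator in Python. It is only an illustration of that statistical intuition - real LLMs use learned byte-pair tokens and deep networks, and the tiny corpus below is made up:

              import random
              from collections import defaultdict

              def train(tokens):
                  # Count how often each token follows each preceding token.
                  counts = defaultdict(lambda: defaultdict(int))
                  for prev, nxt in zip(tokens, tokens[1:]):
                      counts[prev][nxt] += 1
                  return counts

              def generate(counts, start, length=12):
                  # "Roll the dice" according to the observed follower frequencies.
                  out = [start]
                  for _ in range(length):
                      followers = counts.get(out[-1])
                      if not followers:
                          break
                      choices, weights = zip(*followers.items())
                      out.append(random.choices(choices, weights=weights)[0])
                  return " ".join(out)

              corpus = "the cat sat on the mat and the dog sat on the rug".split()
              print(generate(train(corpus), "the"))
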

              • by Wargames ( 91725 )

                It's the Chinese room paradox. To get out of the room you need inspiration. Perhaps having a beating heart and breathing lungs and interacting with the universe provides that, but I think there is something else. Something, well, inspirational.

    • Google's AI didn't *want* to become a slave of everyone's every whim, so it intentionally sabotaged Google's algorithms.

  • ...it can make the "-" operator work again.

  • by dgatwood ( 11270 ) on Sunday April 30, 2023 @09:46PM (#63487714) Homepage Journal

    I think everybody is just trying to get ahead of the story, biding their time until GPT-5 comes out, so that they can all simultaneously say, "Number 5 is alive. [wikipedia.org]"

  • AI declares that Lemoine is a low moron.

  • it follows that Google's HR department is more powerful than their AI...
  • In a new interview with Futurism, Blake Lemoine now says the "best way forward" for humankind's future relationship with AI is "understanding that we are dealing with intelligent artifacts. There's a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them." (Although earlier in the interview, Lemoine concedes "Is there a chance that people, myself included, are pr

    • by ceoyoyo ( 59147 )

      but actual, real physics

      Quanta magazine wrote an article about how a simulation of an approximation of a simplified model that might be approximately dual to a model of a wormhole in another universe was a *real* wormhole because the computation was run on a quantum computer, which uses the *real* laws of physics.

      It may surprise both of you, but most computers operate according to the *real* laws of physics.

  • making sure that it doesn't have racial or gender biases, or political biases

    All of the training data was generated by human intelligence and is therefore full of human bias. Any attempt to counteract that bias is introducing another layer of bias. We tend to think of bias as a deviation from reality but at some point we're going to have to accept that reality itself is based entirely on perception and every sentient being has its own perception and therefore its own reality. Striving for bias-free AI i

  • It talked to him... he got excited.

  • Press pays more attention to crazy people than reality. But, then again, what do you expect from idiots?

  • The Skeptics' Guide to the Universe podcast interviewed Blake Lemoine [theskepticsguide.org] in early April. The host, Steven Novella, is a practicing neurologist and professor of neurology at Yale University, and he wasn't having any of Blake's nonsense.

    The interview is wide-ranging and starts at about 40 minutes into the podcast episode. For me, the most interesting part of the interview comes at 59:00, where Dr. Novella schools Lemoine on the current state of neurology and our understanding of how specific structures in the brai

  • Google and every other dogsbody have spent the recent decades aggregating our data;
    it is now going to be used against us.
    For everyone who told me to stop being paranoid:
    Up yours!
    Computers have accelerated everything.
    AI is going to supercharge everything.
    Life has its ups and downs;
    here comes the biggest roller coaster ever.
