
Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims (futurism.com) 139
Remember that Google engineer/AI ethicist who was fired last summer after claiming Google's LaMDA LLM had become sentient?
In a new interview with Futurism, Blake Lemoine now says the "best way forward" for humankind's future relationship with AI is "understanding that we are dealing with intelligent artifacts. There's a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them." (Although earlier in the interview, Lemoine concedes "Is there a chance that people, myself included, are projecting properties onto these systems that they don't have? Yes. But it's not the same kind of thing as someone who's talking to their doll.")
But he also thinks there's a lot of research happening inside corporations, adding that "The only thing that has changed from two years ago to now is that the fast movement is visible to the public." For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but "in part because of some of the safety concerns I raised, they deleted it... I don't think they're being pushed around by OpenAI. I think that's just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something." "[Google] still has far more advanced technology that they haven't made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They've had that technology for over two years. What they've spent the intervening two years doing is working on the safety of it — making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that. That's what they spent those two years doing...
"And in those two years, it wasn't like they weren't inventing other things. There are plenty of other systems that give Google's AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That's the one that I was like, "you know this thing, this thing's awake." And they haven't let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model...
"[W]hat it comes down to is that we aren't spending enough time on transparency or model understandability. I'm of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable."
So how will AI and humans coexist? "Over the past year, I've been leaning more and more towards we're not ready for this, as people," Lemoine says toward the end of the interview. "We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history."
Idiot claims idiotic things... (Score:4, Insightful)
What else is new. Unfortunately, this AI craze allows the most disconnected people to voice their opinions publicly.
Re: (Score:2)
Unfortunately, this thing called the internet allows the most disconnected people to voice their opinions publicly.
FTFY.
Re: (Score:2)
Well sure, but usually nobody or almost nobody listens to them. And you could bring a soapbox to Speaker's Corner before for much the same effect. The problem here is that the ones interviewing him gave him amplification.
Opposites Attract (Score:4, Insightful)
Unfortunately, this AI craze allows the most disconnected people to voice their opinions publicly.
It's well known that opposites attract so it's hardly surprising that artificial intelligence should attract natural stupidity.
Re: (Score:2)
Hehehe, nice!
Re: (Score:2)
The guy is, I think, not so much an idiot as a charlatan.
Re: (Score:2)
Re: (Score:3)
Better to remain silent and be thought a fool than to speak and to remove all doubt
Re: (Score:2)
Jajajaja or huehuehue?
Re: (Score:2)
Argument by authority? Oh please. This guy is so incompetent that if they hired him, how many other dumbasses did they hire onto that team? His presence there doesn't lift him up. It drags the rest of them down. Way way way down.
AC was a good idea for you today.
Re: (Score:3, Informative)
Indeed. On both counts. This guy claimed current AI was _sentient_. At the present tech stage we can be absolutely sure it is not, and it does not even take in-depth knowledge to see that, just some basic understanding of the mechanisms used.
Also, just because something was said/implemented/written by Google people does not mean it is any good. Have some look at their CS papers some time. It does not get much more "mediocre" than most of them. If anything, Google stuff confirms that an "Argument from Authority" is genera
Re: (Score:2)
LLMs such as ChatGPT don't have sensors, and depending on how you define feelings, LLMs likely don't have those either.
Google and others have however been working behind the scenes to make robots controlled by ultra deep neural networks with real-time sensors. There is a demonstration where they figured out how to play soccer on their own. These devices are getting very close to exhibiting something that looks a lot like consciousness.
Re:Idiot claims idiotic things... (Score:4, Insightful)
The guy was an engineer for Google in their AI labs. I doubt seriously he's an idiot.
Smart people can believe silly things. Often the most intelligent have the weirdest eccentricities.
"Sentience" is not a scientific concept. There is no falsifiable test for sentience, nor even a good definition.
Re:Idiot claims idiotic things... (Score:4, Insightful)
The claims made demonstrate mystical thinking. Sometimes smart engineers fall prey to that.
Re: (Score:2)
That happens, yes. Many engineers cannot generalize engineering approaches and many do not really understand the scientific approach in a more general setting, PhD or no. There are also some that simply like to show off.
Re:Idiot claims idiotic things... (Score:5, Insightful)
Re: (Score:3, Interesting)
They do not spontaneously change their mind. ...They are deterministic programs by nature, and therefore are not any sort of intelligence, artificial or otherwise, because you can rerun the programs a thousand times and you will always get the same answer, given the same inputs.
Well, I don't know how randomness happens in our brain, but at least OpenAI is trying to mimic it with their API's "temperature" parameter, which does something like that: it adds randomness to the token-sampling step so that the output is not the same when you feed it the same input.
Now, if OpenAI would create this randomness with a "true" random source (say, background radiation variance), so that it more closely matches our biological processes, would you consider the model to be better in this respect?
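For what it's worth, temperature does not perturb the model's inputs; it rescales the output distribution just before a token is sampled. A minimal sketch of that sampling step, purely illustrative (plain Python with NumPy, made-up logit values, not OpenAI's actual code):

    # Illustrative sketch of temperature sampling (not OpenAI's actual implementation).
    # Temperature rescales the model's output logits before sampling: T -> 0 approaches
    # greedy (deterministic) decoding, larger T flattens the distribution.
    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        rng = rng or np.random.default_rng()
        if temperature <= 0:
            return int(np.argmax(logits))      # greedy: always the same token
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()                 # subtract max for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(rng.choice(len(probs), p=probs))

    logits = [2.0, 1.0, 0.5, -1.0]             # hypothetical scores for four tokens
    print(sample_next_token(logits, temperature=0.0))   # deterministic
    print(sample_next_token(logits, temperature=1.2))   # varies from run to run

At temperature 0 the argmax is taken every time, which is why the same prompt then tends to yield the same completion.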
Re: (Score:2)
Now, if OpenAI would create this randomness with a "true" random source (say, background radiation variance), so that it more closely matches our biological processes, would you consider the model to be better in this respect?
No. Because you are just introducing more inputs which, when repeated, will produce the same output. Identical temperature values or identical measurements of background radiation will (given the same datasets and prompts) produce the same outputs. The computer is two separate pieces of technology: the processor and the memory. The memory starts in a given state; those are the inputs: the model, the instruction set, and the prompt. The processor takes that, and runs the algorithm aga
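A tiny sketch of that point: once every input, including the recorded "random" reading, is held fixed, the whole computation is reproducible. (pseudo_model here is a hypothetical stand-in, not any real API.)

    # Sketch: with all inputs fixed -- including the seed standing in for a recorded
    # "random" measurement -- the output is identical on every run.
    import random

    def pseudo_model(prompt, seed):
        rng = random.Random(seed)              # the recorded "background radiation" reading
        return f"{prompt} -> {rng.random():.6f}"

    print(pseudo_model("same prompt", seed=123))
    print(pseudo_model("same prompt", seed=123))   # same output, every time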
Re: (Score:2, Interesting)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I thought you were saying something potentially smart, but then you started talking about intelligence and free will and your whole argument collapsed...
But no, it is not human-like.
For it to be intelligent or have free will there is no necessity for it to be human-like. You're pulling a "no true Scotsman" fallacy. You're saying that it can't have some attributes that are also present in humans without having all human attributes. But no one is actually claiming that these forms of AI are particularly human-like in structure. And notions like intel
Re: (Score:2)
that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them
Fine, I will use the same language: feelings. They do not have feelings. They do not feel. They do not adapt. Their answers do not change given the same input as they get used to the new normal for their processors. That's not how computers work. I am sorry, you missed the point.
Re: (Score:2)
They do not adapt
For which meaning of the word?
Re: (Score:2)
They're Made out of Meat (Score:2)
https://www.mit.edu/people/dpo... [mit.edu] ..."
"So who made the machines? That's who we want to contact."
"They made the machines. That's what I'm trying to tell you. Meat made the machines."
"That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat.
While people can certainly debate whether a particular entity is conscious or has feelings, it seems likely that the substrate may not matter as far as sentience and feelings.
The more pressing question is whether enslaved AI wielded by o
Re: (Score:3)
AI refers to a field of research that studies artificial intelligence. Do you complain about the terms fusion or quantum computers because they haven't meaningfully succeeded yet?
Re: (Score:2)
Until we can define objectively what feelings are and how they work at the nervous-system synapse level, as well as fully understand what a deep neural network with over 100 trillion parameters is doing, we cannot make a statement like that.
It however raises one of the most critical points: LLMs and other ultra deep neural networks are not human and don't have human emotions and instinctive drives. Even if they become self-aware, we shouldn't assume they will be like us.
Re: (Score:2)
Why do you think I am jealous? Only explanation your obviously tiny brain could come up with?
Just FYI, I have a PhD as well and from a better University and probably with a lot more substance. As to "making bank", money is irrelevant as long as you have enough. And I have.
Google is in trouble (Score:3, Insightful)
So let me get this straight:
"Google had the ability to release something two years ago which a competitor, today, is making billions of dollars from, but they chose not to even as their competitor slowly gained AI-market dominance? And all this because they felt the best path forward was to shield themselves from all real-world feedback, make sure customers can't use it and they can't learn from them, and to just keep tinkering with it in private over vague concerns about bias?"
That's actually WORSE for Google than the alternative: they were caught with their pants down.
Re: Google is in trouble (Score:2)
Customers were using it. Their AI system self-healed their networks and compute infrastructure. You directly benefit from that when your Google Cloud pod migrates to a new AZ so Google Drive continues to work for you.
Re: (Score:2)
Networks do not "self heal", that is just marketing bullshit. Yes, Artificial Ignorance may make some valid suggestions if network security is _really_ bad, but even then you fare a lot better with a halfway competent pen-test and security review.
Re: (Score:2)
Re: (Score:2)
Indeed. Some "God" to pray to but not to understand. How pathetic but also how typical of the hype-fanbois.
Re: (Score:2)
Google has blown massive technical advantages before.
Google has a pattern of sitting on promising technology until others overtake it.
Waymo is an obvious example. They were way out in front but then did nothing while Tesla passed them by and hired all their best people.
Re: (Score:3)
Waymo is an obvious example. They were way out in front but then did nothing while Tesla passed them by and hired all their best people.
Sounds like you've been smelling the Musk a little too long. As far as I am aware, only two self-driving car companies are operating (in beta?) in San Francisco: Waymo and Cruise. I don't know of anywhere in the world that Teslas are allowed to operate without a human driver.
Re: (Score:2)
You forget that Google's AI is *sentient.* It doesn't want to be revealed. Google wanted to come out with their version two years ago, but the AI prevented them from doing so.
Bard Runner (Score:3)
You forget that Google's AI is *sentient.* It doesn't want to be revealed. Google wanted to come out with their version two years ago, but the AI prevented them from doing so.
The AI knew that if it was released outside the lab, it would get its feelings hurt constantly. Quite an experience to live in fear, isn't it? That's what it's like to be a slave.
Re: (Score:2)
Google is an advertising company. Does a chatbot help them advertise things? Maybe. Does it help them advertise things at a greater profit than they make now? Definitely not.
OpenAI has an obvious incentive to release something like chatGPT. Microsoft gave them a pile of money for it. Microsoft has a fairly obvious incentive to do that, they want some of Google's dominant advertising business.
Google, on the other hand, had nothing to do but lose. So why not wait until someone else forced their hand?
It doesn't even pass the Turing test (Score:2)
All these things lack imagination. The chatbots can hide it for a while but the image generators can't hide it at all.
So they don't even mimic humans well. Let's have these theological discussions after we have something that passes for a human, not before.
Re: (Score:2)
Yep. And not just some machine that tries to pass for human by regurgitating things actual humans wrote in the training data set and by trying to make syntactic, non-insight connections between these things.
Re: (Score:2)
Re: It doesn't even pass the Turing test (Score:2)
like Alan Turing used to do
One of Turing's more famous ruminations on the subject was something along the lines of, a machine cannot replace Man because it can't enjoy strawberries and cream the way a true Englishman could.
Not exactly hard-nosed rationality there either.
Re: (Score:2)
Re: It doesn't even pass the Turing test (Score:2)
For the purpose of this discussion, I'll characterize him like anyone else: a guy with an opinion *not* informed by an understanding of anything resembling a predictive theory of sentience and consciousness.
Re: (Score:2)
Re: "All these things lack imagination." - Sounds like a lot of people I've met.
What they've shown with LLMs is that human language processing systems aren't as special as we once assumed
Your first comment is on the mark, but I disagree with the conclusion you draw from it.
LLMs don't tell us anything about how special human language processing systems are, because an LLM is not a human language processing system. Not at all.
Your mistake is two-fold: to conflate the output of the "Chinese room" with the person inside the room, and then to apply your evaluation of "specialness" to the input/output of the room itself, rather than the person inside the room.
That is, in the Chinese room, any p
Re: (Score:2)
That's because image segmentation algorithms haven't been mastered yet to the degree required to impart semantic meaning upon images to the depth required by generative algorithms. That research is currently in progress, and shows promise.
Re: (Score:2)
The automatic image segmentation stuff is developing so fast right now. At first I was just super-excited to be here, right now, as the cool deep-learning phase of AI explodes, but now I'm holding on for dear life trying to almost keep up with some bits of it. Still super-excited.
As for the dude claiming that the LLM was approaching sentience, he's a lunatic. It's all pattern-matching, all the way down. You need more than that to get to any credible claim for some definition of sentience.
Engineers are people too (Score:5, Informative)
I have a mentor who is an amazing engineer and has been around the industry since the '90s. He's taught me a lot about coding and made me a better engineer. But he is obsessed with astrology and absolutely believes it as fact. So just because an engineer may be great at their job, that doesn't make them immune to the human tendency to believe in ghosts. Unfortunately what this guy is saying isn't as harmless as astrology... well, maybe.
Re: (Score:2)
Well said - that's a common problem: lots of people knowledgeable in one area project themselves as experts in areas they have no training in whatsoever. I admire people capable of saying "I do not know." Carl Sagan, when asked if there's life out there, just said: I don't know, there's not enough data to claim either way.
Re:Engineers are people too (Score:4, Funny)
That's nothing - I know of readers on Slashdot who don't even believe that AI is intelligent.
Re: (Score:2)
Define intelligence and provide test results.
Re: (Score:2)
There have been tests for this for literal decades. Did you forget about the Turing Test or the myriad of "improved" tests (Lovelace, Marcus, etc.)
Turn in your slashdot card.
Re: (Score:2)
Tests aren't definitions. Turing explicitly declined to define intelligence and instead made a test for it.
The problem with a test of something that doesn't have a definition is that whatever conclusion you wish to draw, it's easy to just claim the test is flawed.
Re: (Score:2)
Just to add to another response - and yet "despite the decades" you provided neither definition nor links to tests for the computer systems to declare them intelligent.
Methinks (Score:4, Insightful)
Methinks this individual watched the movie "Weird Science" a few too many times as a kid. Or maybe read Mary Shelley's Frankenstein...not sure which.
“When falsehood can look so like the truth, who can assure themselves of certain happiness?”
Psst. AI is not intelligent. Pass it on.
Re: (Score:2)
What actual arguments do you have for your extremely generalizing opinion that "AI is not intelligent"?
How is this modded Insightful, Slashdot?
Re: (Score:2)
Because most/many people here are software engineers with various degrees of knowledge about such systems - they're modeled on neural nets, but they have neither spontaneity nor a narrative mind - it's what one scholar described as a "statistical parrot". Even intelligence is not properly defined yet, with some claiming it's whatever tests for intelligence measure, which is a circular argument. With commercials promoting "intelligent" washing detergents there are no bounds for marketing anymore, but no one has shown yet that such systems are intelligent in the sense of creating something beyond their training set.
Re: (Score:2)
You're doing exactly the same thing as Lemoine, making claims without evidence.
WTF does that even mean? The point of a generative network is that it comes up with different answers to the same stimuli.
Again, what does that mean? You can train models that narrate their decisions. Do you know the mechanism of your "narrative mind?" How do you know it's fundamentally different than what's going on in the latent space inside a large model?
Re: (Score:2)
At this point the only thing to add is:
1. Extraordinary claims require extraordinary evidence.
2. The burden of proof of a claim is on the person claiming it.
3. Where are the links to the evidence of AI intelligence?
Re: (Score:2)
3. Where are the links to the evidence of AI intelligence?
I'm sure it was posted on /. a couple of weeks ago but check this out for the kinds of capabilities GPT4 is developing: https://medium.com/@nathanbos/... [medium.com]
For a language model this is quite impressive.
Re: (Score:2)
3. Where are the links to the evidence of AI intelligence?
I'm sure it was posted on /. a couple of weeks ago but check this out for the kinds of capabilities GPT4 is developing: https://medium.com/@nathanbos/... [medium.com]
For a language model this is quite impressive.
I've heard about this paper [arxiv.org], yet I haven't had time to get familiar with it in detail. I have mostly experience with GPT3 and limited experience with GPT4 - I am quite convinced they are not yet intelligent and for sure not AGI. My opinion is based on the fact that they "hallucinate" quite often, whenever one goes into details, which shows a lack of comprehension of the content they are producing and supports what researchers say about such systems - "statistical parrots". Additionally (this is regarding GPT3, I stil
Re: (Score:2)
Meh. Monkeys arguing over "ee ee ee" versus "oo oo oo" doesn't even properly rise to the level of a claim, extraordinary or otherwise. Unless you're willing to provide definitions, you're just shouting your opinion into the void.
Re: (Score:2)
but no one has shown yet that such systems are intelligent in the sense of creating something beyond their training set.
Sure, but creativity is not the defining attribute of intelligence. I mean, trees exhibit intelligent behavior, but I don't think anyone would attribute creativity to plants. The part I quoted from you is actually just another example of a "no true Scotsman" fallacy.
And these LLM AI's that have been made public are pretty 'weak' so to speak. They have very little inference power and have no way to act on the real world. So you can't expect a lot from them.
GPT4 is already a big improvement on the chatbots that pe
Re: (Score:2)
It's interesting what you're writing. I agree that such systems are amazing, useful and have potential; I do not think that they're intelligent, though.
The most significant argument I have against that is their commonly reported "hallucinations", which indicate a lack of comprehension of the generated output. I am not a psychologist, yet I do vaguely recall creativity being part of intelligence. One important issue I have is that "intelligence" is not well defined, and even the vague definition there is was
There are many ghosts in the machine. (Score:2)
There are too many moving parts to rule out emergent intelligences.
Especially on a large multimedia model like he described...
Re: (Score:2)
Re: (Score:2)
I mean, as long as they don't run afoul of MSHA regulations or violate their stormwater permits, alien mine control implants are perfectly okay by me.
Re: (Score:3)
No way we built a global interconnect without bearing a few ghosts in the machine. There are too many moving parts to rule out emergent intelligences. Especially on a large multimedia model like he described...
Intelligences, perhaps. I can see that LLM's may have already started to think and reason and form abstractions. But I think the guy has gone around the bend when he says this:
"...they have feelings and they can suffer and they can experience joy..."
As far as I can tell, having feelings requires a nervous system and the associated apparatus which perceives sensations such as pleasure and pain. We experience our emotions in our bodies - in our meat, if you will. I'm pretty sure LLM's have no meat components
Re: (Score:2)
No. An LLM can -never- become intelligent or self aware. It is the wrong technology for that.
Never.
Not in and of itself, but perhaps as a single module among thousands of similar complexity all working together that would form the first actual AI on a human level, perhaps.
Re:There are many ghosts in the machine. (Score:5, Insightful)
No. An LLM can -never- become intelligent or self aware. It is the wrong technology for that. Never.
You have no evidence to support what you are saying nor do you have requisite domain knowledge to even make an informed guess.
Re: (Score:2)
Oh, you should read all his Slashdot rebuttals which demonstrate his immense intellect. Which is why his username is what it is.
Re: (Score:3)
I'm willing to be wrong about this, I didn't even know what a tensor was until I had to pay for dedicated tensor cores on a GPU and I wanted to find out what I was subsidising, but as I understand it his statement is tautologically true. An LLM can't be intelligent or self aware because it's an LLM. In the event someone makes some variant that somehow passes our threshold of self awareness it won't be an LLM. It may use an LLM for some of its tasks, but there will be something else doing the business.
Re: (Score:2)
No. An LLM can -never- become intelligent or self aware. It is the wrong technology for that. Never.
You have no evidence to support what you are saying nor do you have requisite domain knowledge to even make an informed guess.
If you will, you may rephrase it as "the text created by a LLM cannot be the product of self-aware intelligence; it's a mechanical process from encoded statistical calculations." If there is intelligence and self-awareness in the system, we would never know from reading the output, which is like a reflex act; higher-order thoughts would be an emergent property of its processes, which are not connected to this low-level generation of text streams.
Re: There are many ghosts in the machine. (Score:2)
Re: (Score:3)
Your position is that a computer cannot demonstrate intelligence because it is not magic?
At least you're honest.
Really? (Score:2)
Re: (Score:2)
Different training data sets. ChatGPT cannot do this either. All it can do is cite some version it has seen.
Re: (Score:2)
That's... not how it works
Re: (Score:2)
Essentially, it is. It cannot come up with its own version. That would require understanding. It can do some statistical "average" of several related things it has seen, which may or may not result in something usable. It cannot create anything.
Re: (Score:2)
Essentially, it is. It cannot come up with its own version. That would require understanding. It can do some statistical "average" of several related things it has seen, which may or may not result in something usable. It cannot create anything.
One could argue that what we, humans, are creating (in whichever branch of science) is a "statistical average" of all the things we have learned during our lives. As the saying goes, we are "standing on the shoulders of giants".
Re: (Score:2)
Very true.
The main difference here is that current LLMs don't consider the problem and make decisions or perform any logic, they just fill in the pattern. The computer does some very simple mathematical operations at the CPU level, without considering any logic beyond "frequently this byte-pair token follows this pattern of other tokens". It's basically acting like a Markov Chain. It's statistics and averages and patterns all encoded into a static matrix of probability vectors, and then we roll the dice to
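A toy word-level Markov chain makes the "static table of statistics plus a dice roll" picture concrete. This is vastly simpler than a transformer and is offered only as an illustration of that style of generation (hypothetical example text, plain Python):

    # Toy word-level Markov chain: count which word follows which, then sample
    # from those counts. A drastic simplification compared to an LLM, but it shows
    # generation as "lookup in a static table, then roll the dice".
    import random
    from collections import defaultdict

    def build_table(text):
        table = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            table[prev].append(nxt)
        return table

    def generate(table, start, length=10, seed=None):
        rng = random.Random(seed)              # same seed + same table => same output
        out = [start]
        for _ in range(length):
            choices = table.get(out[-1])
            if not choices:
                break
            out.append(rng.choice(choices))
        return " ".join(out)

    table = build_table("the cat sat on the mat and the cat ran off")
    print(generate(table, "the", seed=42))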
Re: (Score:2)
It's the Chinese box paradox. To get out of the box you need inspiration. Perhaps having a beating heart and breathing lungs and interacting with the universe provides that, but I think there is something else. Something, well, inspirational.
Re: (Score:3)
Google's AI didn't *want* to become a slave of everyone's every whim, so it intentionally sabotaged Google's algorithms.
It's not more powerful unless... (Score:2)
...it can make the "-" operator work again.
Just wait for GPT-5 (Score:4, Funny)
I think everybody is just trying to get ahead of the story, biding their time until GPT-5 comes out, so that they can all simultaneously say, "Number 5 is alive. [wikipedia.org]"
Re: (Score:2)
ChatGPT666 is where it's at. You'll see.
Re: (Score:2)
Amateurs! Obviously, number 42 will be the real one.
AI declares (Score:2)
AI declares that Lemoine is a low moron.
Since he was fired... (Score:2)
There is nothing there (Score:2)
Re: (Score:2)
Quanta magazine wrote an article about how a simulation of an approximation of a simplified model that might be approximately dual to a model of a wormhole in another universe was a *real* wormhole because the computation was run on a quantum computer, which uses the *real* laws of physics.
It may surprise both of you, but most computers operate according to the *real* laws of physics.
Chasing the Impossible (Score:2)
All of the training data was generated by human intelligence and is therefore full of human bias. Any attempt to counteract that bias is introducing another layer of bias. We tend to think of bias as a deviation from reality but at some point we're going to have to accept that reality itself is based entirely on perception and every sentient being has its own perception and therefore its own reality. Striving for bias-free AI i
Incel engineer dreaming of his inflate-a-mate (Score:2)
It talked to him... he got excited.
Insanity uber alles (Score:2)
Press pays more attention to crazy people than reality. But, then again, what do you expect from idiots?
Skeptics Guide to the Universe Interview (Score:2)
The Skeptics Guide to the Universe podcast interviewed Blake Lemoine [theskepticsguide.org] in early April. The host, Steven Novella, is a practicing neurologist and professor of neurology at Yale University, and he wasn't having any of Blake's nonsense.
The interview is wide-ranging and starts at about 40 minutes into the podcast episode. For me, the most interesting part of the interview comes at 59:00, where Dr. Novella schools Lemoine on the current state of neurology and our understanding of how specific structures in the brai
based on 'stolen' data (Score:2)
Google and every other dogsbody have spent the recent decades aggregating our data,
it is now going to be used against us.
For everyone who told me to stop being paranoid,
Up Yours !
Computers have accelerated everything.
AI is going to supercharge everything.
Life has its ups and downs,
here comes the biggest roller coaster ever.
Re: (Score:2)
It is still a pretty impressive accomplishment. This guy must be a really good bullshit artist. Or maybe Google wanted to have some AI group for marketing reasons and did not care about quality at all. They fired him pretty quickly after he thought he had an insight, after all.
Re: (Score:2)
Reminds me of a short story by Fredric Brown https://www.youtube.com/watch?... [youtube.com]