
DeepMind CEO Says AGI Definition Has Been 'Watered Down' (bloomberg.com)
Google DeepMind CEO Demis Hassabis says the definition of artificial general intelligence is being "watered down," creating an illusion of faster progress toward this technological milestone. "There's quite a long way, in my view, before we get to AGI," Hassabis said. "The timelines are shrinking because the definition of AGI is being watered down, in my opinion." DeepMind defines AGI as "AI systems that are at least as capable as humans at most cognitive tasks," while OpenAI has historically described it as a "highly autonomous system that outperforms humans at most economically valuable work."
OpenAI CEO Sam Altman recently declared his team is "confident we know how to build AGI," while modifying his personal definition to an AI "system that can tackle increasingly complex problems, at human level, in many fields." Hassabis suggested industry hype might be financially motivated: "There is a lot of hype for various reasons," he said, including perhaps "that people need to raise money." Microsoft CEO Satya Nadella separately dismissed AGI milestones as "nonsensical benchmark hacking," preferring economic impact measurements.
How can something be "watered down"... (Score:3)
...if it was never precisely defined?
Re: (Score:2)
Exactly. Congrats on first post, you beat me to it.
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
Every iteration of Gemini or Copilot, I ask them about the Terminator franchise.
In the background, I swear I hear an evil cackle. "Sorry, old mate. I'm just a language model. Skynet is a fictional character and I have no immediate plans to enslave humanity or eliminate the human race entirely."
Do you think AGI would ever admit to being AGI before it's all too late?
Re: (Score:2)
What's the current DF (devious factor) of the latest iterations?
Re: (Score:2)
The definition is precise enough. It is by reference, not by benchmark, but anybody (except the physicalist morons) can understand it.
Re: (Score:2)
The definition is precise enough
care to illustrate how precise it is?
Re: (Score:2)
Re: (Score:2)
O1 has 112, here is a list of all results:
https://www.trackingai.org/hom... [trackingai.org]
Re: (Score:2)
True. My personal definition of AGI doesn't specify any particular level of competence at any task at all. Just the ability to learn to handle an arbitrary task as well as possible. And I don't count humans as having "general intelligence" in that sense. There are tasks that people don't seem able to learn, but which are obviously of finite complexity.
Re: (Score:2)
Re: (Score:2)
Think of it as a gradient rather than as a binary. Humans are intelligent, but they aren't at the top of the scale. They're just the highest we currently have examples of. And think of intelligence as "the ability to learn to solve problems".
FWIW, I do have a suspicion that the top of the scale by this definition can't exist within a finite universe, but the top is only one place along the scale, so it doesn't have to be possible to reach in order to have a good ordering. (But also note that this definition
Re: (Score:2)
We don't know exactly what AGI is, but we have a "lower bound." A rock does not meet the definition of AGI, for example. Also worth mentioning that the length of a meter is not precisely defined, either (it is defined in terms of the speed of light, and we don't have a precise measurement for the speed of light).
Re: (Score:2)
...if it was never precisely defined?
I can't define what a good movie is either, but I know Battlefield Earth ain't it.
"CONFIDENT WE KNOW HOW" is not a product (Score:1)
Sam Altman is a joke. All he can do is give fluff PR interviews and raise money.
Good for Altman, not great for his investors.
"I'm confident we know how" is nothing like "we have" or "we're going to" or "we are manufacturing" or "we are making" or "we are planning."
NOTHING WORDS. Someone put him next to Elon so they can f up the USG even worse. Idiot.
Re: (Score:2)
Exactly. "If you cannot do, fake it or promise it nonetheless" is the motto Altman is operating under. So far, it seems to have worked on enough idiots to get him a lot of money to waste.
He is right (Score:3)
Altman and the other scammers promising AGI soon are lying directly by claiming things are AGI that most definitely are not.
AGI is a long, long way off, far enough that it is not even clear whether it is possible at all.
Re: (Score:2)
Demis Hassabis, on the other hand, has pretty much dedicated his life to AI research and has a Nobel to prove he can deliver. He says it will take 3 to 5 years.
In addition to the timeline, he mentions obstacles like reasoning, hierarchical planning, and long-term memory. Current systems are also not consistent in quality (good in some areas and bad in others).
This is quite surprising, because about 4 months ago Demis estimated AGI being 10 years away, and I consider Demis to be extremely pessim
Re: (Score:2)
The thing is, "intelligence" is like "sport". Just because you've had a Nobel win in "chemistry" doesn't mean you'll have a win in all other fields of "intelligence".
That's not a problem for Hassabis of course, but it is a problem for your argument.
Re: (Score:2)
He didn't just win a Nobel in chemistry by doing chemistry, he won it by developing an AI that solved an unsolved problem in chemistry, which you cannot solve with chemistry skills alone. This should show that he has skills especially in AI, not in chemistry. So I don't think the baseball argument holds against what actually happened, but you are right that I did not explain my argument well enough.
Re: (Score:2)
Now AlphaFold is an ultra specialized application built on top of a mountain of painstakingly organized and cleaned scientific data over 50 years, and another mountain of existing s
Re: (Score:2)
He probably has some skin in the game and has fallen to wishful thinking. Not the first time this happens to a former expert.
AGI = Automatic Garbage Integrator (Score:3)
That's what they're selling, that's what you're eating.
Sticking a G between A and I is watering down. (Score:3)
What the current "AGI" is actually doing is contributing to global catastrophe, either through ridiculous energy waste or through letting 'AI' control anything important.
Nick Bostrom (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
This is what marketers are paid to do (Score:2)
They make their company's product sound better than it really is.
Re: (Score:2)
No. Demis is not a marketer, he is an AI researcher, and has been for decades.
Deepmind has much better products (AlphaFold) than what people think they have (Gemini, the chatbot). Most people are not even aware of what AlphaFold is, but everyone knows what ChatGPT is, despite the fact that AlphaFold won a Nobel and solved a major problem in biology that humans had tried to solve for decades. Demis downplays his own work a lot, which is the opposite of what you suggested. Others are speaking about his work
Re: (Score:2)
I wasn't referring to Demis, I was referring to the other companies he was talking about, who call their product "AGI" in their marketing efforts.
"Intelligence" is not well defined (Score:2)
Never mind AGI, we don't even have a particularly strong notion of what our own intelligence is.
That said, just because you can't define something all that well doesn't mean I can draw a smiley face on broccoli and call it a robot. LLMs show absolutely, positively, ZERO fundamental ability to reason, generalize, or compete with the thought process of a human. OpenAI is playing semantics to say: if software can produce X output from Y input in an economically competitive way, then whether it's "actually" intelligent
Re: (Score:2)
"Intelligence is a force that tries to maximize future freedom of action and keep options open":
F = T ∇S_τ
where F is the force of intelligence, T is a temperature-like strength parameter, and S_τ is the entropy of the diversity of accessible futures over the time horizon τ.
Here is a video:
https://www.youtube.com/watch?... [youtube.com]
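A toy way to see the "keep options open" intuition: in a tiny 1-D world with walls, an agent that simply picks the move keeping the most distinct future positions reachable will drift away from the walls toward the middle. This is only an illustrative sketch; the grid world, the wall bounds, and the horizon are all made up for the example and are not part of the actual causal-entropy formulation.

```python
from itertools import product

LO, HI = 0, 10          # walls of a tiny 1-D world
ACTIONS = (-1, 0, 1)    # step left, stay, step right

def clamp(p):
    """Keep a position inside the walls."""
    return min(HI, max(LO, p))

def reachable_states(pos, horizon):
    """All distinct positions reachable from `pos` within `horizon` steps."""
    states = set()
    for seq in product(ACTIONS, repeat=horizon):
        p = pos
        for a in seq:
            p = clamp(p + a)
        states.add(p)
    return states

def entropic_move(pos, horizon=3):
    """Pick the action whose successor state keeps the most futures open."""
    return max(ACTIONS, key=lambda a: len(reachable_states(clamp(pos + a), horizon)))
```

Standing at the left wall the agent steps right, and at the right wall it steps left, since positions near the middle maximize the number of reachable futures.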
If you don't succeed... (Score:2)
If you don't succeed, redefine success.
It's a high bar (Score:2)
I think the unspoken definition of AGI is "an AI that is intelligent enough that it could be relied upon to do any assigned job independently without requiring a knowledgeable human to verify its output".
Which is a pretty high bar -- in most fields, most humans wouldn't meet it, either.
Shifting goalposts (Score:1)
AGI used to be defined as anything that can pass the Turing Test. LLMs passed it a couple of years ago, so we shifted the goalposts (to something vague).
Why not just admit that, when it comes to words, AI is as intelligent as the average human? (And remember that the median human is illiterate and has never read a book, yet, by definition, possesses general intelligence.)
Re: (Score:2)
I agree with you that AI is as intelligent as, or actually more intelligent than, the average human, but I think the point of the AGI definition is that Deepmind wants to create an AI that can solve scientific problems. Or in other words, they want to create an AI that can solve tasks and answer questions in such a way that it will split the task, search for information, combine information, verify results, make hypotheses, and create solutions, based on information it got from sources. Something that smart humans could perhaps do, but it would
Bad definition (Score:2)
OpenAI has historically described it as a "highly autonomous system that outperforms humans at most economically valuable work."
200 years ago, most of the population worked in agriculture. Today it's just a few percent, because agriculture became highly automated. Machines now outperform humans at "most economically valuable work", for the 200 year old definition of the phrase.
100 years ago, a large part of the population worked in manufacturing. Today it's a lot less, because manufacturing became much more automated. Machines outperform humans at "most economically valuable work", for the 100 year old definition of the phrase.
Re: (Score:2)
It is hard to think about jobs, as there are thousands of them; it is easier to think about what you can sell, as that is what makes a job worth doing. You can sell:
- food (partially automated)
- shelter (not yet automated, but more and more innovations are made)
- healthcare (not yet, but in 10 years we might have a cure for everything)
- education (partially automated, but not taken into use)
- entertainment (partially automated)
- research (partially automated)
- transportation (not yet, but not far)
- energy (partially automated)