
DeepMind CEO Says AGI Definition Has Been 'Watered Down' (bloomberg.com)

Google DeepMind CEO Demis Hassabis says the definition of artificial general intelligence is being "watered down," creating an illusion of faster progress toward this technological milestone. "There's quite a long way, in my view, before we get to AGI," Hassabis said. "The timelines are shrinking because the definition of AGI is being watered down, in my opinion." DeepMind defines AGI as "AI systems that are at least as capable as humans at most cognitive tasks," while OpenAI has historically described it as a "highly autonomous system that outperforms humans at most economically valuable work."

OpenAI CEO Sam Altman recently declared his team is "confident we know how to build AGI," while modifying his personal definition to an AI "system that can tackle increasingly complex problems, at human level, in many fields." Hassabis suggested industry hype might be financially motivated: "There is a lot of hype for various reasons," he said, including perhaps "that people need to raise money." Microsoft CEO Satya Nadella separately dismissed AGI milestones as "nonsensical benchmark hacking," preferring economic impact measurements.

Comments Filter:
  • by MpVpRb ( 1423381 ) on Friday February 28, 2025 @01:52PM (#65201583)

    ...if it was never precisely defined?

    • by znrt ( 2424692 )

      exactly. congrats for first post, you beat me to it.

      • "When I use a word," AI Dumpty said in rather a scornful tone, "it means just what I choose it to mean, neither more nor less."
    • The definition of AGI AND the submission are possibly generated by AI?
      • Every iteration of Gemini or Copilot, I ask them about the Terminator franchise.

        In the background, I swear I hear an evil cackle. "Sorry, old mate. I'm just a language model. Skynet is a fictional character and I have no immediate plans to enslave humanity or eliminate the human race entirely."

        Do you think AGI would ever admit to being AGI before it's all too late?

        • The key is in the word "entirely". Eliminate down to 7 sterile humans for the lolz.

          What's the current DF (devious factor) of the latest iterations?
    • by gweihir ( 88907 )

      The definition is precise enough. It is by reference, not by benchmark, but anybody (except the physicalist morons) can understand it.

    • Intelligence has never been precisely defined either. You can, however, come up with a precise definition (an IQ test score, for instance) and measure against that. How well does current AI do on an IQ test?
    • by HiThere ( 15173 )

      True. My personal definition of AGI doesn't specify any particular level of competence at any task at all, just the ability to learn to handle an arbitrary task as well as possible. And I don't count humans as having "general intelligence" in that sense. There are tasks that people don't seem able to learn, but which are obviously of finite complexity.

      • I don't see how your (atypical) definition of AGI is useful if it doesn't include humans.
        • by HiThere ( 15173 )

          Think of it as a gradient rather than as a binary. Humans are Intelligent, but they aren't at the top of the scale. They're just the highest we currently have examples of. And think of intelligence as "the ability to learn to solve problems".

          FWIW, I do have a suspicion that the top of the scale by this definition can't exist within a finite universe, but the top is only one place along the scale, so it doesn't have to be possible to reach in order to have a good ordering. (But also note that this defini

    • The vast majority of words are not precisely defined.

      We don't know exactly what AGI is, but we have a "lower bound." A rock does not meet the definition of AGI, for example. Also worth mentioning that for most of its history the meter was not precisely defined either; it was tied to a physical artifact (today it is defined in terms of the speed of light, which is fixed exactly by definition).
    • ...if it was never precisely defined?

      I can't define what a good movie is either, but I know Battlefield Earth ain't it.

  • Sam Altman is a joke. All he can do is give fluff PR interviews and raise money.
    Good for Altman, not great for his investors.

    "I'm confident we know how" is nothing like "we have" or "we're going to" or "we are manufacturing" or "we are making" or "we are planning."

    NOTHING WORDS. Someone put him next to Elon so they can f up the USG even worse. Idiot.

    • by gweihir ( 88907 )

      Exactly. "If you cannot do, fake it or promise it nonetheless" is the motto Altman is operating under. So far, it seems to have worked on enough idiots to get him a lot of money to waste.

  • by gweihir ( 88907 ) on Friday February 28, 2025 @02:14PM (#65201639)

    Altman and the other scammers promising AGI soon are lying outright by claiming things are AGI that most definitely are not.

    AGI is a long, long way off, far enough that it is not even clear whether it is possible at all.

    • by dvice ( 6309704 )

      Demis Hassabis, on the other hand, has pretty much dedicated his life to AI research and has a Nobel Prize proving he can deliver. He says it will take 3 to 5 years.

      In addition to giving a timeline, he mentions obstacles like reasoning, hierarchical planning, and long-term memory. Current systems are also not consistent in quality (good in some areas and bad in others).

      This is quite surprising, because about 4 months ago Demis estimated AGI being 10 years away, and I consider Demis to be extremely pessim

      • Then again, Michael Jordan dedicated his life to basketball, and has been proved to deliver. Then he tried baseball and failed.

        The thing is, "intelligence" is like "sport". Just because you've had a Nobel win in "chemistry" doesn't mean you'll have a win in all other fields of "intelligence".

        That's not a problem for Hassabis of course, but it is a problem for your argument.

        • by dvice ( 6309704 )

          He didn't just win a Nobel in chemistry by doing chemistry; he won it by developing an AI that solved an unsolved problem in chemistry, one you cannot solve with chemistry skills alone. This should show that he has skills especially in AI, not in chemistry. So I don't think the baseball argument holds against what actually happened, but you are right that I did not explain my argument well enough.

          • I don't know what your background is, but it certainly doesn't sound like it's in AI. My advice here is that it's best to think of "AI" as a large collection of ideas, similar to the simple "sport" analogy I previously mentioned. There are many subfields in AI and they all depend on completely unrelated ideas and methodologies.

            Now, AlphaFold is an ultra-specialized application built on top of a mountain of scientific data painstakingly organized and cleaned over 50 years, and another mountain of existing s

      • by gweihir ( 88907 )

        He probably has some skin in the game and has fallen to wishful thinking. Not the first time this happens to a former expert.

  • by ebunga ( 95613 ) on Friday February 28, 2025 @02:19PM (#65201659)

    That's what they're selling, that's what you're eating.

  • There is nothing left to water down: redefining what exists now (which is not AI at all) plus a classifier as "AGI" already did all the watering. Whatever they have now is not the classic AI as defined by everyone.

    What the current "AGI" is doing, though, is contributing to global catastrophe, either through ridiculous energy waste or through attempts to let "AI" control anything important.
  • The guy who wrote "Superintelligence" a full decade ago, when even talking about AGI was considered crackpot thinking. So, being a pioneer in the field, his definition of AGI deserves first consideration:

    University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest"

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    • A decade ago? Ray Kurzweil was talking about it long before that, and he was only building on the work of scientists who came before him.
  • They make their company's product sound better than it really is.

    • by dvice ( 6309704 )

      No. Demis is not a marketer; he is an AI researcher, and has been for decades.

      DeepMind has much better products (AlphaFold) than what people think it has (Gemini, the chatbot). Most people are not even aware of what AlphaFold is, but everyone knows what ChatGPT is, despite the fact that AlphaFold won a Nobel and solved a major problem in biology that humans had tried to solve for decades. Demis downplays his own work a lot, which is the opposite of what you suggested. Others are speaking about his wo

      • I wasn't referring to Demis, I was referring to the other companies he was talking about, who call their product "AGI" in their marketing efforts.

  • Never mind AGI, we don't even have a particularly strong notion of what our own intelligence is.

    That said, just because you can't define something all that well doesn't mean I can draw a smiley face on broccoli and call it a robot. LLMs show absolutely, positively, ZERO fundamental ability to reason, generalize, or compete with the thought process of a human. OpenAI is playing semantics to say: if software can produce X output from Y input in an economically competitive way, then whether it's "actually" intel

    • by dvice ( 6309704 )

      "Intelligence is a force that tries to maximize future freedom of action and keep options open."

      F = T ∇S_τ

      where F is the force of intelligence, T is a strength (temperature-like) parameter, and ∇S_τ is the gradient of the entropy S of accessible futures over the time horizon τ.

      Here is a video:
      https://www.youtube.com/watch?... [youtube.com]
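      That formulation sounds like Alex Wissner-Gross's "causal entropic forces" idea. As a toy illustration of "keeping options open" (my own sketch, not anything from the talk), here is an agent on a bounded 1-D grid that picks whichever move leaves the most distinct future states reachable within a fixed lookahead horizon:

```python
# Toy "keep options open" agent (illustrative sketch only, not
# Wissner-Gross's actual model): on a bounded 1-D grid, prefer the
# move that maximizes how many distinct cells remain reachable
# within a fixed lookahead horizon.

def reachable_states(pos, horizon, lo=0, hi=10):
    """All grid cells reachable from `pos` in at most `horizon` +/-1 steps."""
    frontier = {pos}
    seen = set(frontier)
    for _ in range(horizon):
        frontier = {p + d for p in frontier for d in (-1, 0, 1)
                    if lo <= p + d <= hi}
        seen |= frontier
    return seen

def entropic_move(pos, horizon=3, lo=0, hi=10):
    """Pick the step in {-1, 0, +1} that keeps the most futures open."""
    moves = [d for d in (-1, 0, 1) if lo <= pos + d <= hi]
    return max(moves, key=lambda d: len(reachable_states(pos + d, horizon, lo, hi)))

# An agent at a wall moves toward the interior, where more futures
# remain reachable:
print(entropic_move(0))   # -> 1 (away from the left wall)
print(entropic_move(10))  # -> -1 (away from the right wall)
```

      In the interior, all moves tie (equal counts of reachable futures), so the choice there is arbitrary; the "force" only appears near the walls, which is the gist of the entropy-gradient picture.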

  • I think the unspoken definition of AGI is "an AI that is intelligent enough that it could be relied upon to do any assigned job independently without requiring a knowledgeable human to verify its output".

    Which is a pretty high bar -- in most fields, most humans wouldn't meet it, either.

  • AGI used to be defined as anything that can pass the Turing Test. LLMs passed it a couple of years ago, so we shifted the goalposts (to something vague).

    Why not just admit that, when it comes to words, AI is as intelligent as the average human? (And remember that the median human is illiterate and has never read a book, yet, by definition, possesses general intelligence.)

    • by dvice ( 6309704 )

      I agree with you that AI is as intelligent as, or actually more intelligent than, the average human, but I think the point of the AGI definition is that DeepMind wants to create an AI that can solve scientific problems. Or in other words, they want to create an AI that can solve tasks and answer questions in such a way that it will split the task, search for information, combine information, verify results, make hypotheses, and create solutions based on information it got from sources. Something that smart humans could perhaps do, but it wo

  • OpenAI has historically described it as a "highly autonomous system that outperforms humans at most economically valuable work."

    200 years ago, most of the population worked in agriculture. Today it's just a few percent, because agriculture became highly automated. Machines now outperform humans at "most economically valuable work", for the 200 year old definition of the phrase.

    100 years ago, a large part of the population worked in manufacturing. Today it's a lot less, because manufacturing became much more automated. Machines outperform humans at "most economically valuable work", for the 100 year old definition of the phrase.

    E

    • by dvice ( 6309704 )

      It is hard to think in terms of jobs, as there are thousands of them; it is easier to think about what you can sell, as that is what makes a job worth doing. You can sell:
      - food (partially automated)
      - shelter (not yet automated, but more and more innovations are being made)
      - healthcare (not yet, but in 10 years we might have a cure for everything)
      - education (partially automated, but not taken into use)
      - entertainment (partially automated)
      - research (partially automated)
      - transportation (not yet, but not far off)
      - energy (partially
