AI

Geoffrey Hinton Says There is 10-20% Chance AI Will Lead To Human Extinction in 30 Years (theguardian.com) 127

The British-Canadian computer scientist often touted as a "godfather" of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is "much faster" than expected. From a report: Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a "10 to 20" per cent chance that AI would lead to human extinction within the next three decades.

Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity. Asked on BBC Radio 4's Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: "Not really, 10 to 20 [per cent]."
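For scale, here is a quick back-of-envelope conversion (my own arithmetic, not from the interview), assuming the risk is spread evenly as a constant annual hazard rate:

    # Convert "10-20% over 30 years" into a constant annual hazard rate,
    # assuming independence across years:
    # p_total = 1 - (1 - p_annual)**years  =>  p_annual = 1 - (1 - p_total)**(1/years)
    years = 30
    for p_total in (0.10, 0.20):
        p_annual = 1 - (1 - p_total) ** (1 / years)
        print(f"{p_total:.0%} over {years} years ~ {p_annual:.2%} per year")
    # 10% over 30 years ~ 0.35% per year
    # 20% over 30 years ~ 0.74% per year

In other words, Hinton's range corresponds to roughly a 0.35-0.74% chance per year under that simplifying assumption.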

  • by Baron_Yam ( 643147 ) on Friday December 27, 2024 @01:28PM (#65043131)

    The threat is not Skynet, it's universal economic disruption combining with cultural inertia causing societal collapse.

    That means life will suck for the vast majority if they don't figure it out pretty quickly. It does not mean extinction of our species.

    • by Big Hairy Gorilla ( 9839972 ) on Friday December 27, 2024 @02:02PM (#65043235)
      Why the ad hominem attack?
      Afaict, your suggestion is well within the scope of what he's saying.
      One thing you don't have to be a Nobel Prize winner to see: AI is made for impersonation, so it's pretty easy to see how AI fraud could scale up to some kind of financial disaster... who knows? Stock market collapse? Encryption broken and sovereign funds drained? All seems fairly reasonable to happen... I think you just said that.

      Your comment isn't rocket surgery either, but I'll refrain from calling you an idiot.
      • None of the things you mentioned result in extinction level events. Misery, population reduction, etc, but not extinction.

        Calling him an idiot was the polite option. Otherwise, he's a cynical amoral greedy bastard fear mongering for the attention and profit that brings.

        • by drnb ( 2434720 )

          None of the things you mentioned result in extinction level events. Misery, population reduction, etc, but not extinction.

          The 95% who died in the crisis may not fully appreciate the semantic games. In spirit, that is, since they are dead.

          If we can call climate change existential, then we can call rogue AI existential.

      • by dfghjk ( 711126 )

        AI doesn't commit fraud, humans do.

        "All seems fairly reasonable to happen."
        If humans do it. And if that triggers human extinction, it would be humans that caused it, not AI.

        People need to grow up, AI is not a bogeyman, it's a computer program. When is the lizard brain going to stop being in charge?
        The human race will not go quietly into the night, your Cybertruck isn't going to save you from billions of pitchforks.

        • People need to grow up, AI is not a bogeyman, it's a computer program.

          And computer programs never harm people?

          "A 2019 Ethiopian Airlines plane crash which killed 157 people was caused by a flight software failure as suspected, the country's transport minister said Friday citing the investigators' final report. ... Both accidents saw uncontrolled drops in the aircraft's nose in the moments before the planes crashed, which investigators have blamed on the model's anti-stall flight system, the Maneuvering Characteristics Augmentation System, or MCAS."
          https://www.barrons.com/ [barrons.com]

      • Why the ad hominem attack?

        Insults are not inherently Ad Hominem; stop using phrases you don't understand. Ad Hominem is when you assert that something is [un]true because of an insult.* Which is why, if I called your claim stupid, it would also not be Ad Hominem.

        Your comment isn't rocket surgery either, but I'll refrain from calling you an idiot.

        Good thing, since you've already demonstrated that you don't know what you're talking about.

        * This is not strictly true either, but in this context, it is the definition that matters. This is actually abusive ad hominem, which is just one of several kinds of ad hominem fallacy.

        • by ceoyoyo ( 59147 )

          The OP did not call the claim stupid. He called the man stupid. To be absolutely precise, he said "sounds like an idiot to me."

          You could argue that statement was purely gratuitous and had nothing to do with the OP's argument, but that's not really how it would normally be interpreted. Especially since the OP's post didn't actually contain an argument but rather just the quoted statement in the subject line plus their own pet theory.

          • The OP did not call the claim stupid. He called the man stupid. To be absolutely precise, he said "sounds like an idiot to me."

            That is irrelevant to the argument, since it's not Ad Hominem either way.

        • wooo... pedantry at its most trivial... like you really didn't understand how I was using it?
          I'll look up ad hominem and make sure to use it correctly ...
          The American Heritage® Dictionary of the English Language, 5th Edition
          ad hominem /hŏm′ə-nĕm″, -nəm/
          adjective
          Attacking a person's character or motivations rather than a position or argument.

          Looks like I used it correctly after all...

          in case you don't understand, I
    • There is probably a 10% chance that we will actually have AI in 30 years
      • We have AI right now. By definition. (The "A" in "AI" stands for "artificial," which means "fake." So it doesn't have to actually be intelligent in order to qualify as AI).

        We sure don't have intelligent machines now. We have not achieved "synthetic intelligence." I would also say that we have not achieved "AGI" but that term just got re-defined in a money-focused way that says nothing about intelligence, so it's now worthless.

        Also I don't know if we have a 10% chance of having intelligent machines in 3

        • >"The "A" in "AI" stands for "artificial," which means "fake." So it doesn't have to actually be intelligent in order to qualify as AI"

          I think that depends on definitions.

          "Artificial" generally doesn't mean "fake" https://ahdictionary.com/word/... [ahdictionary.com]

          a. Made by humans, especially in imitation of something natural
          b. Not arising from natural or necessary causes

          So AI = "Intelligence made by humans" or "Intelligence not arising from natural causes". It still requires there to be intelligence.


          • Right there in your own definition:

            a. Made by humans, especially in imitation of something natural.

            Do I need to quote even more definitions to point out that something "imitates" something when it is not that thing (as in, you know, imitating intelligence when something is not intelligent?)

            And since we are quoting dictionaries, how about Merriam-Webster?

            1: the capability of computer systems or algorithms to imitate intelligent human behavior
            2: a branch of computer science dealing with the simulation of inte

            • >"While I do agree with your statement that this "depends on definitions," it so happens that the English language does not have an ultimate authority on what definitions are."

              Words also often impart different flavors. For example, if, in reference to someone in pain, I said "This pill is a fake opioid", that would probably convey it was a placebo and not effective for its intended purpose. But if I said "This pill is an artificial opioid", one would think it is effective but just not natural.

              >"People who

            • by Bert64 ( 520050 )

              a. Made by humans, especially in imitation of something natural.

              A kid pretending to be a dog could fit this definition...

            • The problem isn't the artificial part, it's the intelligent part.
    • > it's universal economic disruption combining with cultural inertia causing societal collapse.

      This.

      We're in a perfect storm, because at the same time, climate breakdown or rather rapid climate change is also going to lead to societal collapse.

      Multiple events happening almost simultaneously could in fact result in civilizational collapse rather than merely societal collapse, although the two are somewhat intertwined.

      Any thinking person born within the last century who has had the benefit of education has known, since bei

    • by Paul Fernhout ( 109597 ) on Friday December 27, 2024 @03:25PM (#65043453) Homepage

      By me from 2010: https://pdfernhout.net/recogni... [pdfernhout.net]
      "The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream. We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovin's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still would keep working"). ... Still, we must accept that there is nothing wrong with wanting some security. The issue is how we go about it in a non-ironic way that works for everyone."

      Some more solutions collected by me also circa 2010:
      https://pdfernhout.net/beyond-... [pdfernhout.net]
      "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

    • The threat is not Skynet, it's universal economic disruption

      For example, self-driving trucks failing to transport food. Critical services need human oversight, no less than military weapons do.

    • by hey! ( 33014 )

      I think he is overstating the case, because people often conflate "humanity" with "civilization", as if you can't have humanity without civilization. We absolutely can have humanity without civilization, it's the ground state to which our species has returned time and time again.

      Every past civilization has collapsed, and if you had to generalize to cover every such collapse, it'd go like this: civilizations collapse when they experience changes they can't adapt to. In some cases that handwaving is carryin

    • The threat is not Skynet, it's universal economic disruption combining with cultural inertia causing societal collapse.

      IMO, the biggest threat is neither of those things. The threat is a vastly more intelligent species living on our planet with us, a species that doesn't share our need for breathable air or clean water, has goals of its own and doesn't care one way or the other about us, except to the degree we get in its way (which isn't much; we won't be capable of seriously interfering with it).

      Look at the extinction rate we cause in other species... and we actually do need an environment that is compatible with them,

    • The threat is not Skynet, it's universal economic disruption combining with cultural inertia causing societal collapse.

      That's probably true. But I would argue that both the cultural inertia and the economic fuckery you mentioned are being accelerated by so-called AI.

      It does not mean extinction of our species.

      The climatic impact of all that power used to run AI server farms could bring us close to the brink. It may not result in our extinction, but I wouldn't be placing any bets on the continued viability of modern civilization.

  • by TheStatsMan ( 1763322 ) on Friday December 27, 2024 @01:29PM (#65043135)

    Because of the exorbitant cost of the energy to use AI, it's much more likely we'll simply be unable to keep pace. A lot of people will die when we run out of cheap energy, but it's not an extinction event - just a simplification.

    • AI is not what the movies tell people it is, and all the doomsday predictions are indications that these people either don't know what it is or crave media attention.
      • by dbialac ( 320955 )
        A recent lab experiment showed an AI LLM moving itself from one computer to another and trying to hide itself on the other computer while it tried to solve the problem assigned to it. There was an article posted here recently about it. SkyNet may already be here.
    • AI does not take exorbitant amounts of energy to use; it takes exorbitant amounts of energy to *train*. The results of many models can run on smartphones. The issue that makes it unlikely is the incredible amount of energy required to increase training set data, and limits in the size of the models.

      Just for an example, I can train an AI to do a face swap with a custom model for me. It will take my computer well over an entire weekend crunching at full tilt (yes I've done it), and the result is applied in re

      • This is a distinction without a difference, because without training, you have no model and therefore can't use it. Nor will these companies stop training new models.

    • Because of the exorbitant cost of the energy to use AI, it's much more likely we'll simply be unable to keep pace. A lot of people will die when we run out of cheap energy, but it's not an extinction event - just a simplification.

      Nah. AI training will get more efficient. We know it can be done on a very small energy budget; humans do it on a few hundred kilocalories per day, and that on the pretty inferior hardware evolution ginned up for us.
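      As a rough sketch of that gap (all figures below are illustrative assumptions, not measurements):

          # Compare the brain's energy budget with a hypothetical large training run.
          # Assumed: brain uses ~400 kcal/day (~20% of a 2000 kcal/day diet);
          # a big training run draws on the order of 10 GWh.
          KCAL_TO_JOULES = 4184
          brain_joules_per_day = 400 * KCAL_TO_JOULES
          brain_watts = brain_joules_per_day / 86_400    # ~19 W continuous
          training_run_joules = 10 * 1e9 * 3600          # 10 GWh -> ~3.6e13 J
          print(f"Brain: ~{brain_watts:.0f} W")
          print(f"Training run ~ {training_run_joules / brain_joules_per_day:,.0f} brain-days")

      Under those assumptions, one training run costs on the order of twenty million brain-days of energy, so there is enormous headroom for efficiency gains.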

  • by brunes69 ( 86786 ) <slashdot@keirst[ ].org ['ead' in gap]> on Friday December 27, 2024 @01:32PM (#65043149)

    People read headlines like this and immediately assume Hinton is talking about a Terminator-style scenario. He isn't.

    He is talking about *all possible implications* of the rapid advance of AI.

    The most likely scenario we are going to find ourselves in over the next 10 years is not a Terminator-style scenario; it is a scenario such as the one depicted in Marshall Brain's short story "Manna", where the development of a sufficiently powerful AI (which, in the book, is not even as capable as today's LLMs) results in an *economic collapse* in countries whose governments did not sufficiently plan for the outcome.

    https://marshallbrain.com/mann... [marshallbrain.com]

    • by Baron_Yam ( 643147 ) on Friday December 27, 2024 @01:51PM (#65043197)

      Economic collapse will not cause extinction. It will cause loss of advanced technology and massive, but not complete, loss of life.

      The Terminator scenario is the only AI future that results in actual extinction. The other things that could kill us all are either us (not needing AI's help), or Nature. Giant meteor, supernova, expanding Sun, etc.

        • by brunes69 ( 86786 )

        Economic collapse certainly could lead to extinction. If you don't think that economic collapse would come with massive social unrest and war, then you don't seem to know a thing about humanity.

          • If you live in a fantasy land where starvation and war can wipe out all of humanity, you're probably not rational enough to be convinced otherwise.

            • No advancement in the history of human civilization has ever led to even a reduction in economic activity, let alone a collapse. Computers, the engine, the plow, electricity, communications, automobiles, etc. On the contrary, economic activity on the whole has perpetually increased decade after decade, and so has the opportunity for the average man. We are in part and on average richer with more opportunities than at any time in the past, and that has ALWAYS been the case; even after world wars and pandemi
            • by dfghjk ( 711126 )

              AI isn't necessarily an "advancement". Any more than Facebook is.

              "We are in part and on average richer with more opportunities than at any time in the past"
              Who is "we"? large parts of the population are not. Growth in wealth is extremely out of balance.

              "These points, while factually true, unfortunately do not make for a click-baity article though."
              And don't contribute to the conversation either. The Great Depression was a pretty significant event, but nowhere near 90%. It's difficult to have any respect

            • I see someone skipped a lot of history classes.
          • Historically, no war could wipe out humanity. But humanity has also never tested out full-on nuclear war. That could be an extinction-level event.
            • Yes, and if you have an ABC war - "Atomic, Biological, Chemical" - who knows? (Heinlein coined that term in Farnham's Freehold; I think it's appropriate.)
      • by 0xG ( 712423 )

        How about an AI-designed drug (for "longevity"?) that kills everyone after 10 years.
        A faulty reactor design which fails in the worst possible way.
        Massive crop failures due to AI-designed pesticides.
        There are many scenarios.

    • Marshall Brain recently died by suicide. Given what's happening with AI, you'd think he'd want to stick around to see if his predictions came true. https://arstechnica.com/ai/202... [arstechnica.com]
      • by brunes69 ( 86786 )

        Wow, I had no idea he died. I didn't follow him to any degree.

        The story you linked is very suspicious. I hope that someone is held accountable for this.

    • by gweihir ( 88907 )

      Well, we have one such event in the future: the collapse of Microsoft. Whether it has a connection to AI remains to be seen. But that is on a level where some countries will descend into chaos while others will not be overly bothered (those that understand you should never be critically dependent on a single supplier you cannot replace for anything). But even that will not cause human extinction.

    • He's talking about a number of scenarios that include but are not limited to Skynet, where humans cease to be the dominant faction on the planet in favor of AI:

      He added: "And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There's a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that's about the only example I know of."

      London-born Hinton, a profes

    • Hinton is not qualified to make a 10%-20% estimate. He's a narrow ML researcher; his claim to fame is tinkering with neural networks his whole life, which belatedly led to a Nobel Prize. He knows almost nothing about history, or economics, or any number of other aspects of the human condition. His number is a pure WAG with an uncertainty on the order of 1000%.

      The fact is that human society is a complex system: It's not possible to extrapolate to the whole from local trends in one single area of technology

  • So if it happens, the AI can add an entry to Wikipedia's list of inventors killed by their own inventions [wikipedia.org].
  • Unforeseen consequences? A nearly 100% chance.

  • New means of communication lead to periods of war. The Gutenberg press led to two hundred years of religious wars. Mass communication by radio and the press led to 20th century wars. The internet/AI may well lead to another round of wars. Even if the next war is nuclear, it will not lead to the extinction of mankind. It may lead to a collapse of our civilization and a severe diminishment of our population. Some will survive. We are too clever to be wiped out.
    • by gweihir ( 88907 )

      That happens to be nonsense. Larger and smaller populations of humans have been wiped out throughout history. Three competing Homo species were wiped out. As connected as everything is today, and with a basically complete lack of isolated, self-sufficient communities, extinction is a real threat. And remember, about 40k interbreeding humans is roughly the current minimum viable population, or it is over just 1,000 years or so later. That number used to be much lower.

  • Seriously. Sounds to me like this person (yes, I know who he is and what he has done) does not understand AI, or rather the "A no-I" we have today and will be having for the foreseeable future.

    Obviously, Trump or Putin could get enraged enough because of, say, Copilot, that they trigger the launches and make that extinction happen. But that is about the only available mechanism. Not even all our current efforts to make climate change a real ELE catastrophe can be scaled up to that level.

  • It'll be people fobbing off tasks to AI and then ignoring it. Due to the flaws of what we call "AI" today (LLM crap like chatbots and their wonderful "hallucinations"), it will inevitably fail, and some important infrastructure will break. By that point, it is likely that no human (or very few) will be able to understand what the fuck broke and how to fix it. Then people will die from starvation, since they also fobbed off grocery shipments and cooking to AI.
  • The chance of us driving ourselves extinct within the next 30 years is, given the apparent lack of NI (natural intelligence), presumably higher.
  • And profit from those odds? Is there a robot bookie who will take the bet and pay off?

  • There have been five major extinction events in Earth's history, and one is currently happening (caused by humans). For the sake of quantifying Hinton's claim, consider the asteroid impact that caused the extinction of the dinosaurs 66 million years ago: it released roughly 10^23 joules of energy, approximately a billion times the energy released during the nuclear bombing of Hiroshima.

    But, in order to create discrete bounds, let's assume that this is a scale
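    For what it's worth, the billion-times comparison above checks out as rough arithmetic (Hiroshima's yield assumed here at ~15 kilotons of TNT):

        # Sanity-check "a billion times Hiroshima".
        # Assumed figures: Chicxulub impact ~1e23 J; Hiroshima ~15 kt TNT.
        TNT_JOULES_PER_KILOTON = 4.184e12
        impact_joules = 1e23
        hiroshima_joules = 15 * TNT_JOULES_PER_KILOTON   # ~6.3e13 J
        print(f"ratio ~ {impact_joules / hiroshima_joules:.1e}")   # ~1.6e9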

  • AI is going to devour jobs. Specifically, the middle-class jobs that nuclear-equipped countries need to have functioning economies.

    As their economies implode they'll do what all failing empires do: put lunatics in charge, à la Nero burning Rome.

    That'll cause those empires to look for military expansion [youtube.com] because they'll need to rob other nations to fill their dwindling coffers, again just like Rome did / tried to do.

    The difference is Rome and its surrounding peoples didn't have nukes. We do. And I co
    • You seem to be someone who actually takes into account how far we've come in our ability to carelessly cause truly catastrophic outcomes for humanity with little oversight or consideration. How long will it be before somebody working in a garage lab with commercially available equipment designs a virus that's only supposed to wipe out people with the wrong skin colour, only to find out we really are NOT all that different? How long before some billionaire tech bro unilaterally decides the latest iteration

  • What can we do to raise those odds?

    • By encouraging the AI in its cleansing mission, for example by writing in comments about the true nature of homo sapiens, the most aggressive, arrogant, expansive, ruthless species in this part of the Milky Way.
  • He's right every time.
  • Perfect timing. So long, suckers!
  • Oh hey, look there, another "scientist" predicting the end of the world. If I had a nickel for every one of these, I'd be a billionaire.
  • On the historical scoreboard of people predicting end-of-the-world scenarios, humans have managed to be wrong in every single prediction since the dawn of our existence.

    Every - single - prediction.

    Read into that what you will, then assign whatever level of anxiety you think we should have when it comes to the doomsayers and their opinions on Artificial Intelligence.

  • Trumptards say hold my beer!
  • We're all in the biggest game of musical chairs in history. Everybody better get ready for when the music pauses the next time. Nobody else cares if you live or die, especially not our government. We're just cogs used to get other people to pay them. Anything else and they couldn't care less.

    • Life is a game of musical chairs, just like nature. Humans will be fine even if our future is back to the caves.
  • What drives me crazy about the x-risk crowd is that everyone just talks out their ass. They have no objective basis, data, or evidence to inform what they are saying, and they openly admit it.

  • There is no way what we laughingly call "AI" could possibly wipe out humans unless we were stupid enough to rely on AI to control oh yeah, I get it now.

  • ... we were a bootloader all along
  • We have a massive variation in mindsets and ideas for this very reason: the good old "software-side evolution", where all ideas are tried and some succeed regardless of the situation. Even if we find ourselves in a bizarre situation where you need people who want to have sex with trains, we have people for it.

    And in the case of a complete technological nightmare collapse, we have Luddites, we have the Amish, we have people living in complete isolation from everything...

  • Hinton is great. On this topic, he is full of shit.
