Eric Schmidt Thinks AI Is As Powerful As Nukes

An anonymous reader quotes a report from Motherboard: Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually assured destruction that keeps the world's most powerful countries from destroying each other. Schmidt talked about the dangers of AI at the Aspen Security Forum during a panel on national security and artificial intelligence on July 22. While fielding a question about the value of morality in tech, Schmidt explained that he himself had been naive about the power of information in the early days of Google. He then called for tech to be better aligned with the ethics and morals of the people it serves, and made a bizarre comparison between AI and nuclear weapons.

Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. "In the 50s and 60s, we eventually worked out a world where there was a 'no surprise' rule about nuclear tests and eventually they were banned," Schmidt said. "It's an example of a balance of trust, or lack of trust, it's a 'no surprises' rule. I'm very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing will allow people to say 'Oh my god, they're up to something,' and then begin some kind of conundrum. Begin some kind of thing where, because you're arming or getting ready, you then trigger the other side. We don't have anyone working on that and yet AI is that powerful."

He pointed to the 1950s and '60s, when diplomacy crafted a series of controls around the deadliest weapons on the planet. But for the world to get to a place where it instituted the Nuclear Test Ban Treaty, SALT II, and other landmark agreements, it took decades of nuclear explosions and, critically, the destruction of Hiroshima and Nagasaki. America's destruction of those two Japanese cities at the end of World War II killed tens of thousands of people and proved to the world the everlasting horror of nuclear weapons. The governments of Russia and China then rushed to acquire weapons of their own. The way we live with the possibility that these weapons will be used is through something called mutual assured destruction (MAD), a theory of deterrence which holds that if one country launches a nuke, every other country may launch as well. We don't use the most destructive weapon on the planet because doing so might destroy, at the very least, civilization around the globe.
"The problem with AI is not that it has the potentially world destroying force of a nuclear weapon," writes Motherboard's Matthew Gault. "It's that AI is only as good as the people who designed it and that they reflect the values of their creators. AI suffers from the classic 'garbage in, garbage out' problem: Racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile..."

"AI is a reflection of its creator. It can't level a city in a 1.2 megaton blast. Not unless a human teaches it to do so."
  • by thragnet ( 5502618 ) on Monday July 25, 2022 @04:39PM (#62733068)

    be immolated!

    • Me from 2010: "Recognizing irony is key to transcending militarism "
      https://pdfernhout.net/recogni... [pdfernhout.net]
      ========
      Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?

      Nuclear weapons are ironic because they are about using space-age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources...

  • Is it that far-fetched to worry more about humanity destroying itself from AI turning on us than from nuclear war? AI needs safeguards. We are close to building a pseudo-sentience that needs to be prevented from acting immorally (hurting people) out of self preservation.

    • by Anonymous Coward on Monday July 25, 2022 @04:47PM (#62733096)

      We are close to building a pseudo-sentience

      No, we aren't.

      • We are closer to profitable fusion than a “strong” AI. Meat isn’t magic, though; unless there is a technological collapse, we will eventually achieve both. It’s just a matter of time.
        • by phantomfive ( 622387 ) on Monday July 25, 2022 @07:59PM (#62733468) Journal

          We are closer to profitable fusion than a “strong” AI.

          Methodologically, this is completely true. We have the theoretical parts needed for fusion, it's just an engineering problem to make it profitable. Whereas we don't have a theoretical path to AI.

          Of course, a breakthrough could happen in someone's garage any time, but we don't know when that will be.

          • by gweihir ( 88907 )

            We have the theoretical parts needed for fusion, it's just an engineering problem to make it profitable. Whereas we don't have a theoretical path to AI.

            Of course, a breakthrough could happen in someone's garage any time, but we don't know when that will be.

            We also know that artificial fusion is possible because there are examples. We know no such thing for AGI. As there is something fundamental missing for a path to AGI, there are several options: a) we find out it is impossible, b) we find out it is possible but impractical, c) we find out it is possible and practical, d) we find nothing out. Note that a), b) and c) require fundamental theoretical breakthroughs. These can be anywhere from 5 minutes to 100'000 years (or longer) away. Option d) is a real possibility...

        • by gweihir ( 88907 )

          Meat isn’t magic though,

          That is your physicalist belief. Science says the question is open.

          • There has yet to be a single observation demonstrating supernaturalism, yet it would be extremely easy to verify, if true, in many cases. Saying magical thinking can be real is like saying glass powder will be explosively expelled from the ground by a convergence of random forces, including sound waves, that blasts all the particles perfectly back into the shape of a glass and gives it the precise kinetic energy to fly back up onto the table. Physics says it’s just as possible forwards as backwards, but the odds of
    • Computers that control things need safeguards. Billionaires get drunk together talking about all the amazing things AI can and will be able to do, which is mostly playing video games or board games. They probably also just rewatched The Matrix or Terminator or something.
    • This thread highlights the great uncertainty over whether all those AI experts who are totally disdainful of the dangers of AI are aware of its negative potentials. My fears lie not so much in the superior mental capabilities of AI but in the fact that its influence is now interconnected with everything we do. That the innocent redesign of a toothpick might wreck a subway system or cause cancer seems totally far-fetched but may not be a fantasy. It's the old fairy-tale problem of the three wishes, and we all may end
    • by Mspangler ( 770054 ) on Monday July 25, 2022 @06:38PM (#62733310)

      Computers that can control things need mechanical interlocks to keep them in line. The idiot computer on the 737 Max is a fine example. A chemical plant has hardwired temperature switches, mechanical relief valves, and limit switches to keep the control system from making a mess.

      Don't build any computer that does not have an off switch, a mistake they made on a Star Trek episode.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

    • Is it that far-fetched to worry more about humanity destroying itself from AI turning on us than from nuclear war? AI needs safeguards.

      When we have AI, it will need safeguards. We can barely model the intelligence of a worm.

      We are close to building a pseudo-sentience that needs to be prevented from acting immorally (hurting people) out of self preservation.

      [citation needed]

    • by gweihir ( 88907 )

      We are close to building a pseudo-sentience that needs to be prevented from acting immorally (hurting people) out of self preservation.

      No, we are not. We do not even know whether that is possible, and we do not have any credible theory of how it could work. Throwing more transistors at the problem did exactly nothing, as expected by anybody that actually has some understanding of the subject matter. It is not a question of computing power or "knowledge" database size either. Sure, the clueless can be fooled more easily with that, but the nature of the machine does not change. Computers are about as capable of sentience, "pseudo-sentience", insight...

  • by Anonymous Coward

    If AI is the equivalent of nukes, the government should be regulating it heavily and taking it out of the hands of private corporations. After all, you wouldn't want corporations to be more powerful than the government, would you? Or do we?

    • If AI is the equivalent of nukes, the government should be regulating it heavily and taking it out of the hands of private corporations. After all, you wouldn't want corporations to be more powerful than the government, would you? Or do we?

      It's not an entirely fair comparison, however, as private corporations have little or no need for nuclear weapons, so reserving them for the government has little societal impact. On the other hand, many corporations rely heavily on AI today. In fact, if you have a self-driving car, you have AI of your own. Removing it from the hands of people and private corporations would have a very disruptive effect on our daily lives.

      • You have AI if you have a self-driving car?
        Is this like "you have Windows"? Big difference between being able to task AI with whatever you want and having it in a product with an already assigned task.

        However, you can certainly get AI and give it a task; there are a lot of tools available for free that you can run right now. You just need inputs and outputs, and in that sense you have, or can have, AI today. For instance:
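        Here is a quick sketch using the freely available Hugging Face transformers library (this assumes "pip install transformers torch" and a network connection for the one-time default model download):

            # Off-the-shelf "AI with a task": sentiment classification using a
            # freely downloadable pretrained model.
            from transformers import pipeline

            # Input: a sentence; output: a label plus a confidence score.
            classifier = pipeline("sentiment-analysis")
            print(classifier("AI is as powerful as nukes"))
            # e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]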
      • The thing is, most of the world's population has a lot of faith in the government to use its power in a way that is best for the people. For example, the US's Food and Drug Administration is often criticized for approving drugs that have the potential to cause significant harm to the population. A lot of people think that the FDA is just a rubber stamp for drug companies. However, it is the government's job to protect the population, and if the FDA doesn't approve a drug, that means that the FDA is deeming it...

        • A government that doesn't keep the population safe has a lot of people who would rather opt out of that government.

          Which, in America, is done by voting those people out of office. In Czarist Russia, on the other hand, it is done by murdering the Czar.

          Different strokes for different folks.

    • This guy gets it. AI regulation is simply FAANG wanting to have a monopoly on it.

  • Ants (Score:4, Insightful)

    by packrat0x ( 798359 ) on Monday July 25, 2022 @04:59PM (#62733120)

    AI is as smart as ants, and less powerful.

    • Ants can self-heal and replicate. Meanwhile, no matter how many computers I stack in a nice warm small space and give them some time alone, I still just wind up with the same number. On the plus side, I don’t have to worry about computers eating my shed. Yet.
    • by bgarcia ( 33222 )

      Current AI is nowhere near as smart as ants. Current AI is nothing but "fancy pattern recognition".

      You could throw ants into a new, unknown environment (like a house), and they would figure out how to make a colony and find food and survive. But you couldn't take a "chatbot" and expect it to play "chess", or take a chess-playing AI and expect it to figure out which pictures are of cats.

      • by ranton ( 36917 )

        You could throw ants into a new, unknown environment (like a house), and they would figure out how to make a colony and find food and survive. But you couldn't take a "chatbot" and expect it to play "chess", or take a chess-playing AI and expect it to figure out which pictures are of cats.

        Why the different criteria for ants and AI? Taking ants to a new unknown house is like taking a chess-playing AI to a new chessboard it hasn't seen. For your analogy to make sense, you would need to compare the AI to an ant learning to play chess or classify pictures of cats. That is what you are expecting of the AI.

    • by gweihir ( 88907 )

      AI is as smart as ants, and less powerful.

      Actually, ants are a lot smarter, both individually and as a colony, at least when it comes to practical, real-time solutions that you can pack into something that is mobile.

  • by rsilvergun ( 571051 ) on Monday July 25, 2022 @05:12PM (#62733146)
    if you're a techy your whole world is tech, so you think of everything as tech. When your only tool's a hammer, every problem is a nail.

    I think I first noticed it with the "DeFi" crowd who wanted to use computer software to do stuff like automated loan approvals. Didn't take long for one of those DeFi outfits to get... well, I don't want to call it hacked, because a 17-year-old kid basically did an end run around their algorithms and took $17 million from them. More like exploited.

    A.I. is much less of an issue than, say, the American SW water crisis or climate change in general. Or our complete lack of anti-trust law enforcement. Or the fact that our Republic came very close to becoming a monarchy less than 2 years ago. Or...

    I'm not saying it's not potentially a problem, but it's the kind of problem that, if we just solved other societal problems then AI wouldn't be a problem, it'd be a boon.
    • I'm not saying it's not potentially a problem, but it's the kind of problem that, if we just solved other societal problems then AI wouldn't be a problem, it'd be a boon.

      It really depends on the level of AI.

      * Right now, we're at the point where AI can be used to suppress specific kinds of speech online or delude the public with propaganda. This is very dangerous because it allows an individual to get certain kinds of people elected and others demonized. You aren't killing anyone because fools will do it on your behalf.

      * The creation of a general purpose AI will have massive global implications. A dictator could have an unflinching eye everywhere watching every second, this...

      • That's because one of the main things any dictatorship needs to do is find something for moderately intelligent or very intelligent people to do, or kill them. In a modern society, if you kill them like the Khmer Rouge did, your society collapses. So you have to come up with things for them to do. One of those things is wasting time monitoring and censoring speech.

        And make no mistake, it's a complete waste of time. Any modern government that turns to dictatorship is going to easily take full control over me
        • If you don't like it and you're an American, you've got one or maybe two election cycles to do something about it. The authoritarians are trying to win, and once they do they will end democracy and kill our Republic. We all know who they are; I don't even need to name them: it's the Republican Party.

          I'm as anti-Republican as anyone around here, and more than most, but the idea that Democrats are not authoritarians is a dumb one. The Democrats aren't democratic either. If they were, they wouldn't have spoiled the primary for Sanders; he would have won the nomination, and, like the polls said, he could have beaten Trump.

          Election cycles are irrelevant to the problem of authoritarianism in America, because both of the parties we're offered are authoritarian.

          The only question is are we going to end the Republic

          Yeah, The Republic that was designed explicitly to

      • To say AI can be used (or is used) is basically to say computers can be used. AI is basically a programming technique. Also, according to the article, the AI takes on the bias of the programmer; but even though the programmer might program such a thing in, really it is bias in the input data that they are referring to.

        A general purpose AI is really a collection of AIs that can decide what the task is and then apply the appropriate AI for that task. Whatever can be done with AI can be done (and has been/is being done) with...
      • by Deal In One ( 6459326 ) on Tuesday July 26, 2022 @03:44AM (#62734184)

        Another problem with comparing AI to nukes and having a treaty to cover it:

        Nukes, when you test them for real, give off all sorts of tell-tale signatures -- from the mushroom clouds of open-air testing to the radiation increases, etc. (even with underground testing) -- which are relatively easy to detect, especially with satellites.

        How are you going to make sure that someone is following an AI Treaty (assuming they do sign such a treaty)? Have representatives stationed 24/7 at all supercomputer locations to check what sort of programs are running? You don't even know all the supercomputers in the world, unless they are revealed to you. And something like the Cerebras can do ML / AI stuff in one rack of computing that used to take a room of computers.

        Now imagine a room of Cerebras-type machines. And nobody else is aware of them. How are you going to monitor those? Or other future breakthroughs in computing which nobody else is aware of?

        It may be easy to sign such treaties. Figuring out if all parties are following the treaty is going to be next to impossible.

        • It may be easy to sign such treaties. Figuring out if all parties are following the treaty is going to be next to impossible.

          I would argue that makes AI more dangerous than nuclear weapons, because its development cannot be detected or stopped. This means nations will be sure to develop AI in order to keep up with opposing nations.

          Whoever creates a generalized AI will win. It may not be a military victory; it could easily be an economic victory, leaving other nations unable to compete. I could see a nation like China keeping such an invention secret in order to prevent any backlash from AI-designed manufacturing processes that...

          • You are thinking in terms of countries only. Think wider.

            Especially since you don't really need a "super power" to work on it.

            Maybe malware which uses your computing resources to work on ML stuff, so if they manage to infect a lot of systems, they can get huge computing capabilities for "free" (latency may suck, and it may need lots of redundancy, but if you are lacking in resources, slower AI may still be better than no AI).

            Or maybe some kid hooks up a bunch of devices together as a Beowulf cluster and creates...

            • You are thinking in terms of countries only. Think wider.

              You are thinking in terms of fantasies. Think realistically.

              The underlying structure required for a generalized AI may be discovered by accident, but the construction of it will be entirely intentional.

    • Or the fact that our Republic came very close to becoming a monarchy less than 2 years ago.

      No. Even the people who were invading the Capitol thought they were stopping tyranny. They were deluded of course, but very few Americans want monarchy. (For the ones who do want a monarchy, it's because they want to be the monarch).

      • You'll find many Americans want a monarchy, so long as they don't have to call it that and so long as they can pretend to go to the polls every 4 years. It's a monarchy similar to Russia's or North Korea's, where technically there are elections but anyone who could actually mount a challenge is disappeared. I think about 20% of Americans want that, as long as it's their guy in charge. Fundamentally, right-wing politics is about hierarchies, and a lot of people like the idea of a strong hierarchy because...
      • by ranton ( 36917 )

        No. Even the people who were invading the Capitol thought they were stopping tyranny.

        You need to do more reading on how autocracies and fascists come to power (monarchy is admittedly the wrong word, though). You seem to think it is usually caused by mustache-twirling super-villains. In reality, they are caused by normal, moral people who think they are saving their nation from internal and/or external threats, whether that be immigrants, Jews, or any number of made-up or exaggerated threats.

        It is actually quite impressive that the US was able to weather the storm of the populist uprising that

        • You seem to think it is usually caused by mustache twirling super-villains.

          That is what Trump looks like to me.

    • A.I. is much less of an issue than, say, the American SW water crisis or climate change in general. Or our complete lack of anti-trust law enforcement. Or the fact that our Republic came very close to becoming a monarchy less than 2 years ago. Or...

      AGI with the ability to improve itself (the "singularity") is a far greater risk than any of those things. None of those can cause the extinction of the human race, while a superintelligence with slightly misaligned goals might well erase us just because we complicate its work... or just because it needs the mass that we depend on for life (water, oxygen, etc.).

      How likely is such a superintelligence? That's basically impossible to know. We don't know if intelligence can scale arbitrarily, or if there are some upper limits -- potentially not even much higher than human level.

      • They have developed the amazing technology to manufacture novel viruses, like SARS-CoV-2 that causes Covid-19. Done in part using humanized mice! Incredible.

        It will be decades before computers become truly intelligent, and the virologists have a good head start. If they succeed in wiping out humanity first there will be no general artificial intelligence.

        I reckon it is 70/30 that the AI will win, but the virologists are in with a good chance to stop it.

        www.orginofcovid.org

      • We don't know if intelligence can scale arbitrarily, or if there are some upper limits -- potentially not even much higher than human level.

        We know a lot more than you imply here.
        - We know that the difference in capabilities and competence between a particularly dumb human being and a particularly intelligent human being is huge.
        - We know that the difference in capabilities and competence between a particularly dumb human being and any of the primates is huge.
        - We know that the difference in hardware/wetware in both previous cases is tiny and not even remotely close to being some kind of freakish exception case in physics.

        Given the right insight...

      • The idiotic idea of a singularity is NOT what Eric Schmidt is talking about as far as AI is concerned.

  • by Big Bipper ( 1120937 ) on Monday July 25, 2022 @05:32PM (#62733178)
    Expertise in one field does not carry over into other fields. But experts often think so. The narrower their field of knowledge the more likely they are to think so. Lazarus Long ( Robert A. Heinlein )
    • And that's exactly why a journalist like Matthew Gault shouldn't be relied upon when they dismiss a risk that a wide variety of experts think has the potential to annihilate humanity [wikipedia.org].

    • by gweihir ( 88907 )

      Expertise in one field does not carry over into other fields. But experts often think so. The narrower their field of knowledge the more likely they are to think so. Lazarus Long ( Robert A. Heinlein )

      Very true. To be an expert in two fields, you need ... to become an expert in two fields. Sure, there are meta-skills like general scientific, engineering or analytical approaches, but they only speed up the process of becoming an expert in another field. There are a lot of things you still need to learn.

      I do not think Eric Schmidt is an expert in either nukes or AI, though.

  • So, he's asking for people/countries/orgs that don't have AI already to be banned from getting any? Huh. The "I Got Mine" attitude.

  • giant apples to giant oranges.

  • ... mutual assured destruction (MAD), a theory of deterrence ...

    Except countries worked to prevent MAD and even declared a nuclear holocaust "winnable". The worldwide damage to the planet means nuclear war doesn't decide who won; it decides who survived.

  • https://en.wikipedia.org/wiki/... [wikipedia.org] We gotta make drones to fight the drones.
  • Read this 1988 sci-fi classic by Marc Stiegler. https://www.amazon.com/Davids-... [amazon.com] In this terrific visionary novel, US weapon developers use off-the-shelf parts to create "decapitating" attacks with smart drones that simply attack and destroy the chain of command. Now think about that. What has happened since 1988? Check my math here: per Moore's Law, a desktop CPU today is about 6.7 million times more powerful than one was in 1988, i.e. 34 years * 12 months / 18 months = about 22.667 doubling times.
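    The arithmetic in the parent checks out; a quick sketch (assuming the classic 18-month doubling period, which is itself only a rule of thumb):

      # Checking the parent's Moore's-law arithmetic.
      years = 2022 - 1988            # 34 years since the novel
      doublings = years * 12 / 18    # ~22.667 doubling periods
      factor = 2 ** doublings
      print(f"{doublings:.3f} doublings -> {factor / 1e6:.1f} million x")
      # -> 22.667 doublings -> 6.7 million x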
  • by sonoronos ( 610381 ) on Tuesday July 26, 2022 @01:47AM (#62734002)

    I hate how these inane tech bloggers start with some legitimate quote from a legitimate source and then try to insert their own woke claptrap in order to sell their point of view.

    Going from the existential danger of US-China relations leading to mutually assured destruction over AI gone wrong, straight to facial recognition systems not being able to accurately identify some faces over others, not only trivializes the actual subject of the article, but displays the vapid self-centeredness and self-referentialism of journalism today.

    Or perhaps, more fittingly, this Matthew Gault fellow has just demonstrated the same ignorance that Eric Schmidt is warning against - that banal politicizing blinds us from true mortal dangers.

  • A bloated hunk of intelligence simulation code incinerates a major city with all of its people, and leaves millions more blinded and burned in the surrounding miles, and does it without using anything nuclear (if it happens with nukes then it's just an example of the power of nukes in the hands of reckless humans who deployed the AI).

    This is the problem with this sort of drivel. Eric Schmidt has never been particularly intelligent -- more of a guy who was in the right place at the right time. Idiots like Schmidt...

  • Racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile...

    Biases indeed, lol

    There will, of course, be no AIs camping out outside of Supreme Court Justices' homes or anything like that... or if they do, it will just be the product of sweet reason...

  • by henrik stigell ( 6146516 ) on Tuesday July 26, 2022 @08:36AM (#62734680) Homepage
    "It's that AI is only as good as the people who designed it and that they reflect the values of their creators. AI suffers from the classic 'garbage in, garbage out' problem"

    No, a good AI is unpredictable; that is the whole point. Woke journalists should follow Pliny the Elder's advice, Sutor, ne ultra crepidam, and write about scandals in the influencer world or something like that.

    • Agreed. With a powerful enough AI, it's not known what is going on inside. It's a black box. That's a downside, because if it doesn't work as desired, it might be difficult to find out why.

    • No, no, no. This is a common but false understanding of AI, propagated by people who read too much science fiction. While explainability is often a question, the fundamental reality is that AI does not go beyond its designed intent. It is executing an objective function by definition, and that objective function is mathematically well understood.

      The danger of AI that Eric Schmidt is talking about is that it is totally, completely based on the intent of its implementation. There is no magic super intelligence...
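      As a minimal sketch of that point (one hypothetical parameter, made-up data): even a "trained" model is just whatever minimizes the stated objective, and the gradient descent below does nothing beyond stepping downhill on the loss it was given.

        # "AI" as nothing more than minimizing a stated objective function:
        # a one-parameter least-squares fit by gradient descent.
        data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

        def loss(w):
            """The objective: mean squared error of the model y = w * x."""
            return sum((w * x - y) ** 2 for x, y in data) / len(data)

        def grad(w):
            """Analytic gradient of the objective with respect to w."""
            return sum(2 * (w * x - y) * x for x, y in data) / len(data)

        w = 0.0
        for _ in range(200):
            w -= 0.05 * grad(w)  # step downhill on the objective -- nothing else

        print(f"w = {w:.3f}, loss = {loss(w):.4f}")  # w converges near 2.0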

  • What AI? I have yet to see anything with AI in it.
    Even in things advertised as AI, I see nothing but a very dense IF tree.
  • Did Schmidt just watch Chappie (2015)?

  • I used to respect Eric Schmidt.

  • I don't know why people imagine AI is dangerous in some nebulous future way. It's obvious that autonomous tanks, drones, subs, and other killing machines can create a very lopsided battlefield. No breakthroughs in the technology are required, just the work to implement the systems.
