Eric Schmidt Thinks AI Is As Powerful As Nukes
An anonymous reader quotes a report from Motherboard: Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually assured destruction that keeps the world's most powerful countries from destroying each other. Schmidt talked about the dangers of AI at the Aspen Security Forum at a panel on national security and artificial intelligence on July 22. While fielding a question about the value of morality in tech, Schmidt explained that he himself had been naive about the power of information in the early days of Google. He then called for tech to be better in line with the ethics and morals of the people it serves and made a bizarre comparison between AI and nuclear weapons.
Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. "In the 50s and 60s, we eventually worked out a world where there was a 'no surprise' rule about nuclear tests and eventually they were banned," Schmidt said. "It's an example of a balance of trust, or lack of trust, it's a 'no surprises' rule. I'm very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing will allow people to say 'Oh my god, they're up to something,' and then begin some kind of conundrum. Begin some kind of thing where, because you're arming or getting ready, you then trigger the other side. We don't have anyone working on that and yet AI is that powerful."
Schmidt imagined a near future where both China and the U.S. would have security concerns that force a kind of deterrence treaty between them around AI. He speaks of the 1950s and '60s, when diplomacy crafted a series of controls around the most deadly weapons on the planet. But for the world to get to a place where it instituted the Nuclear Test Ban Treaty, SALT II, and other landmark agreements, it took decades of nuclear explosions and, critically, the destruction of Hiroshima and Nagasaki. The bombings of those two Japanese cities at the end of World War II killed tens of thousands of people and proved to the world the everlasting horror of nuclear weapons. The governments of Russia and China then rushed to acquire the weapons. The way we live with the possibility these weapons will be used is through something called mutual assured destruction (MAD), a theory of deterrence holding that if one country launches a nuke, every other country may launch too. We don't use the most destructive weapon on the planet because of the possibility that doing so will destroy, at the very least, civilization around the globe. "The problem with AI is not that it has the potentially world-destroying force of a nuclear weapon," writes Motherboard's Matthew Gault. "It's that AI is only as good as the people who designed it and that it reflects the values of its creators. AI suffers from the classic 'garbage in, garbage out' problem: Racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile..."
"AI is a reflection of its creator. It can't level a city in a 1.2 megaton blast. Not unless a human teaches it to do so."
Don't be evil... (Score:3)
be immolated!
More like "Don't be ironic" (Score:2)
Me from 2010: "Recognizing irony is key to transcending militarism "
https://pdfernhout.net/recogni... [pdfernhout.net]
========
Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources?
AI needs safeguards (Score:1)
Is it that far-fetched to worry more about humanity destroying itself from AI turning on us than from nuclear war? AI needs safeguards. We are close to building a pseudo-sentience that needs to be prevented from acting immorally (hurting people) out of self-preservation.
Re:AI needs safeguards (Score:4, Informative)
We are close to building a pseudo-sentience
No we aren't.
Re: (Score:2)
Re:AI needs safeguards (Score:5, Interesting)
We are closer to profitable fusion than a “strong” AI.
Methodologically, this is completely true. We have the theoretical parts needed for fusion, it's just an engineering problem to make it profitable. Whereas we don't have a theoretical path to AI.
Of course, a breakthrough could happen in someone's garage any time, but we don't know when that will be.
Re: (Score:2)
We have the theoretical parts needed for fusion, it's just an engineering problem to make it profitable. Whereas we don't have a theoretical path to AI.
Of course, a breakthrough could happen in someone's garage any time, but we don't know when that will be.
We also know that artificial fusion is possible because there are examples. We know no such thing for AGI. As there is something fundamental missing for a path to AGI, there are several options: a) we find out it is impossible, b) we find out it is possible but impractical, c) we find out it is possible and practical, or d) we find nothing out. Note that a), b) and c) require fundamental theoretical breakthroughs. These can be anywhere from 5 minutes to 100,000 years (or longer) away. Option d) is a real possibility.
Re: (Score:2)
. d) we find nothing out
That's a little depressing.
Re: (Score:2)
Meat isn’t magic though,
That is your physicalist belief. Science says the question is open.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Re:AI needs safeguards (Score:4, Insightful)
Computers that can control things need mechanical interlocks to keep them in line. The idiot computer on the 737 Max is a fine example. The chemical plant has hardwired temperature switches and mechanical relief valves and limit switches to keep the control system from making a mess.
Don't build any computer that does not have an off switch, a mistake they made on a Star Trek episode.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
I'll raise with Colossus: The Forbin Project [wikipedia.org].
Re: (Score:2)
Is it that far-fetched to worry more about humanity destroying itself from AI turning on us than from nuclear war? AI needs safeguards.
When we have AI, it will need safeguards. We can barely model the intelligence of a worm.
We are close to building a pseudo-sentience that needs to be prevented from acting immorally (hurting people) out of self-preservation.
[citation needed]
Re: (Score:2)
We are close to building a pseudo-sentience that needs to be prevented from acting immorally (hurting people) out of self-preservation.
No, we are not. We do not even know whether that is possible, and we do not have any credible theory of how it could work. Throwing more transistors at the problem did exactly nothing, as expected by anybody that actually has some understanding of the subject matter. It is not a question of computing power or "knowledge" database size either. Sure, the clueless can be fooled more easily with that, but the nature of the machine does not change. Computers are about as capable of sentience, "pseudo-sentience", insight
Regulations... (Score:1)
If AI is the equivalent of nukes, the government should be regulating it heavily and taking it out of the hands of private corporations. After all, you wouldn't want corporations to be more powerful than the government, would you? Or do we?
Re: (Score:3)
If AI is the equivalent of nukes, the government should be regulating it heavily and taking it out of the hands of private corporations. After all, you wouldn't want corporations to be more powerful than the government, would you? Or do we?
It's not an entirely fair comparison, however, as private corporations have little or no need for nuclear weapons, so reserving them for the government has little societal impact. On the other hand, many corporations rely heavily on AI today. In fact, if you have a self-driving car, you have AI of your own. Removing it from the hands of people and private corporations would have a very disruptive effect on our daily lives.
Re: (Score:1)
Is this like saying you have Windows? There's a big difference between being able to task AI with whatever you want and having it in a product with an already assigned task.
However, you can certainly get AI and give it a task; there are a lot of tools available for free that you can run right now. You just need inputs and outputs, and in that sense you have, or can have, AI today.
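A minimal sketch of that inputs-and-outputs point (assuming Python and the freely available scikit-learn library; the iris dataset here is just a hypothetical stand-in for whatever inputs you actually care about):

# Minimal sketch: a free, locally runnable "AI" with inputs and outputs.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                     # inputs and known outputs
model = LogisticRegression(max_iter=1000).fit(X, y)   # give it its task
print(model.predict(X[:3]))                           # outputs for new inputs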
Re: (Score:2)
The thing is, most of the world's population has a lot of faith in the government to use its power in a way that is best for the people. For example, the US's Food and Drug Administration is often criticized for approving drugs that have the potential to cause significant harm to the population. A lot of people think that the FDA is just a rubber stamp for drug companies. However, it is the government's job to protect the population, and if the FDA doesn't approve a drug, that means that the FDA is deeming i
Re: (Score:2)
A government that doesn't keep the population safe has a lot of people who would rather opt out of that government.
Which, in America, is done by voting those people out of office. In Czarist Russia, on the other hand, it is done by murdering the Czar.
Different strokes for different folks.
Re: (Score:2)
Regardless of the despicability of the methods and outcome, you have to admit it was effective at getting them out of office.
Re: (Score:1)
This guy gets it. AI regulation is simply FAANG wanting to have a monopoly on it.
Ants (Score:4, Insightful)
AI is as smart as ants, and less powerful.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Current AI is nowhere near as smart as ants. Current AI is nothing but "fancy pattern recognition".
You could throw ants into a new, unknown environment (like a house), and they would figure out how to make a colony and find food and survive. But you couldn't take a "chatbot" and expect it to play "chess", or take a chess-playing AI and expect it to figure out which pictures are of cats.
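To make that narrowness concrete, here is a minimal sketch (assuming Python and scikit-learn; the random noise is a hypothetical stand-in for a cat picture):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Train a narrow "digit recognizer" on scikit-learn's built-in digits.
digits = load_digits()
clf = LogisticRegression(max_iter=5000).fit(digits.data, digits.target)

# Hand it something far outside its world: random noise.
# It still answers with a digit 0-9; it cannot say "this is not a digit",
# let alone adapt to a new task the way ants adapt to a new house.
noise = np.random.default_rng(0).uniform(0, 16, size=(1, 64))
print(clf.predict(noise))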
Re: (Score:2)
You could throw ants into a new, unknown environment (like a house), and they would figure out how to make a colony and find food and survive. But you couldn't take a "chatbot" and expect it to play "chess", or take a chess-playing AI and expect it to figure out which pictures are of cats.
Why the different criteria for ants and AI? Taking ants to a new unknown house is like taking a chess playing AI to a new chessboard they haven't seen. For your analogy to make sense, you would need to compare the AI to an ant learning to play chess or classify pictures of cats. That is what you are expecting of the AI.
Re: (Score:2)
AI is as smart as ants, and less powerful.
Actually, ants are a lot smarter, both individually and as a colony, at least when it comes to practical, real-time solutions that you can pack into something that is mobile.
I see this same reasoning here on /. all the time (Score:5, Insightful)
I think I first noticed it with the "DeFi" crowd who wanted to use computer software to do stuff like automated loan approvals. Didn't take long for one of those DeFi outfits to get... well, I don't want to call it hacked, because a 17-year-old kid basically did an end-run around their algorithms and took $17 million from them. More like exploited.
A.I. is much less of an issue than, say, the American Southwest's water crisis or climate change in general. Or our complete lack of anti-trust law enforcement. Or the fact that our Republic came very close to becoming a monarchy less than 2 years ago. Or...
I'm not saying it's not potentially a problem, but it's the kind of problem where, if we just solved our other societal problems, AI wouldn't be a problem; it'd be a boon.
Re: (Score:2)
I'm not saying it's not potentially a problem, but it's the kind of problem where, if we just solved our other societal problems, AI wouldn't be a problem; it'd be a boon.
It really depends on the level of AI.
* Right now, we're at the point where AI can be used to suppress specific kinds of speech online or delude the public with propaganda. This is very dangerous because it allows an individual to get certain kinds of people elected and others demonized. You aren't killing anyone because fools will do it on your behalf.
* The creation of a general purpose AI will have massive global implications. A dictator could have an unflinching eye everywhere watching every second, thi
You don't want or need AI to suppress speech (Score:2)
And make no mistake, it's a complete waste of time. Any modern government that turns to dictatorship is going to easily take full control over me
Re: (Score:2)
If you don't like it and you're an American, you've got one or maybe two election cycles to do something about it. The authoritarians are trying to win, and once they do they will end democracy and kill our Republic. We all know who they are; I don't even need to name them: it's the Republican Party
I'm as anti-Republican as anyone around here, and more than most, but the idea that Democrats are not authoritarians is a dumb one. The Democrats aren't democratic either. If they were, they wouldn't have spoiled the primary for Sanders; he would have won the nomination, and like the polls said, he could have beaten Trump.
Election cycles are irrelevant to the problem of authoritarianism in America, because both of the parties we're offered are authoritarian.
The only question is are we going to end the Republic
Yeah, The Republic that was designed explicitly to
Re: (Score:1)
A general purpose AI is really a collection of AIs that can decide what the task is and then apply the appropriate AI for the task. Whatever can be done with AI can be done (and has been/is being done) wi
Re: (Score:2)
Instead of worrying about AI
I'm not worried about it, I know it's a more advanced version of the things we have now. If you think laws will stop it then you are naive.
AI is not really intelligent today ... (Score:2)
so it can never be intelligent in the future. QED.
Re: (Score:1)
Re:I see this same reasoning here on /. all the ti (Score:4, Insightful)
Another problem with comparing AI to nukes and having a treaty to cover it:
Nukes, when you test them for real, give off all sorts of tell-tale signatures, from the mushroom clouds of open-air testing to the radiation increases, etc. (even with underground testing), which are relatively easy to detect, especially with satellites.
How are you going to make sure that someone is following an AI Treaty (assuming they do sign such a treaty)? Have representatives stationed 24/7 at all supercomputer locations to check what sort of programs are running? You don't even know all the supercomputers in the world, unless they are revealed to you. And something like the Cerebras can do ML / AI stuff in one rack of computing that used to take a room of computers.
Now imagine a room of Cerebras-type machines. And nobody else is aware of them. How are you going to monitor those? Or other future breakthroughs in computing which nobody else is aware of?
It may be easy to sign such treaties. Figuring out if all parties are following the treaty is going to be next to impossible.
Re: (Score:2)
It may be easy to sign such treaties. Figuring out if all parties are following the treaty is going to be next to impossible.
I would argue that makes AI more dangerous than nuclear weapons, because its development cannot be detected or stopped. This means nations will be sure to develop AI in order to keep up with opposing nations.
Whoever creates a generalized AI will win. It may not be a military victory as it could easily be an economic victory, leaving other nations unable to compete. I could see a nation like China keeping such an invention secret in order to prevent any backlash from AI designed manufacturing processes that
Re: (Score:2)
You are thinking in terms of countries only. Think wider.
Especially since you don't really need a "super power" to work on it.
Maybe malware which uses your computing resources to work on ML stuff, so if they manage to infect a lot of systems, they can get huge computing capabilities for "free" (latency may suck, and it may need lots of redundancy, but if you are lacking in resources, slower AI may still be better than no AI).
Or maybe some kid hooks up a bunch of devices together as a beowulf cluster and creates
Re: (Score:2)
You are thinking in terms of countries only. Think wider.
You are thinking in terms of fantasies. Think realistically.
The underlying structure required for a generalized AI may be discovered by accident, but the construction of it will be entirely intentional.
Re: (Score:2)
Or the fact that our Republic came very close to becoming a monarchy less than 2 years ago.
No. Even the people who were invading the capitol thought they were stopping tyranny. They were deluded of course, but very few Americans want monarchy. (For the ones who do want a monarchy, it's because they want to be the monarch).
I think you would be shocked (Score:1)
Re: (Score:2)
What were those Jan 6 protestors arrested for? Go back and look up the charges for the people arrested during the so-called insurrection. You'll notice they were moronic trespassers. The only difference between them and the idiots banging on the doors during the Kavanaugh confirmation is that on Jan 6, Capitol police opened the doors.
40 defendants have been charged with conspiracy charges, and 275 have been charged with obstructing, influencing, or impeding official federal election proceedings.
That is what insurrection looks like in our court of law. You aren't going to see many if any rebellion, seditious conspiracy, or treason charges, because the bar to prove them is so high. This is similar to how Al Capone went to jail for tax evasion instead of all his other crimes: because that was easier to prove in court and came with sufficie
Re: (Score:2)
No. Even the people who were invading the capitol thought they were stopping tyranny.
You need to do more reading on how autocracies and fascists come to power (monarchy is admittedly the wrong word, though). You seem to think it is usually caused by mustache-twirling super-villains. In reality, they are caused by normal, moral people who think they are saving their nation from internal and/or external threats. Whether that be immigrants, Jews, or any number of made-up or exaggerated threats.
It is actually quite impressive that the US was able to weather the storm of the populist uprising that
Re: (Score:2)
You seem to think it is usually caused by mustache-twirling super-villains.
That is what Trump looks like to me.
Re: (Score:3)
A.I. is much less of an issue than, say, the American SW water crisis or climate change in general. Or our complete lack of anti-trust law enforcement. Or the fact that our Republic came very close to becoming a monarchy less than 2 years ago. Or...
AGI with the ability to improve itself (the "singularity") is a far greater risk than any of those things. None of those can cause the extinction of the human race, while a superintelligence with slightly misaligned goals might well erase us just because we complicate its work... or just because it needs the mass that we depend on for life (water, oxygen, etc.).
How likely is such a superintelligence? That's basically impossible to know. We don't know if intelligence can scale arbitrarily, or if there are
Do not forget the virologists (Score:1)
They have developed the amazing technology to manufacture novel viruses, like SARS-CoV-2 that causes Covid-19. Done in part using humanized mice! Incredible.
It will be decades before computers become truly intelligent, and the virologists have a good head start. If they succeed in wiping out humanity first, there will be no general artificial intelligence.
I reckon it is 70/30 that the AI will win, but the virologists are in with a good chance to stop it.
www.orginofcovid.org
Re: (Score:2)
We don't know if intelligence can scale arbitrarily, or if there are some upper limits -- potentially not even much higher than human level.
We know a lot more than you imply here.
- We know that the difference in capabilities and competence between a particularly dumb human being and a particularly intelligent human being is huge.
- We know that the difference in capabilities and competence between a particularly dumb human being and any of the primates is huge.
- We know that the difference in hardware/wetware in both previous cases is tiny and not even remotely close to being some kind of freakish exception case in physics.
Given the right insigh
Re: I see this same reasoning here on /. all the t (Score:2)
The idiotic idea of a singularity is NOT what Eric Schmidt is talking about as far as AI goes.
Expertise in one field ... (Score:4, Insightful)
Re: (Score:2)
And that's exactly why a journalist like Matthew Gault shouldn't be relied upon when they dismiss a risk that a wide variety of experts think has the potential to annihilate humanity [wikipedia.org].
Re: (Score:2)
Actual experts also know that AGI is nowhere near and may not happen, ever.
Re: (Score:2)
Expertise in one field does not carry over into other fields. But experts often think so. The narrower their field of knowledge the more likely they are to think so. - Lazarus Long (Robert A. Heinlein)
Very true. To be an expert in two fields, you need ... to become an expert in two fields. Sure, there are meta-skills like general scientific, engineering or analytical approaches, but they only speed up the process of becoming an expert in another field. There are a lot of things you still need to learn.
I do not think Eric Schmidt is an expert on either nukes or AI, though.
nuclear community = AI community (Score:2)
So, he's asking for people/countries/orgs that don't have AI already to be banned from getting any? Huh. The "I Got Mine" attitude.
Comparing (Score:1)
giant apples to giant oranges.
Winning the holocaust (Score:2)
Except countries worked to get around MAD, even declaring a nuclear holocaust "winnable". The worldwide damage to the planet means nuclear war doesn't decide who won; it decides who survived.
Drone carriers with thousands of armed drones (Score:2)
We're there.
Moore's law not dead & AI explodin (Score:1)
Matthew Gault needs to get over himself. (Score:3)
I hate how these inane tech bloggers start with some legitimate quote from a legitimate source and then try to insert their own woke claptrap in order to sell their point of view.
Going from the existential danger of US-China relations leading to mutually assured destruction over AI gone wrong, straight to facial recognition systems not being able to accurately identify some faces over others, not only trivializes the actual subject of the article, but displays the vapid self-centeredness and self-referentialism of journalism today.
Or perhaps, more fittingly, this Matthew Gault fellow has just demonstrated the same ignorance that Eric Schmidt is warning against - that banal politicizing blinds us to true mortal dangers.
Wake me up when... (Score:2)
A bloated hunk of intelligence simulation code incinerates a major city with all of its people, and leaves millions more blinded and burned in the surrounding miles, and does it without using anything nuclear (if it happens with nukes then it's just an example of the power of nukes in the hands of reckless humans who deployed the AI).
This is the problem with this sort of drivel. Eric Schmidt has never been particularly intelligent - more of a guy being in the right place at the right time. Idiots like schmi
lol true (Score:2)
Racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile...
Biases indeed, lol
There will, of course, be no AIs camping out outside of Supreme Court Justices' homes or anything like that ... or if they do, it will just be the product of sweet reason ...
Matthew Gault is wrong (Score:3)
No, a good AI is unpredictable; that is the whole point. Woke journalists should follow Pliny the Elder's advice, Sutor, ne ultra crepidam, and write about scandals in the influencer world or something like that.
Re: (Score:3)
Agreed. With a powerful enough AI, it's not known what is going on inside. It's a black box. That's a downside, because if it doesn't work as desired, it might be difficult to find out why.
Re: Matthew Gault is wrong (Score:1)
No, no, no. This is a common but false understanding of AI, propagated by people who read too much science fiction. While explainability is often a question, the fundamental reality is that AI does not go beyond its designed intent. It is executing an objective function by definition, and that objective function is mathematically well understood.
The danger of AI that Eric Schmidt is talking about is that it is totally, completely based on the intent of its implementation. There is no magic super int
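A minimal sketch of that objective-function point (assumptions: Python/NumPy and a toy mean-squared-error objective): the system only ever moves downhill on the loss its designers wrote, nothing beyond it.

import numpy as np

# Toy example: the designed intent *is* the objective function.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE objective
    w -= 0.1 * grad                        # strictly follows that objective
print(w)  # close to [1.0, -2.0, 0.5]; behavior fully determined by the loss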
Meh (Score:2)
In everything advertised as AI, I see nothing but a very dense IF tree.
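For what it's worth, a classic decision tree literally is a dense IF tree. A minimal sketch (assuming Python and scikit-learn) that prints a trained model as nested if/else rules:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))  # nested "if feature <= threshold" branches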
Re: (Score:2)
GPT-3?
Did Schmidt just watch.. (Score:2)
Did Schmidt just watch Chappie (2015)?
I used to... (Score:2)
I used to respect Eric Schmidt.
Why is this hard to understand? (Score:1)