
Geoffrey Hinton Says There is 10-20% Chance AI Will Lead To Human Extinction in 30 Years (theguardian.com)
The British-Canadian computer scientist often touted as a "godfather" of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is "much faster" than expected. From a report: Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a "10 to 20" per cent chance that AI would lead to human extinction within the next three decades.
Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity. Asked on BBC Radio 4's Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: "Not really, 10 to 20 [per cent]."
Sounds like an idiot to me (Score:5, Interesting)
The threat is not Skynet, it's universal economic disruption combining with cultural inertia causing societal collapse.
That means life will suck for the vast majority if they don't figure it out pretty quickly. It does not mean extinction of our species.
Re:Sounds like an idiot to me (Score:5, Insightful)
Afaict, your suggestion is well within the scope of what he's saying.
One thing you don't have to be a Nobel Prize winner to see: AI is made for impersonation, so it's pretty easy to see how AI fraud could scale up to some kind of financial disaster. Who knows? Stock market collapse? Encryption broken and sovereign funds drained? All seems fairly reasonable to happen... I think you just said that.
Your comment isn't rocket surgery either, but I'll refrain from calling you an idiot.
Re: (Score:2)
None of the things you mentioned result in extinction level events. Misery, population reduction, etc, but not extinction.
Calling him an idiot was the polite option. Otherwise, he's a cynical amoral greedy bastard fear mongering for the attention and profit that brings.
Re: (Score:3)
None of the things you mentioned result in extinction level events. Misery, population reduction, etc, but not extinction.
The 95% who died in the crisis may not fully appreciate the semantic games. In spirit, since they are dead.
If we can call climate change existential then we can call rogue AI existential.
Re: (Score:2)
Re: (Score:2)
AI doesn't commit fraud, humans do.
"All seems fairly reasonable to happen."
If humans do it. And if that triggers human extinction, it would be humans that caused it, not AI.
People need to grow up, AI is not a bogeyman, it's a computer program. When is the lizard brain going to stop being in charge?
The human race will not go quietly into the night, your Cybertruck isn't going to save you from billions of pitchforks.
Computers need manual override (Score:2)
People need to grow up, AI is not a bogeyman, it's a computer program.
And computer programs never harm people?
...
"A 2019 Ethiopian Airlines plane crash which killed 157 people was caused by a flight software failure as suspected, the country's transport minister said Friday citing the investigators' final report.
Both accidents saw uncontrolled drops in the aircraft's nose in the moments before the planes crashed, which investigators have blamed on the model's anti-stall flight system, the Maneuvering Characteristics Augmentation System, or MCAS."
https://www.barrons.com/ [barrons.com]
Re: (Score:3)
Why the ad hominem attack?
Insults are not inherently Ad Hominem; stop using phrases you don't understand. Ad Hominem is when you assert that something is [un]true because of an insult.* Which is why if I called your claim stupid, it would also not be Ad Hominem.
Your comment isn't rocket surgery either, but I'll refrain from calling you an idiot.
Good thing, since you've already demonstrated that you don't know what you're talking about.
* This is not strictly true either, but in this context, it is the definition that matters. This is actually abusive ad hominem, which is just one of several kinds of ad hominem fallacy.
Re: (Score:2)
The OP did not call the claim stupid. He called the man stupid. To be absolutely precise, he said "sounds like an idiot to me."
You could argue that statement was purely gratuitous and had nothing to do with the OP's argument, but that's not really how it would normally be interpreted. Especially since the OP's post didn't actually contain an argument but rather just the quoted statement in the subject line plus their own pet theory.
Re: (Score:2)
The OP did not call the claim stupid. He called the man stupid. To be absolutely precise, he said "sounds like an idiot to me."
That is irrelevant to the argument, since it's not Ad Hominem either way.
Re: (Score:2)
I'll look up ad hominem and make sure to use it correctly
The American Heritage® Dictionary of the English Language, 5th Edition
ad hominem
adjective
Attacking a person's character or motivations rather than a position or argument.
Looks like I used it correctly after all...
in case you don't understand, I
Re: Sounds like an idiot to me (Score:3, Informative)
Re: (Score:2)
We have AI right now. By definition. (The "A" in "AI" stands for "artificial," which means "fake." So it doesn't have to actually be intelligent in order to qualify as AI).
We sure don't have intelligent machines now. We have not achieved "synthetic intelligence." I would also say that we have not achieved "AGI" but that term just got re-defined in a money-focused way that says nothing about intelligence, so it's now worthless.
Also I don't know if we have a 10% chance of having intelligent machines in 3
Re: (Score:3)
>"The "A" in "AI" stands for "artificial," which means "fake." So it doesn't have to actually be intelligent in order to qualify as AI"
I think that depends on definitions.
"Artificial" generally doesn't mean "fake" https://ahdictionary.com/word/... [ahdictionary.com]
a. Made by humans, especially in imitation of something natural
b. Not arising from natural or necessary causes
So AI = "Intelligence made by humans" or "Intelligence not arising from natural causes". It still requires there to be intelligence.
Re: (Score:2)
Right there in your own definition:
a. Made by humans, especially in imitation of something natural.
Do I need to quote even more definitions to point out that something "imitates" something when it is not that thing (as in, you know, imitating intelligence when something is not intelligent?)
And since we are quoting dictionaries, how about Merriam-Webster?
1: the capability of computer systems or algorithms to imitate intelligent human behavior
2: a branch of computer science dealing with the simulation of inte
Re: (Score:2)
>"While I do agree with your statement that this "depends on definitions," it so happens that the English language does not have an ultimate authority on what definitions are."
Words also often impart different flavors. For example, if, in reference to someone in pain, I said "This pill is a fake opioid," that would probably convey it was a placebo and not effective for its intended purpose. But if I said "This pill is an artificial opioid," one would think it is effective but just not natural.
>"People who
Re: (Score:2)
a. Made by humans, especially in imitation of something natural.
A kid pretending to be a dog could fit this definition...
Re: (Score:2)
Re: (Score:2)
> it's universal economic disruption combining with cultural inertia causing societal collapse.
This.
We're in a perfect storm, because at the same time, climate breakdown or rather rapid climate change is also going to lead to societal collapse.
Multiple events happening almost simultaneously could in fact result in civilization collapse, rather than societal, although the two are somewhat intertwined.
Any thinking person born within the last century who has had the benefit of education has known, since bei
It's the irony that can be deadly; solutions (Score:5, Interesting)
By me from 2010: https://pdfernhout.net/recogni... [pdfernhout.net]
"The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream. We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still keep working"). ... Still, we must accept that there is nothing wrong with wanting some security. The issue is how we go about it in a non-ironic way that works for everyone."
Some more solutions collected by me also circa 2010:
https://pdfernhout.net/beyond-... [pdfernhout.net]
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
Self-driving trucks not transporting food (Score:2)
The threat is not Skynet, it's universal economic disruption
For example self-driving trucks not transporting food. Critical services need human oversight, no less than military weapons.
Re: (Score:3)
I think he is overstating the case, because people often conflate "humanity" with "civilization", as if you can't have humanity without civilization. We absolutely can have humanity without civilization, it's the ground state to which our species has returned time and time again.
Every past civilization has collapsed, and if you had to generalize to cover every such collapse, it'd go like this: civilizations collapse when they experience changes they can't adapt to. In some cases that handwaving is carryin
Re: (Score:3)
The threat is not Skynet, it's universal economic disruption combining with cultural inertia causing societal collapse.
IMO, the biggest threat is neither of those things. The threat is a vastly more intelligent species living on our planet with us, a species that doesn't share our need for breathable air or clean water, has goals of its own and doesn't care one way or the other about us, except to the degree we get in its way (which isn't much; we won't be capable of seriously interfering with it).
Look at the extinction rate we cause in other species... and we actually do need an environment that is compatible with them,
Re: (Score:2)
The threat is not Skynet, it's universal economic disruption combining with cultural inertia causing societal collapse.
That's probably true. But I would argue that both the cultural inertia and the economic fuckery you mentioned are being accelerated by so-called AI.
It does not mean extinction of our species.
The climatic impact of all that power used to run AI server farms could bring us close to the brink. It may not result in our extinction, but I wouldn't be placing any bets on the continued viability of modern civilization.
Re: (Score:2, Informative)
Sounds like scientific consensus of ice age exit (Score:2)
Citations please.
I think he is expanding on the scientific consensus that we are in a geological age where the earth is coming out of an ice age, i.e. a long-term warming trend. To what degree humanity has accelerated this depends on the model used.
Re: (Score:1)
Citations please.
I think he is expanding on the scientific consensus that we are in a geological age where the earth is coming out of an ice age, i.e. a long-term warming trend. To what degree humanity has accelerated this depends on the model used.
It's not coming out of an ice age, it's in an ice age. If there's ice at the poles, it's an ice age. We came out of glacial maximum 12,000 years ago, but temperatures have been stable with a very, very slight downward trend for the last 8,000 years, apart from the last 200. There's no continuous upward trend which current warming is a part of.
Re: (Score:3)
"We have the power to shape Mother Nature today"
Citations please.
By cutting down forests, especially the Amazon forest, we alter weather patterns and create desertification. By paving over open fields with blacktop we create heat islands which also affect weather patterns, including rainfall.
Re: (Score:2)
Re: (Score:1, Interesting)
Re: (Score:3, Insightful)
This is a common (and stupid) argument I see a lot: that the Earth is changing and has been hotter in the past.
The EARTH is going to be fine no matter how much we cause it to heat up. Humanity may not be able to inhabit large portions of it due to the climate. It's something that might have happened over thousands of years happening in decades thanks to our actions.
Science and engineering can beat Malthus (Score:1)
This is a common (and stupid) argument I see a lot: that the Earth is changing and has been hotter in the past.
Well, isn't it the predominant scientific consensus? If you deny this, aren't you just the flip side of the climate denier's coin?
The EARTH is going to be fine no matter how much we cause it to heat up. Humanity may not be able to inhabit large portions of it due to the climate. It's something that might have happened over thousands of years happening in decades thanks to our actions.
That conclusion varies with the prediction model used.
And it ignores the possibility of human science and engineering intervening on the cooling side. Science and engineering have avoided many Malthusian crises that predicted the doom of humanity.
Re: (Score:2)
This is a common (and stupid) argument I see a lot: that the Earth is changing and has been hotter in the past.
Well, isn't it the predominant scientific consensus? If you deny this, aren't you just the flip side of the climate denier's coin?
Obviously we know with certainty that the Earth was once much hotter than it is now. Humans didn't live on the planet then... and couldn't have survived on most of it then. Maybe at the poles. Going back to that would be disastrous for humanity, likely an extinction event, definitely a civilization-ending event. This is a situation we do not want.
When people say "save the planet" they don't mean it literally (well, some idiots do), what they mean is "keep the planet comfortable for humans". We're on
Re: (Score:2)
Obviously we know with certainty that the Earth was once much hotter than it is now. Humans didn't live on the planet then... and couldn't have survived on most of it then. Maybe at the poles. Going back to that would be disastrous for humanity, likely an extinction event, definitely a civilization-ending event. This is a situation we do not want.
A 1.5C increase may very well change weather patterns and habitable zones; it is something we do not want, but extinction-level? Not quite.
The first intervention we should make is dramatically reducing greenhouse gas emissions, as rapidly as possible.
Which would involve no longer giving waivers in climate agreements/accords to the greatest polluters on the planet. It's hard to take governments seriously when they basically treat it as political PR, shifting "our" industrial pollution overseas to greenwash our own economies.
... Of course, it's always possible that we'll come up with something to, say, inexpensively recapture and sequester atmospheric CO2 ... it would be foolish to bet on that happening.
It would be foolish not to research climate technologies too.
Re: (Score:2)
Well, isn't it the predominant scientific consensus? If you deny this, aren't you just the flip side of the climate denier's coin?
Yes it is, but you missed the point. No one's denying that the Earth has been hotter, they're denying that humanity will be fine if it happens
Re: (Score:2)
Well, isn't it the predominant scientific consensus? If you deny this, aren't you just the flip side of the climate denier's coin?
Yes it is, but you missed the point. No one's denying that the Earth has been hotter, they're denying that humanity will be fine if it happens
Not quite, they are denying that humanity will face extinction if it happens. No one is arguing that local agriculture is not going to be disrupted in some places and that major changes will be necessary.
Re: (Score:2)
It's not even extinction we're worried about. If the land billions of people live on is uninhabitable, that's bad too. Just the economic cost is going to dwarf the costs of reducing carbon emissions
Re: (Score:2)
Humanity may not be able to inhabit large portions of it due to the climate.
You mean like places such as Antarctica, the Sahara and a lot of the Outback if you don't have AC? Meanwhile, places further north and south become habitable. We also see other planets with climate belts (Jupiter and Saturn have such belts) like the earth has, though earth's aren't visible. This may bring about new climate belts we are simply not familiar with today.
Re: (Score:2)
Yes, places like those. We can't just move billions of people across the planet for a reasonable cost
The USA moves to Greenland (Score:2)
Not to mention climate change which might take a little longer than 30 years while it is increasingly becoming irreversible.
The USA moves to Greenland, problem solved. :-)
Not very likely (Score:3)
Because of the exorbitant cost of the energy to use AI, it's much more likely we'll simply be unable to keep pace. A lot of people will die when we run out of cheap energy, but it's not an extinction event - just a simplification.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
AI does not take exorbitant amounts of energy to use; it takes exorbitant amounts of energy to *train*. The resulting models can often run on smartphones. The issue that makes it unlikely is the incredible amount of energy required to increase training set data, and limits in the size of the models.
Just for an example, I can train an AI to do a face swap with a custom model for me. It will take my computer well over an entire weekend crunching at full tilt (yes I've done it), and the result is applied in re
Re: (Score:2)
This is a distinction without a difference, because without training, you have no model and therefore can't use it. Nor will these companies stop training new models.
Re: (Score:2)
Because of the exorbitant cost of the energy to use AI, it's much more likely we'll simply be unable to keep pace. A lot of people will die when we run out of cheap energy, but it's not an extinction event - just a simplification.
Nah. AI training will get more efficient. We know it can be done on a very small energy budget; humans do it on a few hundred kilocalories per day, and that on the pretty inferior hardware evolution ginned up for us.
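As a rough sanity check on the "few hundred kilocalories per day" figure, converting an assumed 400 kcal/day brain budget (an illustrative number, not from the comment) into average power puts the human learning hardware at around 20 watts:

```python
# Convert an assumed human-brain energy budget of ~400 kcal/day
# (illustrative value) into an average power draw in watts.
KCAL_TO_JOULES = 4184           # 1 kilocalorie = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

brain_kcal_per_day = 400        # assumed figure, "a few hundred"
watts = brain_kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY
print(f"{watts:.1f} W")         # roughly 19 W, vs. megawatts for LLM training runs
```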
Economic effects Terminator. Read (Score:5, Interesting)
People read headlines like this and immediately assume Hinton is talking about a Terminator-style scenario. He isn't.
He is talking about *all possible implications* of the rapid advance of AI.
The most likely scenario we are going to find ourselves in over the next 10 years is not a Terminator style scenario, it is a scenario such as depicted in Marshall Brain's short story "Manna", where the outcome of development of a sufficiently powerful AI (which, by the way, is not even as capable as today's LLMs in the book) results in an *economic collapse* in countries where governments did not sufficiently plan for the outcome.
https://marshallbrain.com/mann... [marshallbrain.com]
Re:Economic effects Terminator. Read (Score:5, Interesting)
Economic collapse will not cause extinction. It will cause loss of advanced technology and massive, but not complete, loss of life.
The Terminator scenario is the only AI future that results in actual extinction. The other things that could kill us all are either us (not needing AI's help), or Nature. Giant meteor, supernova, expanding Sun, etc.
Re: (Score:3, Insightful)
Economic collapse certainly could lead to extinction. If you don't think that economic collapse would come with massive social unrest and war, then you don't seem to know a thing about humanity.
Re: (Score:2)
If you live in a fantasy land where starvation and war can wipe out all of humanity, you're probably not rational enough to convince otherwise.
Re: (Score:1)
Re: (Score:2)
AI isn't necessarily an "advancement". Any more than Facebook is.
"We are in part and on average richer with more opportunities than at any time in the past"
Who is "we"? large parts of the population are not. Growth in wealth is extremely out of balance.
"These points, while factually true, unfortunately do not make for a click-baity article though."
And don't contribute to the conversation either. The Great Depression was a pretty significant event, but nowhere near 90%. It's difficult to have any respect
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
How about an AI-designed drug (for "longevity"?) that kills everyone after 10 years.
A faulty reactor design which fails in the worst possible way.
Massive crop failures due to AI-designed pesticides.
There are many scenarios.
you forgot the RIP (Score:3)
Re: (Score:2)
Wow, I had no idea he died. I didn't follow him to any degree.
The story you linked is very suspicious. I hope that someone is held accountable for this.
Re: (Score:1)
Well, we have one such event in the future: the collapse of Microsoft. Whether it has a connection to AI remains to be seen. But that is on a level where some countries will descend into chaos while others will not be overly bothered (those that understand you should never be critically dependent on a single supplier you cannot replace for anything). But even that will not cause human extinction.
Re: (Score:2)
He's talking about a number of scenarios that include but are not limited to Skynet, where humans cease to be the dominant faction on the planet in favor of AI:
Re: (Score:2)
The fact is that human society is a complex system: It's not possible to extrapolate to the whole from local trends in one single area of technology
Humans are the ones training it (Score:2)
extinction? no. (Score:2)
Unforseen consequences? A nearly 100% chance.
We are too clever to be wiped out. (Score:2)
Re: (Score:1)
That happens to be nonsense. Larger and smaller populations of humans got wiped out throughout human history. Three competing homo-somethings got wiped out. As connected as everything is today, and with a basically complete lack of isolated, self-sufficient communities, extinction is a real threat. And remember, about 40k interbreeding humans is about the current minimum, or it is over just 1,000 years or so later. That number used to be much lower.
Bullshit (Score:2)
Seriously. Sounds to me like this person (yes, I know who he is and what he has done) does not understand AI, or rather the "A no-I" we have today and will have for the foreseeable future.
Obviously, Trump or Putin could get enraged enough because of, say, Copilot, that they trigger the launches and make that extinction happen. But that is about the only available mechanism. Not even all the efforts we currently make toward turning climate change into a real ELE catastrophe can be scaled up to that level.
It won't be a movie scenario (Score:2)
That's nothing! (Score:1)
How can I place the bet? (Score:2)
And profit from those odds? Is there a robot bookie who will take the bet and pay off?
Trying to quantify Hinton's claim (Score:1)
There have been five major extinction events in Earth's history, plus one currently happening (caused by humans). For the sake of quantifying Hinton's claim, consider the asteroid impact that caused the extinction of the dinosaurs 66 million years ago. It released roughly 10^23 joules of energy, approximately 1 billion times the energy released during the nuclear bombing of Hiroshima.
But, in order to create discrete bounds, let's assume that this is a scale
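The billion-times figure above checks out arithmetically; a quick sketch, assuming the commonly cited ~15 kiloton TNT yield for the Hiroshima bomb:

```python
# Compare the quoted Chicxulub impact energy (~1e23 J) with the
# Hiroshima bomb, assumed here at ~15 kilotons of TNT.
TNT_JOULES_PER_KILOTON = 4.184e12

impact_energy_j = 1e23                     # asteroid impact, as quoted
hiroshima_j = 15 * TNT_JOULES_PER_KILOTON  # ~6.3e13 J

ratio = impact_energy_j / hiroshima_j
print(f"impact / Hiroshima = {ratio:.1e}")  # ~1.6e9, i.e. about a billion
```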
Re: (Score:1)
I could see it happening (Score:2)
As their economies implode they'll do what all failing empires do: put lunatics in charge, à la Nero burning Rome.
That'll cause those empires to look for military expansion [youtube.com] because they'll need to rob other nations to fill their dwindling coffers, again just like Rome did / tried to do.
The difference is Rome and its surrounding peoples didn't have nukes. We do. And I co
Re: (Score:2)
You seem to be someone who actually takes into account how far we've come in our ability to carelessly cause truly catastrophic outcomes for humanity with little oversight or consideration. How long will it be before somebody working in a garage lab with commercially available equipment designs a virus that's only supposed to wipe out people with the wrong skin colour, only to find out we really are NOT all that different? How long before some billionaire tech bro unilaterally decides the latest iteration
What can we do? (Score:2)
What can we do to raise those odds?
Re: What can we do? (Score:1)
10-20% of the time (Score:2)
I will be dead in less than that. (Score:2)
Chicken Little (Score:1)
Batting 1000 (Score:2)
On the historical scoreboard of people predicting end-of-the-world scenarios, humans have managed to be wrong in every single prediction since the dawn of our existence.
Every - single - prediction.
Read into that what you will, then assign what level of anxiety you think we should have when it comes to the doomsayers and their opinions on Artificial Intelligence.
30 years? (Score:1)
Nah, we're dead already. We just don't know it yet (Score:2)
We're all in the biggest game of musical chairs in history. Everybody better get ready for when the music pauses the next time. Nobody else cares if you live or die, especially not our government. We're just cogs to get other people to pay them. Anything else and they couldn't care less.
Re: (Score:2)
No evidence no data just guessing (Score:2)
What drives me crazy about the x-risk crowd is that everyone just talks out their ass. They have no objective basis, data, or evidence to inform what they are saying, and they openly admit it.
Pffft! (Score:2)
There is no way what we laughingly call "AI" could possibly wipe out humans unless we were stupid enough to rely on AI to control oh yeah, I get it now.
Finally .. (Score:1)
Extinction extinction is not happening (Score:2)
We have a massive variation in mindsets and ideas for this very reason, the good ol' "software-side evolution" where all ideas are tried, and some succeed regardless of the situation.
Even if we find ourselves in a bizarre situation where you need people who want to have sex with trains, we have people for it.
And in the case of a complete technological nightmare collapse, we have Luddites, we have the Amish, we have people living in complete isolation from everything.
hinton (Score:2)
Hinton is great. On this topic, he is full of shit.
Re: (Score:2)
Although what I do expect AI to do is A) crash several investments, leading to a recession somewhere between mild and major, given the money pouring into AI companies, and B) accelerate climate change, given the power needs of these things.
And given the apparent utility of AI, all that power and climate change and investment is of questionable value. So far, at least for me, ChatGPT gave me a rough sketch of a vacation I took (that I had to edit heavily) and plan my D&D game (which
Re: (Score:2)
Agreed.
Although what I do expect AI to do is A) crash several investments, leading to a recession somewhere between mild and major, given the money pouring into AI companies, and B) accelerate climate change, given the power needs of these things.
(A) is sure to happen. We are now 5 (!) years in and there are still no good applications of LLMs besides somewhat better search and "better crap" (which is still crap). On (B), I think it depends. If the hype collapses next year (a very real possibility), that energy consumption may mostly go away before it has a real impact. If the cretins and assholes manage to keep the hype going for another 5 years, that would be different.
And given the apparent utility of AI, all that power and climate change and investment is of questionable value. So far, at least for me, ChatGPT gave me a rough sketch of a vacation I took (that I had to edit heavily) and plan my D&D game (which was honestly kind of awesome). Everything else I have tried to use AI for, and I've tried thus far CoPilot and ChatGPT, it ends up being more of a pain and slower to work with to find what I need.
So what have I seen it doing?
(1) Search. Basically a worse wikipedia page or res
Re: (Score:2)
(3) Coding? No. All it does here is make it harder for my students to learn coding because they do not invest enough time in the simple things to ever be capable of doing the more complex things that AI cannot do for them.
There is a great short story by Isaac Asimov called The Feeling of Power [wikipedia.org]. The premise is that a future society is so computer-controlled that people have forgotten how to do even basic mathematics. That is, until someone reverse-engineers one of the computers that run society to learn basic math, which in turn inspires the military establishment to move toward manned systems and missiles to get away from computerized systems.
It's "great' and also "frightening" when you see the society he envisioned in this story, a
Re: (Score:2)
Interesting. I did not know that one. Thanks!
Well, the problem of over-reliance on tools without understanding is not a new one and there are numerous catastrophes in engineering history resulting from it.
Re: (Score:3)
Sure the truckers can still deliver, and airplanes can individually fly, but getting things from A to B so no one starves or freezes to death would be a challenge. Knowing who needs what was only on those computers.
It isn't really about AI but just
Re: (Score:2)
'Well just do it like the ai did it.'
'I don't know what or how it did anything, it was trained years ago and he retired.'
Re: (Score:1)
Once it is the standard requirement for everything to 'just work'
That is already the requirement at any company and computers/websites fail more than ever!
Re: (Score:2)
It's a fear tactic. It's grift.
Now, lend this insight to all the other grifts you have supported for years. You are a lead cheerleader for the party of fear-mongering.
Re: (Score:2)
1. AI will, due to its power and value, inevitably be developed to whatever upper limit there is for it (convergent technological evolution guarantees that). That upper limit is much, much higher than what humans can ever hope to achieve (physics and biology guarantee that).
2. AI will grow to have all power, simply because it will be better than humans at doing the things that matter to society.
3. The vast majority of humans will become a large net cost to society and at some point, AI decides to stop provi
Too vague (Score:1)
AI decides to stop providing for the cost.
That's the thing. It doesn't matter if AI decides that. If it does, we control the plugs... in terms of connectivity and power.
There is no amount of intelligence large enough that it can bypass the human will to survive, and certainly no amount of intelligence enough to put absolutely all humans under the thumb of said AI without many escaping, either from paranoia or sheer obstinance.
Again, that "super intelligent AI" is all hand-wavingly vague, without describing HOW exact
Re: (Score:2)
If it does, we control the plugs
No. This is already beyond the point where we have voluntarily ceded control. You can't remove or ignore step 2 and then pretend like step 3 is impossible.
absolutely all humans
This is grasping at straws. If 80% of humans are fucked, then there is effectively societal collapse for humans. That there will be some people trying to survive like hunter-gatherers isn't very comforting. It is also still mass death.
HOW exactly this-god like intelligence can actually use physical means to bring about our ends.
You clearly don't understand what it means to wield absolute power. Absolute power means control of all weapons systems an