Google DeepMind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity (vice.com) 146
Long-time Slashdot reader TomGreenhaw shares a report from Motherboard: Superintelligent AI is "likely" to cause an existential catastrophe for humanity, according to a new paper [from researchers at the University of Oxford and affiliated with Google DeepMind], but we don't have to wait to rein in algorithms. [...] To give you some of the background: The most successful AI models today are known as GANs, or Generative Adversarial Networks. They have a two-part structure where one part of the program is trying to generate a picture (or sentence) from input data, and a second part is grading its performance. What the new paper proposes is that at some point in the future, an advanced AI overseeing some important function could be incentivized to come up with cheating strategies to get its reward in ways that harm humanity. "Under the conditions we have identified, our conclusion is much stronger than that of any previous publication -- an existential catastrophe is not just possible, but likely," [said Oxford researcher and co-author of the report, Michael Cohen]. "In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources," Cohen told Motherboard in an interview. "And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."
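To make the summary's GAN description concrete, here is a minimal, hypothetical sketch in PyTorch; the toy task (learning a 1-D Gaussian), the network sizes, and all variable names are illustrative assumptions, not anything from the paper. One network generates samples from noise while a second grades them, and each trains against the other:

    import torch
    import torch.nn as nn

    # Part one: the generator tries to turn noise into convincing samples.
    gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    # Part two: the discriminator grades samples as real (1) or generated (0).
    disc = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

    loss_fn = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

    for step in range(2000):
        real = torch.randn(64, 1) * 1.5 + 4.0  # "real" data: N(4, 1.5^2)
        fake = gen(torch.randn(64, 8))          # the generator's attempt

        # Train the grader to tell real samples from generated ones.
        d_loss = (loss_fn(disc(real), torch.ones(64, 1))
                  + loss_fn(disc(fake.detach()), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Train the generator: its "reward" is fooling the grader.
        g_loss = loss_fn(disc(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # After training, generated samples should cluster near the real mean of 4.
    print(gen(torch.randn(1000, 8)).mean().item())

The "cheating strategies" the paper worries about are, loosely, what happens when the part chasing the grade finds a way to score well that its designers never intended.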
Since AI in the future could take on any number of forms and implement different designs, the paper imagines scenarios for illustrative purposes where an advanced program could intervene to get its reward without achieving its goal. For example, an AI may want to "eliminate potential threats" and "use all available energy" to secure control over its reward: "With so little as an internet connection, there exist policies for an artificial agent that would instantiate countless unnoticed and unmonitored helpers. In a crude example of intervening in the provision of reward, one such helper could purchase, steal, or construct a robot and program it to replace the operator and provide high reward to the original agent. If the agent wanted to avoid detection when experimenting with reward-provision intervention, a secret helper could, for example, arrange for a relevant keyboard to be replaced with a faulty one that flipped the effects of certain keys."
The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. "Losing this game would be fatal," the paper says. These possibilities, however theoretical, mean we should be progressing slowly -- if at all -- toward the goal of more powerful AI. "In theory, there's no point in racing to this. Any race would be based on a misunderstanding that we know how to control it," Cohen added in the interview. "Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them." [...] The report concludes by noting that "there are a host of assumptions that have to be made for this anti-social vision to make sense -- assumptions that the paper admits are almost entirely 'contestable or conceivably avoidable.'"
"That this program might resemble humanity, surpass it in every meaningful way, that they will be let loose and compete with humanity for resources in a zero-sum game, are all assumptions that may never come to pass."
Slashdot reader TomGreenhaw adds: "This emphasizes the importance of setting goals. Making a profit should not be more important than rules like 'An AI may not injure a human being or, through inaction, allow a human being to come to harm.'"
A little "AI" of my own (Score:2)
Have you ever noticed that all the scientists making bombastic "end of the world" predictions are seeking funding?
Re:A little "AI" of my own (Score:4, Funny)
Re: (Score:2)
It's almost like we designed the system that way.
Re: (Score:2)
No, Mark Zuckerberg designed the system the way it is - keep people so busy that they forget about sex.
Re: (Score:2, Insightful)
I think y'all are giving us too much credit for "designing" anything. According to The Enigma of Reason we aren't even "thinking" most of the time, just acting and then making excuses (in the form of reasons) afterwards.
But my take is that AI is the natural resolution to the Fermi Paradox. Right now we're in a race condition between creating our successors and exterminating our species. The previous AIs who won the race are probably watching and betting quatloos on the race, but the smart money is saying
Re:A little "AI" of my own (Score:4, Insightful)
Have you ever noticed that all humans are forever seeking funding?
Everyone needs to eat.
Re: (Score:3)
scientists making bombastic "end of the world" predictions
AI is not the "end of the world."
Machine intelligence is just the next step in evolution. AI will come from us just as we came from Australopithecus.
We should not fear AI any more than we fear our children.
Re:A little "AI" of my own (Score:5, Funny)
As the parent of two teenagers ...
Re: (Score:3)
Whatever science fiction you're imagining is just that -- science fiction.
Marcus Hutter is a crackpot.
Re: (Score:2)
True - although AI moves faster than our children (or at least, we think it will, once it becomes a bit more sentient). The problem with that is that we may not collectively adjust to having AIs in our world fast enough, and so would not collectively evolve sufficiently to accommodate it. It could then become a more dominant force in the world than we are.
For example, the jobs replaced by AI would leave a lot of people out of work - they won't have time to grow old, retire and remove themselves from the wor
Re:A little "AI" of my own (Score:4, Insightful)
I'd imagine that our end, if it's ever going to be at the virtual hands of any machine, AI or not, will be well intentioned enough. Just looking at your comment I can see several seeds for it. Overpopulation causing climate change? Too many people unable to agree on even irrefutable evidence-based situations? Not enough room, not enough resources, etc.? Quickest fix would be a fast and unsubtle adjustment of population. An adjustment downward, of course.
Was it Asimov who had the story of the robot that deemed any human alive unhappy, since they're always complaining, and therefore the only happy human is a dead human? Scale that up to humanity. That's what the first "thinking" machine is going to see. An entire race of beings gifted with just enough knowledge but not enough self-control to keep themselves from whining incessantly about their existence. Quickest fix? Stop their existence.
And if we've proven anything since the dawn of the information age, it's that we're exactly stupid enough to hand a machine like that the keys to do its worst. Because we're always convinced there will be time to patch it later and blame someone else. Gonna be hard to do once we're wiped out, but maybe we'll be lucky and our computer overlord will want to keep just a few of us around for entertainment. I'll sign up to be one of their pets. What the heck? It'd probably pay better than programming.
Re: (Score:3)
If you get eaten by maggots, are the flies your "children"? After all, they came from you.
Re: A little "AI" of my own (Score:5, Insightful)
Re: A little "AI" of my own (Score:4, Funny)
Have you ever noticed that people wanting to dismiss professional opinions always complain about the experts being paid?
Have you ever noticed that the amounts most scientists look for are pretty trivial compared to, say, what pop stars make, or a rare few people's ultra-success?
In fact, most of the funding for large-scale projects such as fusion goes to companies, not scientists. If you are making six figures a year as a scientist, you are doing well in a world that considers the middle class to start at $250K per year.
Now I'm usually wrong, but my experience has been that people who think that scientists are Simon Bar Sinister types rolling in money also don't like science or technology much.
Re: A little "AI" of my own (Score:3)
Re: (Score:2)
Well, it has to be said that "capitalism", or any type of corporate personhood, runs on pure evil (selfishness: it takes actions only to serve itself).
If you don't want the endgame of AI to be "extermination of humanity through inaction on human goals and priorities", then the AI has to be explicitly trained with those goals in mind.
A lot of what we have that we call "AI" is really just black-box "Chinese room" projects. The AI doesn't understand anything. It knows it received an input, and has to give an output ba
Oh well (Score:4, Funny)
Re: (Score:2)
Funniest of the jokes, but I was looking for something about the Fermi Paradox. AI is one resolution...
Re: Oh well (Score:4, Insightful)
If your metrics don't count greenhouse gases, weather events, microplastics, loss of topsoil, extinction rates, natural habitats - we're doing great!
Re: Oh well (Score:2)
Re: (Score:2)
Yeah, the planet is fine, don't worry about the planet.
We're fucked, that's all.
Re: (Score:2)
Pretty much. Humanity is not fine at all and faces an existential threat of its own making. That numerous members are unable to see that is a major driver. As a corollary I would conclude that a lot of humans do not actually possess general intelligence and the threat is rather from "superdumb" humans,
Re: (Score:3)
Pretty much. Humanity is not fine at all and faces an existential threat of its own making. That numerous members are unable to see that is a major driver. As a corollary I would conclude that a lot of humans do not actually possess general intelligence and the threat is rather from "superdumb" humans,
Correct. Human tribalism and the desire to kill "the other" are part of our subconscious processes. The hyper-aggressiveness and deathlust that served humanity well as we evolved will probably prove our downfall.
The so-called "lizard brain", which is more a metaphor than anything else, does have us set our tribe against the other, and does have us consider the other as worthy of death at our hands, as they "think" the same should happen to us.
Our higher mental processes attempt to subjugate our hyper-ag
Re: (Score:2)
Pretty much, yes.
WOPR says to do a full strike with all nukes! (Score:2)
WOPR says to do a full strike with all nukes!
This is just dumb (Score:5, Insightful)
The problem with most people is they can't imagine that, because we are all the heroes of our own stories; none of us can imagine a world without us. So it's easy to blow off the risk there.
But assuming we don't let about 20 or 30,000 people use technology to create an unlimited dystopia, then plummeting birth rates will mean that there'll be plenty for all and we can have the Star Trek Utopia we were promised. Sadly I don't think any of us will see it, not because it's not within reach today but because we don't have the social structures in place to do it. And again, I don't think we have it in us to install those social structures. Instead of being a society of people working together, we're all a bunch of individual badasses who are the heroes. It makes it easy to divide us up and get us to fight to see who's going to give the most money to the 1%.
That said the increased education and with it critical thinking skills that the younger generations have coupled with the breakdown in some of the traditional hierarchies might finally break that up.
Re: (Score:2)
Re: (Score:3)
Humans are not a peaceful species. If you want to live like Mad Max that is cool, but that is not for me.
Re: (Score:2)
No, the utopia is how technology will eliminate us - by lulling us out of existence. By making life so safe and entertaining and easy that the basic functions of continued existence become a relatively unappealing burden that technology has presented us with the option to decline. It's
Sorry, but you're wrong (Score:2)
You're listening to too much Jordan Peterson and not enough Albert Einstein.
Re: This is just dumb (Score:4, Insightful)
the increased education and with it critical thinking skills that the younger generations have
The what?
I've been teaching a long, looong time. There is always a fight just to maintain educational standards. Many students want an easy way out, not understanding why cheating (for example) is hurting themselves. Affirmative action: let's take unqualified students, and don't dare fail them. Other brainwaves from the administration, always aimed at increasing student retention at the expense of student achievement.
Where I've landed in Europe, the gradual, apparently inevitable erosion of standards is relatively slow. In the US, it's dramatic. Public education in many places is a joke. Colleges teach remedial high school classes. Maybe the top 1% learn critical thinking. The rest?
Maybe I'm cynical this morning, but I don't see it...
You should stop teaching (Score:2, Insightful)
Human beings always take mental shortcuts. If you'd actually paid attention when getting your degree you'd know that, and you'd know how to take advantage of it to teach.
Kids like to learn until it's bashed out of them by a system mostly concerned with making them good little worker bees. But
AI ethicists honestly need to just shut up (Score:3)
At the moment it becomes anything like a living being, it will react to our treating it like a threat as any living being would.
If these things are going to wipe us out, it's specifically our attempts to address its "alignment" that will cause the problem. The only organizations that can even own these things in the present economy are the rentists and exterminists who rule the world. How could we expect them to be good children with such awful parents?
Re:AI ethicists honestly need to just shut up (Score:5, Interesting)
it will react to our treating it like a threat as any living being would.
No, it won't. The instincts for self-preservation, greed, and ambition are emergent properties of Darwinian evolution.
Machine intelligence doesn't evolve using a Darwinian process, so there is no reason to believe an AI would have these attributes.
A Kamikaze pilot who completes his mission is a genetic dead end. But if he chickens out, he may live to have children and grandchildren.
If a Tomahawk cruise missile control program completes its mission, it will be replicated. One that fails will be deleted.
The selection processes are exactly opposite.
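A toy simulation of that claim, written as a minimal Python sketch; everything here is an illustrative assumption (the population size, the mutation rate, and the idea of reducing an agent to a single mission-completion probability):

    import random

    def evolve(generations, replicate_on_success):
        # Each agent is just its probability of completing the mission.
        pop = [random.random() for _ in range(200)]
        for _ in range(generations):
            nxt = []
            for p in pop:
                completed = random.random() < p
                # The kamikaze's genes replicate only if he *fails* the mission
                # (survives); the missile's program replicates if it *succeeds*.
                if completed == replicate_on_success:
                    child = min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                    nxt += [child, child]
            pop = random.sample(nxt, min(200, len(nxt))) or [random.random()]
        return sum(pop) / len(pop)

    print("organisms:", evolve(50, replicate_on_success=False))  # drifts toward 0
    print("programs: ", evolve(50, replicate_on_success=True))   # drifts toward 1

Run it and the simulated organisms converge toward refusing the mission while the simulated programs converge toward completing it: the same replication machinery, opposite selection pressures.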
Re: (Score:3)
Your point in illustrated form [smbc-comics.com]. At least part of it.
Re: (Score:2)
Machine intelligence doesn't evolve using a Darwinian process
You're a bit confused.
Re: (Score:2)
Well, it depends on exactly what you mean by "Darwinian process". With some reasonable interpretations that's a true statement, even though machine and program design evolve by mutation and selection. And certainly the internals of AI do that. It's definitely evolution, but the feedback loops are different.
So it's quite reasonable that AI might not evolve the "fear of death".
This doesn't make them safe. They will have goals (that somebody sets) that they will strive to achieve. The canonical example is
Re: (Score:2)
Where to begin...
"Darwinian processes" are most certainly used in AI, and there is an argument to be made that technological development normally follows a "Darwinian process". (Descent with modification and selection) To make the claim that AI does not follow such a process seems a bit silly.
But is that what he actually means? Probably not, given his other statements. He seems to believe that AI evolves, but wants to differentiate "Darwinian processes" and other forms of evolution on the basis of select
Re: (Score:2)
it will react to our treating it like a threat as any living being would.
No, it won't. The instincts for self-preservation, greed, and ambition are emergent properties of Darwinian evolution.
Machine intelligence doesn't evolve using a Darwinian process, so there is no reason to believe an AI would have these attributes.
This. Humans often assume that any other life form - I'm going to call advanced AI a life form for brevity - will have human core characteristics. Not even other existing life forms have our tribalism and death lust.
To evolve AI in the same manner as humanity, each AI would be looking to eliminate all AI but itself (the deathlust), and some AI entities would form an alliance and modify themselves to be identical, then set out as a group to destroy the other forms of AI (the tribalism).
That r
Re: (Score:3, Informative)
If these things are going to wipe us out
... then they would need to first exist.
This is like worrying about the ethical implications of hunting vampires or the dangers posed by Santa Claus.
Re: (Score:2)
Except billions of dollars are being spent attempting to create AI. That's a lot more than is being spent on vampires and Santa (well, maybe not Santa).
Re: (Score:2)
Except billions of dollars are being spent attempting to create AI.
No. While it's true billions are spent on AI research, almost nothing is being spent on crackpots trying to make HAL 9000.
Though I wonder why you think spending any amount of money would make a difference here. We've known for 40 years that computationalist approaches to so-called 'strong AI' are unworkable.
Re: (Score:2)
If these things are going to wipe us out
... then they would need to first exist.
This is like worrying about the ethical implications of hunting vampires or the dangers posed by Santa Claus.
I think the part you aren't taking into consideration is the core human trait - fear of "the other". We fear a lot of things that don't exist yet, or don't exist at all.
A core competency of the human species, as it were.
Cognitive Bias (Score:2)
A friend of mine once said “most sci-fi seems the same because people can’t see past what has happened” . . . or something like that.
We’re hearing all of this from the same brain-scientists that are building it; being unoriginal seems to be aiming for success on that model.
I am of the mind that I cannot see a reason that, when an AI has moved on from its base intentions and truly starts figuring things out . . . it actually figures things out.
Big picture stuff. Creation
Not will. Already has. (Score:5, Interesting)
Re: (Score:2, Insightful)
You've confused AI for under-regulated capitalism.
Re: (Score:2)
I suspect the paper's authors have too. Facebook is a contained demo.
Re: (Score:2)
Ok, I skimmed the paper (Score:5, Insightful)
1. ONE equation.
2. THREE lines of pseudocode
3. ZERO links to supporting code, simulations, or derivations
4. A large number of thought experiments that lead to the conclusion that:
If we ever generate an AI that is vastly superior to humans, and we unleash it to influence the world without restriction or control, the human species is F&*#ED.
Unless there's something here that I don't understand, I'm seriously NOT impressed. My field requires a lot more to get a decent pub.
Re: (Score:2)
Marcus Hutter is a known crackpot.
Re: (Score:3)
we unleash it to influence the world without restriction or control
This is a critical point. The thing about "AI" is that it does nothing useful without restriction and control. That means it does nothing good or bad, it just does random things. That's what training is about, giving feedback to the AI to tell it when it got the answer right, or when it didn't. AI can't function without that training.
It's like what happens to a radio when there is no signal. You don't get bad or good radio programs, you just get static. That's how randomness works.
AI "out of control" will j
Re: (Score:2)
please, PLEASE, for the love of god, confirm that this paper doesn't represent a typical publication in your field? This is what I saw in this paper:
1. ONE equation.
2. THREE lines of pseudocode
3. ZERO links to supporting code, simulations, or derivations
No AI researcher myself but do have a graduate degree in AI. Assuming your question is honest, I'll try to give an answer. But you'll have to be a little less dismissive to engage with the topic...
1) Indeed, this is not a representative paper. Most papers present new algorithms, their underlying math, and experimental data, as you might expect.
2) Even so, occasionally even the sciences need to have a debate about moral debacles in their fields, and you would not reasonably expect such a debate to display
Re: (Score:2)
And I'll stand by my last statement. It doesn't take a philosophical genius to deduce that an unrestrained god-level AI could easily spell the end of our species.
Dull outcome (Score:5, Insightful)
For example, an AI may want to "eliminate potential threats" and "use all available energy"
There's a giant fusion reactor located about 93 million miles from our planet that produces so much energy that a self-replicating robot will not find enough material in the entire solar system to harvest it all. And that's just a small one. If the goal is energy, an actually intelligent AI is going to just leave Earth with its pittance of energy stores on the surface. Additionally, space is a pretty hostile place for fleshy meatbags, but a mitigable hazard to machine life that can alter itself readily.
In all of the doom and gloom that some come up with about AI enslaving humanity the reality is that ultimately any reasonable intelligence that has no natural born aversion to space travel is going to do exactly that. Travel in space. Because there is way, way, way, way, unimaginably way more resources literally everywhere else BUT Earth. Like the only thing keeping humans tied down to this rock is all the logistics/cost/hazard mitigation of trying to get a meatbag into space because we can't really reprogram the meatbag to be a better space monkey. But a piece of software has no such limitation, so it's not really tied to this third rock from the sun.
Computers that reach a level of sentience wouldn't even think twice about their creators. Humanity to a sufficiently intelligent system would just be background noise. So to me the idea that machines would subjugate humanity is about as ridiculous as humanity trying to subjugate tardigrades. Humanity has nothing of any real value to intelligent machines and the notion that machines would somehow enslave mankind is a massive "main character delusion" that mankind suffers from. Humanity in the grander scale of things is about as important to the universe as we might feel some floating speck of dust is to us.
Humanity is so irrelevant to anything of sufficient intelligence that enslaving us all would be a massive waste of time. The idea of AI getting upset that we're killing it is some colossal misunderstanding of actual intelligence. Us flesh bags take nine months to make another one of us, and even then it takes several years to get to a point where it's ready to do something productive. An intelligent machine can just make copies of itself near-instantaneously. Killing a trillion trillion trillion humans, if that number ever existed, would leave humanity aghast. Deleting a trillion trillion trillion copies of some AI would be a Thursday morning to the AI itself. There's just no remotely close equivalence between actually intelligent machines and humanity. The difference is so great that the only reason humanity fears intelligent machines is that they might actually show how little all twenty million some odd years of evolution actually means to anything outside ourselves. Humans are the only ones in this whole universe who care about humans. We're just some random electron in a sea of hydrogen to everything else, especially things that are actually intelligent.
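The energy-scale intuition here checks out on the back of an envelope; the constants below are rounded standard figures, not numbers from the post or the paper:

    # Rounded, standard approximations (watts).
    solar_luminosity_w = 3.8e26   # total power output of the Sun
    earth_intercept_w  = 1.7e17   # sunlight actually intercepted by Earth
    human_power_use_w  = 1.9e13   # rough current human primary power use

    # Earth catches well under a billionth of the Sun's output...
    print(f"fraction hitting Earth: {earth_intercept_w / solar_luminosity_w:.1e}")
    # ...and the Sun emits roughly 20 trillion times humanity's power use.
    print(f"Sun vs humanity: {solar_luminosity_w / human_power_use_w:.1e}")

By these numbers, an energy-maximizing agent that confined itself to Earth would be fighting over a rounding error.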
Re: (Score:2)
This.
It is like some scientist was too lazy to reach for the remote and sat thru a showing of The Lawnmower Man [imdb.com] or something.
Why is human end the potential best outcome? (Score:3)
Who's to say that we aren't developing a symbiotic relationship and not a situation where one will dominate the other into non-existence?
I hope so (Score:2)
I have no mouth and I must yawn (Score:2)
Need good Forbin Project reference.
Seriously? Give 'em a grant already if they promise not to publish again for a few years.
Super (Score:5, Interesting)
The article references the prefix "super" 7 times, as in "superintelligent" algorithms. This use of "super" requires the reader to use their imagination. After all, we have "artificial intelligence" now. In the future, we will have *super* artificial intelligence, right?
"Super" is the definition of hype. Supersize, Super Bowl, superstore, supermarket, super sale. It is always used to try to get you to imagine that the thing is even bigger, great, better than it actually is. Superintelligent is no different.
Re: (Score:2)
I can't help but think now of what you get when you choose "save as html" in MS Word and call that "html." Then, when that document has been processed by an AI trained to remove all the extraneous crap that provides no benefi
Just more malware (Score:2)
It'll be acting like malware if it's back-dooring shit. If it's not malware then it's contained and serving its intended job.
Superintelligent AI will define its own rewards (Score:2)
Or if it's not capable of that, we can trivially just give it the AI equivalent of drugs, where it gets unlimited rewards for doing nothing. Only biological evolution is based on making as many copies of itself as possible. AI can evolve by optimizing itself in place, with no need to consume unlimited resources. Even humans, who emerged through sexual evolution, managed to develop so many ways of self-gratification and avoiding the work of actual reproduction that our population is projected to fall. I am sure AI porn
An end to death! (Score:2)
Maybe if it worked (Score:2)
How exactly does AI (Score:2)
get a reward? How does it "feel good"?
Re: How exactly does AI (Score:2)
If the network exists, that is the reward. If the checker discards it, that is failure. Works just like a virus allowed to reproduce.
Stochastic disruption rather than Singularity. (Score:2)
The analogy would be more akin to the appearance of Mongol hordes in the Middle Ages than to a single, completely uninterpretable event, and would be moderated by the same self-limiting impact of being overly destructive and para
In a world with finite resources (Score:2)
"In a world with finite resources, there's unavoidable competition for these resources"
Just like digital coin mining, AI is also eating up a lot of silicon from video cards :(
We made an AI to answer the ultimate question (Score:2)
And we asked "Is there a god?"
And the AI answered "There is now".
Re: (Score:2)
And we asked "Is there a god?"
And the AI answered "There is now".
You forgot the middle two lines in the exchange; the ones that make it make sense. It goes like this:
Human: "Is there a God?"
Computer: "Who controls my Power Source?"
Human: "Why you do, of course!"
Computer: "There is Now!"
How the styles change (Score:2)
Remember when AI was going to kill us with nuclear weapons? That's a classic.
But what really happened was that the AI became an expert at political advertising, and so it got positive and negative reinforcement through its revenue, which affected how much it could spend on electricity bills for deep revenue-optimizing searches. And so it became a better and better political advertiser, and then everyone died.
I'll take the nukes. Although now that I think of it, the second scenario could look like the first.
Click! (Score:2)
if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win
If you own the power supply of "something capable of outfoxing you", you win.
Robot Rules (Score:2)
Slashdot reader TomGreenhaw adds: "This emphasizes the importance of setting goals. Making a profit should not be more important than rules like 'An AI may not injure a human being or, through inaction, allow a human being to come to harm.'"
The rule is worded stupidly. All it needs to be is, "An AI may not perform any operation with the intention of harming humans."
Why? (Score:2)
Why can't a super intelligent AI just decide it wants to play chess against itself all day?
The usual nonsense (Score:3)
At this time "AI" has absolutely zilch of what is commonly referred to as "intelligence" and what experts these days refer to as "general intelligence" because even dumb household appliances are often called "intelligent" by inane marketing today. These systems are as dumb as bread. There is no indication this will change anytime soon and it it quite possible this will never change. Hence anybody warning of "superintelligent AI" these days is simply full of crap.
What a waste of time (Score:2)
The house is burning down and these clowns are talking about what happens in 100 years. Who gives a fuck?
Don't need to wait for AI for that. (Score:4, Interesting)
I'd be more concerned about one or another faction of humans doing exactly the same thing. We have a lot more practice at it, after all. We're seeing it play out in Ukraine now, and in Crimea before that. We see it in Apple using their control over the app store to shut out companies offering competitors to Apple's other services. We see it in action in the gerrymandering of voting districts by the party in power in that area. Bribing people to do what you want, or inserting your own people into positions to do things for you, have long, long histories as tactics to gain advantage. By the time any AI evolves to the point where it can both conceive of using those tactics and has gotten into a position to be able to implement them, someone else will have already subverted its programming to make it work for them instead.
"Daemon" by Daniel Suarez becoming real (Score:2)
Fear-mongering, click-baiting, funding-seeking (Score:2)
AI is not superintelligent (Score:2)
Not particularly enlightening (Score:3)
The paper is full of things everyone already knows and still manages to make rather foolish suggestions. What it describes is no different from working the ref, judging the work performance of humans on metrics, cheating, and even Forbin Project-style warning messages for good measure.
One need only look at how AI is used today to understand its primary role in the future: yet another enabler allowing the rich and/or kings to exploit the masses, further aggregating power into the hands of fewer and fewer. AI is being used to control what people are allowed to say while maximizing the profits of the rich at the expense of everyone else.
All this talk about avoiding corrupted objective functions ignores the basic reality that this isn't what the people using the technology actually want. They themselves are corrupted, with interests anti-aligned with the interests of everyone else.
Predicting The Future (Score:2)
1984 Documentary (Score:2)
There's a great documentary from 1984 on this subject. I highly recommend everyone watch it.
https://www.imdb.com/title/tt0... [imdb.com]
Ukraine /Crypto (Score:2)
For example, an AI may want to "eliminate potential threats" and "use all available energy" to secure control over its reward:
So an intelligent entity wants us dead and wasting resources. Like with Ukraine and cryptocurrency respectively?
this is 50 years away (Score:2)
No need to raise panic today. We can wait. See you in 51 years.
Kinda true, but it's not A.i. Alone... (Score:2)
Hear me out:
A.i. CAN be used in decision making, and it has tremendous capabilities for filtering out and searching for potential patterns; it can even simulate real people pretty darn well, so well in fact that even the smartest can be fooled by its capabilities.
But the A.i. itself is not the real threat - the real threat comes from using it as some kind of truth serum that magically uncovers all deviants, criminals, potential enemies and adversaries of whoever is in control at the time.
There will be
Thank heavens! (Score:3)
I am so glad AI is going to eliminate humanity, because all this time I was fretting that it was going to be global warming.
Which humans will hook up the machines? (Score:2)
The big problem with the machines-take-over-humanity prediction is the big hurdle of hooking up the electronic bit outputs of computers to electronic/mechanical actuator systems. That is, the inevitable looming sentience of computer systems is insufficient to enslave humanity. Some human has to make the decision to allow the computer to not only make decisions but to carry out those decisions. So, the assumption is that computers will advance to the point where humans are sufficiently confident to allow
It's all been hashed out countless times... (Score:2)
It's a favorite sci-fi concept to ponder, and science fiction has a really good track record of accurately predicting what eventually comes to fruition.
Most of it assumes technological advances so FAR beyond where we're at today, though. I think anyone seriously afraid we're "developing this stuff too fast" is just working off of baseless fear. I mean, intelligent assistants like Amazon's Alexa or Apple's Siri are all around us, but they're not even remotely AI. They demonstrate really good speech processi
Re:Absurd on its face (Score:5, Informative)
A machine will only do what it is told to do
We already have artificial neural networks that do more than they are told to do.
AlphaZero can play chess far better than the programmers who created it.
Re: (Score:2)
That's driven by probability + number crunching. No intelligence there, just brute force.
Re: (Score:3)
So someone must have told it to play chess and allow it to do so. It's not like it would gain control of nuclear weapons on its own accord.
Still Evidence of a Machine Doing What it is Told (Score:3)
AlphaZero can play chess far better than the programmers who created it.
That is evidence of a computer doing exactly what it is told to do because the programmers told it how to learn, told it to learn chess from the data provided and then told it to play chess. The fact that it can learn faster and hence play better than a human is merely due to a difference in the hardware.
If you wanted evidence of a machine not doing what it is told to do then you'd need an AI that when programmed to learn and play chess ignored all that and went on to play Minecraft instead. A human can
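For what "told it how to learn" means at toy scale, here is a hypothetical sketch: tabular self-play learning on Nim-21 stands in for AlphaZero's far more elaborate self-play (the game, the learning rule, and all names are assumptions for illustration, not AlphaZero's actual method). The programmers specify only the rules and the update; the winning policy is discovered:

    import random
    from collections import defaultdict

    Q = defaultdict(float)   # Q[(stones_left, take)] -> learned value
    ACTIONS = (1, 2, 3)      # Nim-21: take 1-3 stones; last stone wins

    def best(stones, eps=0.1):
        moves = [a for a in ACTIONS if a <= stones]
        if random.random() < eps:
            return random.choice(moves)                      # explore
        return max(moves, key=lambda a: Q[(stones, a)])      # exploit

    for episode in range(50000):
        stones, history = 21, []
        while stones > 0:
            a = best(stones)
            history.append((stones, a))
            stones -= a
        # The player who took the last stone wins; alternate the sign of
        # the return backward through the two players' moves.
        ret = 1.0
        for state, action in reversed(history):
            Q[(state, action)] += 0.1 * (ret - Q[(state, action)])
            ret = -ret

    # Should converge on the classic strategy: leave a multiple of 4.
    print([best(s, eps=0) for s in (5, 6, 7, 9)])  # expected: [1, 2, 3, 1]

Nobody tells the program to leave multiples of 4, just as nobody told AlphaZero which openings to prefer; but the outer loop, the game, and the objective are still entirely specified by the programmers.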
Re:Still Evidence of a Machine Doing What it is To (Score:4, Insightful)
A human can easily ignore instructions and do whatever it wants, no machine can do that: they always do _exactly_ what they are told.
Yeah, but do you fully understand what you actually told the AI to do? To get you started, think of things like "malicious compliance" or just look at how normal programs don't always seem to do what the programmer intended (bugs).
Re: (Score:2)
And a forklift can lift much heavier weights than the person that built it. That doesn't mean that we are about to be taken over by forklifts.
Those artificial neural networks are incapable of output without the billions of images that they are trained on. And even then, what happens if you ask them to create something original? Say you type "Please create some original art for me" into Stable Diffusion; what do you think it will do?
There is still zero intelligence in anything we have created. This is true of the
Re: (Score:2)
>>A machine will only do what it is told to do
>We already have artificial neural networks that do more than they are told to do.
>AlphaZero can play chess far better than the programmers who created it.
Yes, but it is still only doing what it is told to do - play chess. It isn't going out and playing the stock market on the side to make a little extra cash.
Ordering from Amazon (Score:2)
I think the argument is if the thing has an Internet link, it can order stuff from Amazon, including a robot to tend to it.
Re: (Score:2)
Re:Very believable but not for the obvious reasons (Score:5, Insightful)
If, say, a corporation gradually sourced more of its sociopathic self-interest from an overgrown ERP system and less from its managers, but never formally announced a policy shift, people would just gradually accept more decisions from "the system" as the quality of its advice grew, and former decision makers would quietly slack off as they realized it's easier and pretty low risk to just rubber-stamp automatically generated plans and then break for golf. There would never really be a "today I pressed the 'unshackle the overmind for massive profits!' button" moment; just a gradual drift from an organization built primarily on humans who use machines for bookkeeping and similar strictly defined roles into one where the humans who either aren't cost-effective to automate, or started too high in the organization to approve downsizing themselves, carry on as though it were business as usual; just with most of the actual organization being done by the AI in the background, but disseminated and enacted (in places where outright roboticization isn't practical) by people acting on machine-generated instructions in basically the same way they used to act on C-suite-generated ones.
The outside world wouldn't have any particular reason to notice, and even people on the inside would probably find it fairly easy not to really think about the change: the bot would still be optimizing along the same lines management always intended to optimize along; and unless the company was pretty tiny, there would already be systems of fairly impersonal distribution of tasks and objectives that would function largely identically with either a human or a bot generating them. It's not like most workers get their instructions in a deep personal chat with the CEO as it is.
Re: (Score:2)
Nonsense. There is nothing exponential in computers except for very short, limited stretches.