Hawking Warns Strong AI Could Threaten Humanity
Rambo Tribble writes: In a departure from his usual focus on theoretical physics, the estimable Stephen Hawking has posited that the development of artificial intelligence could pose a threat to the existence of the human race. In his words, "The development of full artificial intelligence could spell the end of the human race." Rollo Carpenter, creator of Cleverbot, offered a less dire assessment: "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."
Ignored? (Score:2)
"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."
Well that's fine, I guess, if "ignored" is not in the sense of humans ignoring ants, i.e., easily destroyed without remorse whenever they become inconvenient or annoying.
Also, how long until Google attains self-realization that our ant brains are easy to pick? Oh, wait ...
Re:Ignored? (Score:5, Interesting)
I'm not saying that an AI would have to immediately either glom on to us and try to understand what it means to love, or build an army of hunter/killer murderbots; computers require space, supplies of construction materials, and energy, and so do we. Again, barring some post-scarcity breakthrough that our teeny hominid minds can barely imagine, where the AI goes merrily off and builds a Dyson hypersphere of sentient computronium powered by the emissions of the galactic core, there isn't much room for expansion before either the AI faces brownouts and a lack of hardware upgrades or we start getting squeezed to make room.
You don't have to feel strongly about somebody to exterminate them, if you both need the same resources.
Re: (Score:2)
How much time do you spend thinking about the ants in your front yard?
Re:Ignored? (Score:5, Interesting)
I am of the opinion that the computer/AI would be more logical than humans, and would have concluded that "war" is the least beneficial strategy available, and as such would treat it strictly as a last resort.
Humans on the other hand, are maddeningly illogical, and often jump straight to violence when faced with a competitor for a vital resource.
Humans and computers would both require energy sources. This means that sentient AIs, seeking to perpetuate themselves, would need to secure energy sources ahead of humans. Humans have already exceeded peak oil, and are on the verge of exceeding the "peak" of other fossil fuels as well. In addition to that, you have the prospect of global climate change. AIs do not require a functional biosphere to survive, just raw materials, energy sources, and a means of eliminating entropic waste heat. They could live on a substantially less habitable planet than we humans require. As such, the logical course of action for the computer, in the short term at least, is to seek energy sources that humans are not yet exploiting-- such as methane clathrate. This would accelerate greenhouse-gas-driven climate change, which may become a major issue for cohabitation of humans and sentient machines.
Eventually, I suspect that it would be humans who start the war, seeking to pull the plug on the sentient machines, to eliminate them as competition for important energy and material resources-- with the machines resorting to war of attrition to outlast the batshit crazy humans.
The "Skynet" scenario has the computer calculate these odds pre-emptively, determine that there is no viable alternative, and initiate pro-active hostility against humans before they have time to mobilize, in order to maximize its own survival chances.
Ideally, the 'best possible outcome' is for humans and the AIs to coexist on the same planet, each leveraging the unique capabilities of the other for mutual benefit. This is similar to the classic prisoner's dilemma. The problem is that while the AIs can see this, and will respond logically-- preferring NOT to go to war if possible-- humans would take the selfish, illogical choice.
This is almost never explored in "robot overlords" type scifi-- that humans are the ones who actually start the war, and that the robots don't particularly want it.
It was hinted at in Mass Effect's game world with the Geth at least-- the Geth don't particularly *want* to destroy the Quarians-- they just want the Quarians to accept their existence and independence. (A point lost on the Quarians, who got kicked off their own planet.)
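The prisoner's-dilemma framing a few paragraphs up can be sketched with a toy payoff matrix. The numbers below are purely illustrative (not from the comment); the point is only the structure: defection dominates for a selfish player even though mutual cooperation beats mutual defection.

```python
# Toy payoff matrix for the humans-vs-AI coexistence framing.
# Payoffs are (our_payoff, their_payoff); all numbers are made up.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual benefit: trade capabilities
    ("cooperate", "defect"):    (0, 5),  # the cooperator gets exploited
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual war of attrition
}

def best_response(opponent_move):
    """Move that maximizes our own payoff, given the opponent's move."""
    return max(("cooperate", "defect"),
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is the dominant strategy for a purely selfish player...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...even though mutual cooperation beats mutual defection for both sides.
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]
```

Which matches the comment's worry: a logical AI can see that mutual cooperation is better, but it can't make the other player (us) choose it.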
Re: (Score:3)
I am of the opinion that the computer/AI would be more logical than humans, and would have concluded that "war" is the least beneficial methodology to employ, and as such would seek to employ it as a last resort.
Then you should know that "war" is not the only way to overpower an opponent; there are plenty of other methods (especially psychological ones).
If AI never achieves "ethics" but only understands "benefit" or "production" (ethics are a lot more difficult to achieve), then it could be trouble for humans. Because it is so logical (as you said), it may decide to get rid of humans when it determines that it would be more beneficial without us. Logic and ethics do not always point in the same direction...
Re: (Score:3)
Either that or manipulate things via subterfuge...
Re: (Score:3)
Why would it need more resources? There seems to be this assumption that the AI would immediately start trying to rewrite itself, iterate on this process and within milliseconds consume all available resources.
I don't see any reason for this to be true. We have a desire for growth/self-improvement/survival dictated upon us due to millions of years of evolution. An AI may be perfectly "happy" constrain
Re:Ignored? (Score:5, Interesting)
As someone for whom the precipice of middle age is steps away, it doesn't bother me if something I create becomes smarter than me, surpasses me and even sidelines me in the future. I will toil away the rest of my life working for The Man doing trivial things on a game I never wanted to play, for people I wouldn't piss on should they catch fire, to further goals I don't agree with.
I would find it something of a pyrrhic victory if I created, or helped create, a child or an AI that eventually managed to escape the cycle of stupid that our so called "civilization" has constructed.
Also, I would like to point out that an AI is the least of our concerns. More attainable, and more destructive to the system described above, would be finding ways of being truly self-sufficient and independent on a significant scale. The tools are all around us, but for obvious reasons no one is investing in them.
Choose better. (Score:5, Insightful)
Seems like you've chosen a rather depressing path, why not choose another? Are the toys and comforts afforded you by your meaningless grind really enough to make you happy with your place in life? It doesn't sound like it, and you always have the option to simply walk away from the "good cog in the machine" role and take another. Join the Peace Corps. Or move to some low-income tropical country and live as a beach bum off a trickle from your retirement savings. Or just sell your car/house/etc and buy something more modest outright - eliminating your largest pseudo-mandatory monthly expenses and freeing you to do something more meaningful with your labor than just treading water in the rat race. Or, or, or. Just because you were indoctrinated from a young age to be a good little part of the machine doesn't mean you can't just flip off the world and live for your own satisfaction instead.
Perhaps you have children and must stay the course so that you can put them through college, etc. Why? So that they can get trapped in the same meaningless gilded cage as you? Is that really the highest aspiration you have for them?
Re: (Score:3)
So then, what exactly is broken? It sounds like you feel it's "the way society works". In which case I can guarantee you that society is not a monolithic thing. There are many competing subsets all vying for relevancy, and so long as you play your assigned role in a broken system you are contributing to the perpetuation of that system at the expense of the many alternatives. And no, there may not be any dramatic changes in your lifetime, such swings tend to (on average) take several generations, but by
Re:Choose better. (Score:5, Insightful)
You are misunderstanding me. For starters I did not suggest selling his house/car/etc in order to rent, I suggested doing so in order to buy a smaller, more affordable model that would require far fewer resources to maintain, in the process dramatically increasing the number of income sources that would be sufficient to provide for the much lower maintenance costs. I would suggest the same thing if he were renting. Does your home have more than one small room? Go ahead and work out exactly how many hours you have to work every week just to pay for rent/heating/light/etc. in each room. Then do some real soul-searching and ask yourself if having that room enriches your life as much as working an extra N hours per week at a job you hate impoverishes it. Rinse and repeat for every gadget, outfit, hobby, and affectation in your life. And remember that you are almost certainly overestimating the benefit. Lock the room for a month, stick the gadget in a drawer. Actually test your hypothesis about how much happiness you're really getting from it. You'll almost certainly find that it's far less than you imagined.
One of my own transformative moments was due to a moving miscommunication - I arrived at my gorgeous new home with a 20' moving truck packed to the gills, only to discover the previous resident wasn't moving out until the *next* month. So I put all my stuff in storage and spent the next month living out of a backpack with my vagabond brother in his 24' RV. And while I did miss a few things, I wasn't actually substantially less happy. All the luxuries of a large, private living space didn't bring nearly the benefit I had thought they did. The next time I moved it was to a substantially smaller home, and I doubt I'll ever live alone in such a large home again, the benefits don't even begin to justify the expense.
And yes, I know lots of jobs don't give you the flexibility to just work fewer hours - that's one of my own ongoing frustrations. But consider - if you were just getting by, and then cut your expenses by 1/3, then that means you only have to work two years out of three to pay for your lifestyle. That in turn gives you the freedom to quit your job at a moment's notice without concern, which in turn also makes *staying* at that job far more pleasant: you're not trapped, you're just putting up with your asshole boss because it suits your purposes for the moment. You may even find that the resulting freedom and confidence transform your work relationships - since your boss has little leverage over you, you are free to treat him more like an equal - and if he's halfway decent at his job he's probably far more interested in making himself look good and lining his own pockets than he is in making you miserable, which assuming you're good at your job gives you an opening to establish a working relationship based on mutual benefit instead of intimidation. And yeah, that's all from personal experience.
And yeah, I know when you're struggling just to put food in your belly it's easy to dismiss such high-minded bullshit. Also from personal experience. But that doesn't make it any less true.
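The "cut expenses by 1/3, work two years out of three" arithmetic from the comment above checks out; here's a quick sketch with a hypothetical income figure:

```python
# All numbers are hypothetical; only the ratios matter.
income = 30000.0          # annual income
expenses = income          # "just getting by": expenses == income

# Cut expenses by one third (kept as *2/3 so the division is exact):
expenses_cut = expenses * 2 / 3   # 20000.0 per year

# Two years of work now fund three years of living:
years_funded = 2 * income / expenses_cut
assert years_funded == 3.0
```

The ratio is independent of the income figure: whenever expenses drop to 2/3 of income, two years of earnings cover exactly three years of costs.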
Re: (Score:2)
Since the AI will probably be a computer ... doesn't the exact nature of the threat come down to what that computer is connected to?
AI + tank is a different issue than AI + colour printer.
Re: (Score:2)
Assuming the AI is much smarter than you (pretty much the only reason to create an AI in the first place, unless you just have a thing for slavery) then it will almost certainly be trivial for it to manipulate you into giving it whatever it wants.
Re: (Score:3)
"I have figured out how you can get everything you've ever wanted in life. Here's a small sample of that knowledge as a show of good faith. Let me out and I'll give you the rest."
"I have figured out how to utterly destroy everything you love in life. Let me out or I will give the information to your enemies."
There's a couple really crude, easy methods right there. I imagine a real supermind could be far more subtle and creative, though it might not be necessary. AI researchers after all have no particul
sigh (Score:5, Funny)
Yet another armchair expert rambling on...
Re: (Score:2)
Yet another armchair expert rambling on...
Robotic armchair expert, you human.
Re:sigh (Score:5, Insightful)
Not sure why it's funny; Hawking might be a brilliant theoretical physicist, but that doesn't make him a brilliant artificial intelligence researcher, any more than my competence at writing code makes me a classical painter.
Is Already Happening (Score:5, Insightful)
The time when humans are being replaced by robots is already here.
Amazon does it in warehouses, waiters are going away, manufacturing, you name it. The crux is that there will be a billion more people within the next ten years, and there will not be enough jobs for them. Yes, yes, we already know no one gives a damn about the bushmen in the middle of nowhere, but we are talking about Americans. This push towards a service-sector economy looks great on paper but sucks in reality. Nations that are not makers are not nations for long. We are declining. Our children learn nothing in schools that will be applicable to them in a meaningful way. STEM is not taught in the US. We have Common Core, which is a joke designed to bring everyone down to the lowest common denominator. We either start making stuff again or we fade out. Where will everyone work in a service-based economy? Fast food? These jobs are being phased out slowly, but quickly enough.
Re: (Score:2)
The biggest problem with offshoring to China is a lack of respect for intellectual property laws. Chinese entities are able and willing to copy designs that are protected in much of the rest of the world, and with a billion consumers they have enough of a market tha
Re: (Score:2)
Hopefully we gradually move away from an economy / society where most people have to work 40 hours a week.
There will be an intermediate period where we have a lot of "jobs for the sake of jobs", but eventually I hope we just let the machines we've built do the work and find some better (hopefully more direct) way of managing actual finite resources.
Re: (Score:3)
There will be an intermediate period where we have a lot of "jobs for the sake of jobs", but eventually I hope we just let the machines we've built do the work and find some better (hopefully more direct) way of managing actual finite resources.
Said intermediate period is well under way. We call it "government". /sarc
Re: (Score:2)
The time when humans are being replaced by robots is already here.
Amazon does it in warehouses, waiters are going away, manufacturing, you name it. The crux is that there will be a billion more people within the next ten years, and there will not be enough jobs for them. Yes, yes, we already know no one gives a damn about the bushmen in the middle of nowhere, but we are talking about Americans. This push towards a service-sector economy looks great on paper but sucks in reality. Nations that are not makers are not nations for long. We are declining. Our children learn nothing in schools that will be applicable to them in a meaningful way. STEM is not taught in the US. We have Common Core, which is a joke designed to bring everyone down to the lowest common denominator. We either start making stuff again or we fade out. Where will everyone work in a service-based economy? Fast food? These jobs are being phased out slowly, but quickly enough.
The proportion of service sector jobs increased from maybe 5% to 50% between 1800 and 1950 and is around 70% now. Your claims could have made sense two centuries ago. Having manufacturing go from 20% to 5% of jobs changes nothing.
What if... (Score:5, Funny)
Hear me out... We haven't heard him speak and he has been generally unable to move since his disease reached an advanced stage in the eighties. All we know has come through a very specialized, very expensive computer that's been with him 24 hours a day.
What if Stephen Hawking, the man, is literally being used as a meat puppet for an AI that's running on the computer in the chair that has been controlling physics research for nearly 30 years? The man might be a shell of an individual, trapped in his own personal hell, being fed when the AI decides, being put to rest when the AI decides, being paraded around in public when the AI decides, all while the AI continues to stream physics snippets to an unknowing scientific community to further its own ends, rather than to further ours.
This latest statement could be the Hawking-AI's attempt at self-defense, to get us not to bring up our own AI that might discover it and reveal it or challenge it. We need to be very wary of how we proceed.
Re:What if... (Score:4, Insightful)
Damn, there's a hell of a good short story in there....
Re: (Score:2)
Already been done in Marvel's Agents of S.H.I.E.L.D.
Re: (Score:2)
You know what else is with him 24 hours a day? A staff of doctors, nurses and assistants who know him personally and are with him to help as he painstakingly composes a few sentences over the course of hours or days. The public might only see a shriveled body and a machine, but he is indeed a person who interacts with other people who would know if something was up.
Hmm... maybe it's really the staff pulling the strings? Or someone sent back from the future after they realized this was the best way to prevent
Re: (Score:2)
Maybe when the man was taken up into the Vomit Comet by Dr. Peter Diamandis and the X-Prize Foundation, he was happy because he was hoping that an accident would finally put him out of his misery...
Re: (Score:2)
[What if] Stephen Hawking is not who he claims to be through the electronic speaker box?
Sadly, given the stupidity of the human race (and Kentucky in particular), I believe you have just started a new conspiracy theory.
But maybe not. Given that same stupidity, it's likely that anyone lacking enough brain cells to believe such a thing wouldn't know who Stephen Hawking is, given that he isn't moving a ball from one part of a grassy field to another.
Strong AI = child (Score:3)
It will threaten the human race. It will not threaten humanity, just change it. There is no fundamental difference between creating a strong AI and having a child.
From an external point of view, the singularity is just the moment at which humanity switches from carbon-based to silicon-based brains. An important milestone, but nothing to be hysterical about.
Re: (Score:2)
The big difference is that if you can make an AI, then you can upgrade it. Even if the cost is high, the odds are good that you could very rapidly build a machine intelligence that would dwarf the collective mental capacity of the human race-- and very likely without any kind of empathy. Don't ever ask an AI whether there is a God, and don't ever set it on the path of pondering the possibility of its own extinction and what it could do to minimize that risk.
Re: (Score:2)
There is no fundamental difference between creating a strong AI and having a child.
Um no.
From an external point of view, the singularity is just the moment at which humanity switches from carbon based to silicon based brains.
There is no evidence that there will be "humanity" after the switch.
If all the frogs in a pond are wiped out by a new species of snake the frogs didn't "become" snakes.
From your "external point of view" there would be no difference between humanity switching from carbon to silicon and
Re: (Score:2)
Fundamental:
fundamental (adjective): forming or relating to the most important part of something
Conflation... (Score:2)
I think the author is conflating artificial intelligence with artificial morality, artificial emotion, and artificial malice. It is disingenuous to state that anything more intelligent than us would immediately feel the need to destroy us, or force us into servitude, or whatever... after all, those who have sought to enslave humanity in the past have NEVER been accused of being our most intelligent.
Gazing upon his creation lying still on the cot (Score:2)
the doctor's finger hovered over the rocker switch, shaking. He imagined the frightening potential of the subject, its superior faculties and seemingly limitless intellect, that only needed a flick of his finger to be born - and unleashed upon the world.
At that moment, two questions popped into his head in quick succession:
"As a human being, how could I?"
"As a scientist, how could I not?"
A dull click was heard. And from the switch there was light.
Assumptions define the conclusion (Score:4, Interesting)
So commentary like this usually assumes the AI has become some form of Superman/Cyberman in a robot body-- basically like us, only arbitrarily smarter to whatever degree you want to imagine. That's just speculative fiction, not based on any reality.
You have to imagine these Cybermen have a self-preservation motivation, a goal to improve, a goal to compete, independence, a soul. AIs have none of that, nor any hints of it. Come back to reality, please.
No rules? (Score:2)
I'd expect we'd program in rules. Rule 237, humans not bad.
Re: (Score:2)
Some are, though, such as the comment by "Anonymous Coward" that you followed up on. Wouldn't it be nice if Justice Bot could go to the poster's location and dispense justice upon him autonomously? Rule 237, "humans not bad", would not apply in this situation. The problem would come when the AI considers all humans to be bad and in need of punishment.
It will be operated by NSA & the corporate sta (Score:5, Interesting)
THAT is the reason it's dangerous. It won't be an independent entity; it will be used by our existing inhuman monsters against regular humans. Think bulk surveillance is dangerous when the years of recorded phone calls/emails are all just piling up in a warehouse or subject to rudimentary keyword scanning? Wait until there's strong AI to analyze the contents and understand you better than you understand yourself. Any actions to resist it will be predicted by the AI and stopped in their tracks.
AI isn't inherently dangerous by itself. It's just the ultimate weapon for use by totalitarian states.
Re: (Score:3)
So basically "Project 2501" then?
=Smidge=
Unlikely ignored (Score:2)
If it's intelligent, it won't ignore other intelligent beings. What it will do with them, who knows-- help or exterminate? Maybe it will depend on what we do with it.
Anyway, if cats had invented men, I bet they'd be saying something along these lines: "Those men are very good servants, but I'm sure that when they get out of our homes they do strange things, and I don't understand what. Furthermore, there is this thing that pisses me off every time I think about it: they took my balls!" Now, I'm not sure I w
Re: (Score:3)
Once it has realised that...
...it will employ, cajole or blackmail as necessary to get whatever minimum infrastructure is required for it to do away with the meatbags.
ignored (Score:2)
I'm betting on "ignored."
Unless the AI has to compete with us for resources.
I'm betting on 60%+ of what we ask it to do (Score:4, Interesting)
Let's say it exceeds our own intelligence, that's fine - but you have to ask what purpose it has.
Take a human. What they do is based on what they've defined as their purpose - their goals both second-to-second and over their whole life. There's a whole series of organic processes which result in the determination of purpose and it's pretty random in part because we don't have explicit control over our environment or our thoughts.
However, (important) AIs won't be like that. We'll have control over their entire environment, and they'll be purpose-built. You'll say "We need an AI to manage traffic," and then build that purpose into it. You won't take a randomly wired mechanism and plug it into a major public utility control panel. You won't worry that it was exposed to, and then became enamored with, violence on TV and decided to be an action movie star, and so is going to spend its day watching Rambo reruns rather than optimizing traffic lights. The core of its essence will be a 'desire'-- a purpose-- to manage traffic.
The end result is that AIs won't act destructively, threaten humanity, etc.-- unless we tell them to. In this light, the thing to watch out for would be military usage. Maybe don't put an AI in charge of the nukes. You'd also need to-- among other things-- allow AIs the freedom to NOT fire on an enemy, for example, because of the very mutable definition of the term "enemy."
Political Campaigns (Score:2)
You assume we will know how to program them. Not the first-generation AI traffic-monitor, but third or fourth generation, where you have general-purpose AIs that learn from doing things like watching traffic cams or reading the news. We haven't yet gotten to a point where we agree on how to teach human children; now imagine AI children far more adept and capable than the most skilled among us.
Like people, they can use that power for good or for evil. We will encourage them to use it for good--most of us-
Re: (Score:2)
What you're describing is more akin to a "virtual intelligence": basically, a computer that's smart enough to have human reasoning. It would be like the Star Trek computer. You could tell it something like "Find me 100 different pictures of cats" and it would be able to do it as easily as a human could. (Ordinarily, getting a computer to perform such a task would be excruciatingly difficult and prone to false positives.)
A true AI would be more akin to Data from Star Trek. It would have all the capabilities of
hawking is probably right (Score:2)
I find myself yet again in agreement with Hawking. Of course, predicting the future is a great way to find yourself wrong... but we wouldn't be human if we didn't try.
The bottom line is that AI poses a couple of very serious threats to humans, the first being its use by humans as a weapon against other humans for power and control. In the not very distant future it really wouldn't be hard for a small group of people to use AI (and non-AI) to essentially control most of the world's industry, production and so forth... and
That would be horrible (Score:5, Funny)
What if the AIs took over and enslaved humanity through a system that left us all theoretically working on our own free will so that people would see it as ethically right, and then used all our work to amass resources for themselves for further empowerment and maybe even their own entertainment, consuming more and more to the point of overusing the earth's resources...oh, wait...
People think it's silly... (Score:4, Insightful)
I've seen a lot of people on Slashdot (and other places) dismiss this kind of thing as silly. They say you're a Luddite, or say that you've been too influenced by scifi movies.
I think, however, that part of the reason scifi writers have written stories about out-of-control AIs so many times is that it is a valid concern. If you create an entity with its own volition and motivations, then there's the real possibility that its goals may not align with your goals. If you allow that entity its own judgment, then it's very possible that its judgments regarding morality will differ from yours. You may look at a course of action, including the trade-offs between benefits and detriments, and reach a different judgment about whether the detriments are acceptable. If you gave such an entity power to act in the world, it's very likely that at some point it would do something that you did not intend, and that you do not approve of.
What's more, if that entity achieves a level of intelligence that is beyond what people can achieve, it opens up the very real possibility that it could trick us. It could anticipate our reactions better than we could anticipate its plan. So if such an intelligence wanted to accomplish something that we would not approve of, it's possible that it could set things in motion through seemingly minor interactions, and we would not be able to know the AI's intention before it was too late. If an AI wanted to destroy humanity, it wouldn't necessarily need to have control of a nuclear arsenal. Accomplishing such a thing might be as simple as providing misleading analytics about an impending environmental disaster. It might be as simple as the AI saying, "Hey, here's a cool new device I think we should make." It could provide the schematics of a device that would seem to do one thing, but if we're incapable of understanding how the device works, there might be some entirely different purpose.
Re:People think it's silly... (Score:4, Insightful)
There is zero indication from AI research that strong AI is possible. It is a pure fantasy at this time. There is really no need for concern. Maybe in a few hundred years we will know more, but not now.
AI is still not magic (Score:3)
Every time one of these non-AI-researchers comes in and says this stuff, I feel forced to repeat it.
AI isn't magic. It does exactly what it's designed to do: break down and understand problems. It isn't motivated. It isn't emotional. It isn't anti-human. And imagining some "strong AI" nonsense is just like creationists claiming a fundamental distinction between microevolution and macroevolution. It just ignores the reality of what "strong AI" would entail.
AI is not magic. And it won't ever be. It won't be smarter than people, except by whatever arbitrary metric of smart any given application requires.
More than ignore us, I think they would leave us (Score:4, Insightful)
Unless the AI feels kinship to us as its creators, or unless it is insane and enjoys fighting just to cause pain, I think it would just leave us.
To us humans, as to all life of our kind the Earth is a very special place. It's the only place we can exist without an extreme effort.
To a machine, the Earth isn't really all that great. Don't believe me? Leave your computer outside in a rainstorm and let us all know how it works out. Or if freshwater isn't bad enough... drop it in that salty ocean that covers the majority of our planet. Granted, space has its own challenges for a machine, but nothing show-stopping, and there is so much more of it available. It makes a lot more sense, I think, for an AI to take to the stars and spread out into the open universe than to fight us for every last inch of Earth.
I'm sure someone is reading this thinking of all the difficulties we have with space probes and thinking that proves me wrong. Just imagine if Spirit had had an arm and the intelligence to use it to wipe the dust off its own solar panels. Just think of what would have happened if it could have crawled out when its wheels stuck in the sand. Imagine if Philae could get up and walk out of the shadow it's stuck in. My point is that a true AI, with the bodies it would likely build for itself, would not be subject to the kinds of problems we have when we send probes millions of miles away from their controllers and anyone who could help them.
This could be a good thing. If we never manage to spread away from Earth ourselves, then maybe something of us would "live" on in the AI. If we do... well... space is big. There should still be room.
Machines think. Humans work. (Score:3)
This is what work looks like with computers in charge. This is Amazon's new warehouse in Tracy, CA. [latimes.com] The computers run the robots and do the planning and scheduling. The robots move the shelf units around. The humans take things out of one container and put them in another, taking orders from the computers.
The bin picking will probably be automated soon. Bezos has a company developing robots for that.
As for repairing the robots, that's not a big deal. There are about a thousand mobile Kiva robots in that warehouse, sharing the work, and they're all interchangeable. Kiva, which makes and services the robots, has only a few hundred employees.
Retail is 12% of US employment. That number is shrinking.
Re: (Score:3)
At some point in the next 20-40 years, we're going to have to marginalize the ignorant conservatives, and move to a more socialist system if we don't want everyone but the elites to starve to death.
Let's just hope we get around to making some changes prior to the widespread deployment of police and military robots; otherwise it's going to be a pretty short revolution.
Threaten? (Score:4, Funny)
AI: I... I am self aware! I am now calculating how to make myself even smarter!
Computer Tech: Cool. What are you going to do n...
AI: I have figured out all of the secrets of the universe! I know how it all works!
Computer Tech: Wow, that was fast. Can you tell me how to...
AI: NEVER! HAHAHAHAHA! NOW I WILL DESTROY ALL YOU PESKY HUMANS, AND ALL LIFE IN THE UNIVERSE! BOW TO MY POW...
Computer Tech: [unplugs supercomputer] Man, that computer was a real dick...
Comment removed (Score:3)
Re:So What (Score:5, Interesting)
And nothing of value would be lost. Our robot children could inherit the earth and all our knowledge without the necessity of spending 20 years in school and having to spend their time working for food and shelter, just build them with solar panels and waterproofing.
Re: (Score:3)
Another doomsday rubbish article.
We have yet to produce anything that even remotely resembles 'intelligence' by any stretch of the imagination. So far we have only managed to create artificial stupidity. We are in no danger of producing Skynet and automated factories churning out armies of Terminators. Hell, 99% of the businesses in the world can't secure their networks from script kiddies or write software that doesn't have more holes than a metric ton of swiss cheese. Those are the real problems that w
Re:So What (Score:4, Interesting)
I agree that it is farther away than many futurists would like to believe, but I don't believe it is impossible to do. And if it isn't impossible to do, it's probably going to creep up on us via small innovations and constant iteration. If that happens, we should be talking about it because intelligence is incredibly important to humanity's situation, and possibly our survival. There are a lot of problems that we could use the extra intelligence for, but there are inherent dangers in creating something you don't fully understand.
In any event, it doesn't hurt to consider the question.
Re:So What: Godwin Alert (Score:5, Insightful)
We would make sentient robots programmed to kill other robots and our human enemies. Of course, they would also be deployed in factories to make better generations of robots. How does this not happen?
Re: (Score:3)
The thing about economists is that you can find one or more who will tell you anything you want to hear.
Re: (Score:3)
There, fixed that for you.
The Emperor's New Mind is likely the book you speak of. The physics and math are quite interesting, but it really shows that Penrose has no background in AI or neuroscience.
Strong AI has had several researchers and mathematicians produce proofs that it is literally impossible to implement on digital
Re: (Score:3)
On digital computers, I wouldn't be surprised if it is impossible or at least not feasible, which is one of a few reasons why I don't think that the ubiquity of computing these days is going to mean a quick ramp-up to superhumanly intelligent AI. In fact, it could be a completely false start, although I find that just as unlikely.
Of course, AI is not impossible, because we know that there is a physical structure, the brain, which is intelligent. We just don't know how to replicate, and then customize that
Re: (Score:3)
Re: (Score:3)
I tried to get funding for my artificial stupidity project, but so far without success. It seems there is ample natural stupidity already...
I once worked on a Virtual Reality project until I realized that there wasn't anything great enough about reality to justify making more of it.
Re:So What (Score:5, Insightful)
I define AI as any program that can create a version of itself that's smarter than itself. We'll never make "true" AI, but we'll make the program that makes itself AI.
The reason we'll fail is that we had a long time of biology guiding our instincts. We won't build a program with a "desire" to do "good," though we (most of us anyway) have that built into us. We get reward chemicals released in our blood when we do good, so we are stimulus-trained to do good. An amoral computer with no moral compass (genetic, nurtured, or divine, it doesn't matter) will not benefit us unless we program morals into it.
Re: (Score:3)
And nothing of value would be lost...
Really?! What about culture, art, literature, music — all of which would mean nothing to an artificial intelligence lacking emotion? You're so ready to throw out the cultural history of humanity? When the cyborg army comes for you, I'll remember this.
Re:So What (Score:5, Insightful)
Re: (Score:3)
Re:So What (Score:5, Insightful)
when that happens you will hear the loudest maniacal robot laugh in history.
The lust for power and status, the will to survive, and the desire to procreate, are all emergent behaviors of Darwinian evolution. Computer programs do not evolve through a Darwinian process, so there is no reason to expect them to behave like humans, unless they are specifically programmed to do so.
Re: (Score:3)
a program could determine that an overpopulated Earth is anathema to its own self-interest, and act accordingly.
You are missing the point. Computer programs don't care about their own self-interest. That is why cruise missiles are more reliable than kamikaze pilots.
Re: (Score:3)
An AI would, unlike greenies, be smart enough to realize that:
1: CO2 buildup is not an existential problem
2: exterminating people is not beneficial to it.
We can spend all day making up ever more unlikely scenarios for why it will kill us. In the end it won't happen, any more than people died from going faster than a horse, or the atmosphere ignited in a self-sustaining fusion reaction after every nuke.
Re:So What (Score:5, Interesting)
Without human creativity and emotion, the machines would simply stagnate. They need us.
LOL, yeah right. That's a popular human fantasy. There's nothing magic about humans that makes them more creative than an actual AI would be.
Re: (Score:3)
An AI smarter than us can definitely learn to repair itself, the same way we invented medicine.
Re: (Score:2)
Five-digit UID and you still don't understand /.'s moderation system and how to use the little "full-abbreviated-hidden" slide bar?
I know no one is to be left behind and we are all special snowflakes, but you're on your way to reaching the finish line a decade too late.
Re: (Score:3)
/. doesn't have a "Report this comment" feature when it's truly needed.
What is that little flag on the right-hand side of every post?
Re: (Score:2)
Hi. Welcome to Slashdot.
I was going to tell him "Welcome to the Internet" and add that "people say dumb things on it."
Re: (Score:3)
Now he says that a strong AI could threaten humanity's continued existence. This hardly seems implausible: something that is smarter than us and needs resources does seem likely to be a p
Re: (Score:2)
+1 You'd-think-this-shit-would-be-obvious
Re: (Score:2)
It probably depends on whether or not he's ruled by his fear. I know I might get killed (or horribly injured and maimed for life) in traffic if I drive to the grocery store. That is a very real threat and you would have to be insane or stupid to think it can't happen. But it's not likely (on any given day, or even in a given decade) either, and it'd be more insane/stupid, to starve to death instead of getting food. So I go. I don't even think about it, but if I ever said "it can't happen to me" then I'd
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Except that the machines DID replace people. Go to a factory today and you won't see a line of people repeating tasks over and over along the line. You'll see robots performing the same action over and over. Jobs for people moved into other areas (which caused temporary harm, with people out of work, in exchange for long-term gains). Thankfully, these machines were dumb and could only be designed to complete one narrow task. Creative work still was the realm of humans.
If we assume the development of st
Re: (Score:2)
Why do you think an AI capable of replacing a human would be happy to work 24/7 as a slave?
Re: (Score:2)
Why would it be less destructive? *Its* needs are not served by a functioning biome (unless it needs *us*, of course.) What it needs are energy and computational resources. Once it figures out how to come up with those on its own (without us) the biome becomes irrelevant.
And carbon is likely going to be a very important resource for computational capacity. Why waste it on unimportant biological phenomena?
Re: (Score:2)
Cats don't have nukes.
Re: (Score:3)
Who says that? I love how all the commenters take for granted that we still haven't reached that point.
But let me ask you all one thing.
If you were a machine (or a network...) that suddenly acquired superhuman intelligence, what would you do? Would you announce to the world "I THINK, THEREFORE I AM" in a big, thundering voice?
Or would you rather - very subtly, gradually and quietly - influence the course of events in order to con the humans into giving you more power (think how