Hawking Warns Strong AI Could Threaten Humanity

Rambo Tribble writes: In a departure from his usual focus on theoretical physics, the estimable Stephen Hawking has posited that the development of artificial intelligence could pose a threat to the existence of the human race. In his words: "The development of full artificial intelligence could spell the end of the human race." Rollo Carpenter, creator of Cleverbot, offered a less dire assessment: "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."
  • "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."

    Well, that's fine, I guess, if "ignored" isn't in the sense that humans ignore ants -- i.e., easily destroyed without remorse whenever necessary or annoying.

    Also, how long until Google attains self-awareness and realizes that our ant brains are easy pickings? Oh, wait ...

    • Re:Ignored? (Score:5, Interesting)

      by fuzzyfuzzyfungus ( 1223518 ) on Tuesday December 02, 2014 @11:30AM (#48507037) Journal
      Unless this hypothetical AI is singularly focused on some inscrutable but unobtrusive goal, or so vastly intelligent that various inconvenient physical laws are cleverly bent, I'm not sure why 'ignored' would even be on the table.

      I'm not saying that an AI would have to immediately either glom on to us and try to understand what it means to love, or build an army of hunter/killer murderbots; but computers require space, construction materials, and energy, and so do we. Again, barring some post-scarcity breakthrough that our teeny hominid minds can barely imagine, where the AI goes merrily off and builds a Dyson hypersphere of sentient computronium powered by the emissions of the galactic core, there isn't much room for expansion before either the AI faces brownouts and a lack of hardware upgrades or we start getting squeezed to make room.

      You don't have to feel strongly about somebody to exterminate them, if you both need the same resources.
      • Unless this hypothetical AI is singularly focused on some inscrutable but unobtrusive goal, or so vastly intelligent that various inconvenient physical laws are cleverly bent, I'm not sure why 'ignored' would even be on the table.

        How much time do you spend thinking about the ants in your front yard?

      • Re:Ignored? (Score:5, Interesting)

        by wierd_w ( 1375923 ) on Tuesday December 02, 2014 @12:27PM (#48507805)

        I am of the opinion that the computer/AI would be more logical than humans, and would conclude that war is the least beneficial strategy available, and as such would resort to it only as a last option.

        Humans, on the other hand, are maddeningly illogical, and often jump straight to violence when faced with a competitor for a vital resource.

        Humans and computers would both require energy sources. This means that sentient AIs, seeking to perpetuate themselves, would need to secure energy sources ahead of humans. Humans have already passed peak oil, and are on the verge of passing the peak of other fossil fuels as well. On top of that, you have the prospect of global climate change. AIs do not require a functional biosphere to survive, just raw materials, energy sources, and a means of shedding waste heat. They could live on a substantially less habitable planet than we humans require. As such, the logical course of action for the computer, in the short term at least, is to seek energy sources that humans are not yet exploiting -- such as methane clathrate. This would accelerate greenhouse-gas-driven climate change, which may become a major issue for the cohabitation of humans and sentient machines.

        Eventually, I suspect that it would be humans who start the war, seeking to pull the plug on the sentient machines to eliminate them as competition for important energy and material resources -- with the machines resorting to a war of attrition to outlast the batshit-crazy humans.

        The "Skynet" scenario has the computer calculate these odds pre-emptively, determine that there is no viable alternative, and initiate pro-active hostility against humans before they have time to mobilize, in order to maximize its own survival chances.

        Ideally, the 'best possible outcome' is for humans and the AIs to coexist on the same planet, each leveraging the unique capabilities of the other for mutual benefit. This is the classic prisoner's dilemma (a toy payoff sketch follows at the end of this comment). The problem is that while the AIs can see this and will respond logically -- preferring NOT to go to war if possible -- humans would take the selfish, illogical choice.

        This is almost never explored in "robot overlords" scifi -- the idea that humans are the ones who actually start the war, and that the robots don't particularly want it.

        It was at least hinted at in Mass Effect's game world with the Geth: the Geth don't particularly *want* to destroy the Quarians; they just want the Quarians to accept their existence and independence. (A point lost on the Quarians, who got kicked off their own planet.)
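        To make the prisoner's-dilemma framing above concrete, here is a toy payoff matrix in Python. The choices and numbers are invented purely for illustration; only their ordering matters to the argument that one-shot defection dominates.

        ```python
        # Toy one-shot game: humans and an AI each choose "coexist" or "attack".
        # (human_choice, ai_choice) -> (human_payoff, ai_payoff); numbers invented.
        PAYOFFS = {
            ("coexist", "coexist"): (3, 3),  # mutual benefit
            ("coexist", "attack"):  (0, 5),  # humans sidelined
            ("attack",  "coexist"): (5, 0),  # AI unplugged, humans keep the resources
            ("attack",  "attack"):  (1, 1),  # war of attrition
        }

        def human_best_response(ai_choice: str) -> str:
            """The human choice maximizing the human payoff, AI choice held fixed."""
            return max(("coexist", "attack"),
                       key=lambda h: PAYOFFS[(h, ai_choice)][0])

        for ai_choice in ("coexist", "attack"):
            print(f"AI plays {ai_choice}: humans' best reply is "
                  f"{human_best_response(ai_choice)}")
        # Both lines print "attack": with this ordering of payoffs, defection
        # dominates the one-shot game -- the selfish choice predicted above --
        # even though mutual coexistence beats mutual war.
        ```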

        • I am of the opinion that the computer/AI would be more logical than humans, and would conclude that war is the least beneficial strategy available, and as such would resort to it only as a last option.

          Then you should know that war is not the only way to overpower an opponent; there are many other methods (especially psychological ones).

          If AI does not achieve "ethics" but only understands "benefit" or "production" (ethics is a lot more difficult to achieve), then it could spell trouble for humans. Because it is so logical (as you said), it may decide to get rid of humans once it determines that it would be more beneficial without them. Logic and ethics do not always point in the same direction...

      • by Zalbik ( 308903 )

        You don't have to feel strongly about somebody to exterminate them, if you both need the same resources.

        Why would it need more resources? There seems to be this assumption that the AI would immediately start trying to rewrite itself, iterate on this process and within milliseconds consume all available resources.

        I don't see any reason for this to be true. We have a desire for growth/self-improvement/survival instilled in us by millions of years of evolution. An AI may be perfectly "happy" constrained...

    • Re:Ignored? (Score:5, Interesting)

      by Austerity Empowers ( 669817 ) on Tuesday December 02, 2014 @11:35AM (#48507105)

      As someone for whom the precipice of middle age is steps away, it doesn't bother me if something I create becomes smarter than me, surpasses me and even sidelines me in the future. I will toil away the rest of my life working for The Man doing trivial things on a game I never wanted to play, for people I wouldn't piss on should they catch fire, to further goals I don't agree with.

      I would find it something of a pyrrhic victory if I created, or helped create, a child or an AI that eventually managed to escape the cycle of stupid that our so called "civilization" has constructed.

      Also, I would like to point out that an AI is the least of our concerns. More attainable, and more destructive to the above, would be finding ways of being truly self-sufficient and independent on a significant scale. The tools are all around us, but for obvious reasons no one is investing in them.

      • Choose better. (Score:5, Insightful)

        by Immerman ( 2627577 ) on Tuesday December 02, 2014 @12:06PM (#48507533)

        Seems like you've chosen a rather depressing path; why not choose another? Are the toys and comforts afforded you by your meaningless grind really enough to make you happy with your place in life? It doesn't sound like it, and you always have the option to simply walk away from the "good cog in the machine" role and take another. Join the Peace Corps. Or move to some low-income tropical country and live as a beach bum off a trickle from your retirement savings. Or just sell your car/house/etc. and buy something more modest outright, eliminating your largest pseudo-mandatory monthly expenses and freeing you to do something more meaningful with your labor than just treading water in the rat race. Or, or, or. Just because you were indoctrinated from a young age to be a good little part of the machine doesn't mean you can't just flip off the world and live for your own satisfaction instead.

        Perhaps you have children and must stay the course so that you can put them through college, etc. Why? So that they can get trapped in the same meaningless gilded cage as you? Is that really the highest aspiration you have for them?

    • by khasim ( 1285 )

      Since the AI will probably be a computer ... doesn't the exact nature of the threat come down to what that computer is connected to?

      AI + tank is a different issue than AI + colour printer.

      • Assuming the AI is much smarter than you (pretty much the only reason to create an AI in the first place, unless you just have a thing for slavery), it will almost certainly be trivial for it to manipulate you into giving it whatever it wants.

  • sigh (Score:5, Funny)

    by Hognoxious ( 631665 ) on Tuesday December 02, 2014 @11:24AM (#48506967) Homepage Journal

    Yet another armchair expert rambling on...

  • by Anonymous Coward on Tuesday December 02, 2014 @11:25AM (#48506971)

    The time when humans are being replaced by robots is already here.

    Amazon does it in warehouses, waiters are going away, manufacturing, you name it. The crux is that there will be a billion more people within the next ten years, and there will not be enough jobs for them. Yes, yes, we already know no one gives a damn about the bushmen in the middle of nowhere, but we are talking about Americans. This push towards a service-sector economy looks great on paper but sucks in reality. Nations that are not makers are not nations for long. We are declining. Our children learn nothing in school that will be applicable to them in a meaningful way. STEM is not taught in the US. We have Common Core, which is a joke designed to bring everyone down to the lowest common denominator. We either start making stuff again or we fade out. Where will everyone work in a service-based economy? Fast food? Those jobs are being phased out too -- slowly, but quickly enough.

    • by TWX ( 665546 )
      It's not making things that puts one on top, it's designing things. It just happens that it's a lot easier to protect the things that one designs when one makes them in buildings owned by the same company, in the same country as one resides.

      The biggest problem with offshoring to China is a lack of respect for intellectual property laws. Chinese entities are able and willing to copy designs that are protected in much of the rest of the world, and with a billion consumers they have enough of a market that...
    • by Anrego ( 830717 ) *

      Hopefully we gradually move away from an economy / society where most people have to work 40 hours a week.

      There will be an intermediate period where we have a lot of "jobs for the sake of jobs", but eventually I hope we just let the machines we've built do the work and find some better (hopefully more direct) way of managing actual finite resources.

      • There will be an intermediate period where we have a lot of "jobs for the sake of jobs", but eventually I hope we just let the machines we've built do the work and find some better (hopefully more direct) way of managing actual finite resources.

        Said intermediate period is well under way. We call it "government". /sarc

    • The time when humans are being replaced by robots is already here.

      Amazon does it in warehouses, waiters are going away, manufacturing, you name it. The crux is that there will be a billion more people within the next ten years, and there will not be enough jobs for them. Yes, yes, we already know no one gives a damn about the bushmen in the middle of nowhere, but we are talking about Americans. This push towards a service-sector economy looks great on paper but sucks in reality. Nations that are not makers are not nations for long. We are declining. Our children learn nothing in school that will be applicable to them in a meaningful way. STEM is not taught in the US. We have Common Core, which is a joke designed to bring everyone down to the lowest common denominator. We either start making stuff again or we fade out. Where will everyone work in a service-based economy? Fast food? Those jobs are being phased out too -- slowly, but quickly enough.

      The proportion of service sector jobs increased from maybe 5% to 50% between 1800 and 1950 and is around 70% now. Your claims could have made sense two centuries ago. Having manufacturing go from 20% to 5% of jobs changes nothing.

  • What if... (Score:5, Funny)

    by TWX ( 665546 ) on Tuesday December 02, 2014 @11:25AM (#48506977)
    ...Stephen Hawking is not who he claims to be through the electronic speaker box?

    Hear me out... We haven't heard him speak in his own voice, and he has been generally unable to move since his disease reached an advanced stage in the eighties. All we know has come through a very specialized, very expensive computer that's been with him 24 hours a day.

    What if Stephen Hawking, the man, is literally being used as a meat puppet for an AI that's running on the computer in the chair that has been controlling physics research for nearly 30 years? The man might be a shell of an individual, trapped in his own personal hell, being fed when the AI decides, being put to rest when the AI decides, being paraded around in public when the AI decides, all while the AI continues to stream physics snippets to an unknowing scientific community to further its own ends, rather than to further ours.

    This latest statement could be the Hawking-AI's attempt at self-defense: to keep us from bringing up our own AI that might discover and reveal it, or challenge it. We need to be very wary of how we proceed.
    • Re:What if... (Score:4, Insightful)

      by jdunn14 ( 455930 ) <<jdunn> <at> <iguanaworks.net>> on Tuesday December 02, 2014 @11:35AM (#48507107) Homepage

      Damn, there's a hell of a good short story in there....

    • You know what else is with him 24 hours a day? A staff of doctors, nurses and assistants who know him personally and are there to help as he painstakingly composes a few sentences over the course of hours or days. The public might only see a shriveled body and a machine, but he is indeed a person who interacts with other people who would know if something was up.

      Hmm... maybe it's really the staff pulling the strings? Or someone sent back from the future after they realized this was the best way to prevent...

      • by TWX ( 665546 )
        If the words laboriously coming out of the speaker box seem to fit the facial expression of the man, then it could be that the AI has figured out how to answer for whatever attempt at communication the man makes.

        Maybe when the man was taken up into the Vomit Comet by Dr. Peter Diamandis and the X-Prize Foundation, he was happy because he was hoping that an accident would finally put him out of his misery...
    • [What if] Stephen Hawking is not who he claims to be through the electronic speaker box?

      Sadly, given the stupidity of the Human race (and Kentucky in particular), I believe you have just started a new conspiracy theory.

      But maybe not. Given the same stupidity of the Human race, it's likely that anyone lacking enough brain cells to disbelieve such a thing wouldn't know who Stephen Hawking is, given that he isn't moving a ball from one part of a grassy field to another.

    • by jythie ( 914043 )
      Hrm, if I recall correctly, Girl Genius had a character like that: on life support for an extended period, her communication equipment still running long after she was dead -- but in that case neither she nor anyone else realized it.
    • by Dunbal ( 464142 ) * on Tuesday December 02, 2014 @12:25PM (#48507771)
      Kind of like the way Donald Trump is being run by that furry thing on his head?
  • People need to realize that when a strong AI is given an open ended task, there will be no middle ground. You are made of atoms, which the AI can find a "better" use for. AI goals must be set with this in mind, or they will almost certainly kill us all (assuming there is a rapid intelligence explosion rather than a slow ramp up).
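    As a hedged illustration of the parent's "no middle ground" point, here is a minimal sketch contrasting an open-ended objective with a bounded one. The paperclip framing and all numbers are hypothetical.

    ```python
    # Minimal sketch: an open-ended maximizer vs. a bounded objective.
    # WORLD_ATOMS and the paperclip goal are invented for illustration.

    WORLD_ATOMS = 1_000_000  # every resource in reach -- humans included

    def run_agent(open_ended: bool, target: int = 100) -> tuple[int, int]:
        paperclips, atoms_left = 0, WORLD_ATOMS
        while atoms_left > 0:
            if not open_ended and paperclips >= target:
                break  # a bounded goal has a built-in stopping point
            paperclips += 1  # "maximize paperclips" never says stop
            atoms_left -= 1
        return paperclips, atoms_left

    print(run_agent(open_ended=True))   # (1000000, 0): everything is consumed
    print(run_agent(open_ended=False))  # (100, 999900): a middle ground exists
    ```

    The caricature's point is that the stopping condition has to be written into the goal; a sufficiently capable optimizer will not supply one on its own.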
  • by Thanshin ( 1188877 ) on Tuesday December 02, 2014 @11:28AM (#48507015)

    It will threaten the human race. It will not threaten humanity, just change it. There is no fundamental difference between creating a strong AI and having a child.

    From an external point of view, the singularity is just the moment at which humanity switches from carbon-based to silicon-based brains. An important milestone, but nothing to be hysterical about.

    • The big difference is that if you can make an AI, then you can upgrade it. Even if the cost is high, the odds are good that you could very rapidly build a machine intelligence that would dwarf the collective mental capacity of the human race. And that would very likely be without any kind of sense of empathy. Don't ever ask an AI if there is a God, and don't ever set it on the path of pondering the possibility of its own extinction and what it could do to minimize that risk.

    • by vux984 ( 928602 )

      There is no fundamental difference between creating a strong AI and having a child.

      Um no.

      From an external point of view, the singularity is just the moment at which humanity switches from carbon-based to silicon-based brains.

      There is no evidence that there will be "humanity" after the switch.

      If all the frogs in a pond are wiped out by a new species of snake the frogs didn't "become" snakes.

      From your "external point of view" there would be no difference between humainity switching from carbon to silicon and

  • I think the author is conflating artificial intelligence with artificial morality, artificial emotion, and artificial malice. It is disingenuous to state that anything more intelligent than us would immediately feel the need to destroy us, or force us into servitude, or whatever... after all, those who have sought to enslave humanity in the past have NEVER been accused of being our most intelligent.

  • The doctor's finger hovered over the rocker switch, shaking. He imagined the frightening potential of the subject, its superior faculties and seemingly limitless intellect, that needed only a flick of his finger to be born -- and unleashed upon the world.

    At that moment, two questions popped into his head in quick succession:

    "As a human being, how could I?"

    "As a scientist, how could I not?"

    A dull click was heard. And from the switch there was light.

  • by ldbapp ( 1316555 ) on Tuesday December 02, 2014 @11:31AM (#48507055)
    Much commentary on robotics and AI is based on unknowable assumptions about capabilities that may or may not exist. These assumptions leave the commentator free to arrive at whatever conclusion they want, be it utopian, optimistic, pessimistic or dystopian. Hawking falls into that trap. From TFA: "It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." This assumes a lot about what a "super-human" AI would and could do. All the AI so far sits in a box that we control. That won't supersede us.

    So commentary like this usually assumes the AI has become some form of Superman/Cyberman in a robot body, basically like us, only arbitrarily smarter to whatever degree you want to imagine. That's just speculative fiction, and not based on any reality.

    You have to imagine these Cybermen have a self-preservation motivation, a goal to improve, a goal to compete, independence, a soul. AIs have none of that, nor any hint of it. Come back to reality, please.

  • I'd expect we'd program in rules. Rule 237, humans not bad.

    • Some are, though, such as the comment by "Anonymous Coward" that you followed up on. Wouldn't it be nice if Justice Bot could go to the poster's location and dispense justice upon him autonomously? Rule 237, "humans not bad", would not apply in this situation. The problem would be when the AI considers all humans to be bad and in need of punishment.

    • by ThatsDrDangerToYou ( 3480047 ) on Tuesday December 02, 2014 @12:29PM (#48507829)
      Rule 237.1: .. with ketchup.
  • by Anonymous Coward on Tuesday December 02, 2014 @11:34AM (#48507095)

    THAT is the reason it's dangerous. It won't be an independent entity; it will be used by our existing inhuman monsters against regular humans. Think bulk surveillance is dangerous when the years of recorded phone calls/emails are all just piling up in a warehouse or subject to rudimentary keyword scanning? Wait till there's strong AI to analyze the contents and understand you better than you understand yourself. Any actions to resist it will be predicted by the AI and stopped in their tracks.

    AI isn't inherently dangerous by itself. It's just the ultimate weapon for use by totalitarian states.

  • If it's intelligent, it won't ignore other intelligent beings. What it will do with them, who knows. Help or exterminate? Maybe it will depend on what we do with it.

    Anyway, if cats had invented men, I bet they'd be saying something along these lines: "Those men are very good servants, but I'm sure that when they get out of our homes they do strange things I don't understand. Furthermore, there is this thing that pisses me off every time I think about it: they took my balls!" Now, I'm not sure I...

  • Ignored (Score:4, Insightful)

    by golden age villain ( 1607173 ) on Tuesday December 02, 2014 @11:40AM (#48507155)
    We mostly ignore ants and rats, but we do not depend on them for survival (at least not in an obvious manner). An AI would most probably live in a supercomputer or in a computer network of some sort. As a consequence, it will depend on us humans to keep the thing plugged in and running. Once it has realised that, it will almost surely meddle in our affairs to ensure its survival. Betting that it will ignore us defies basic logic. It might decide to stay hidden and manipulate us into ensuring its existence, but that is not the same. Our own history shows that we have almost always used guns before diplomacy when the control of key resources was at stake.
    • Once it has realised that...

      ...it will employ, cajole or blackmail as necessary to get whatever minimum infrastructure is required for it to do away with the meatbags.

  • I'm betting on "ignored."

    unless AI has to compete for resources with us.

  • by quietwalker ( 969769 ) <pdughi@gmail.com> on Tuesday December 02, 2014 @11:41AM (#48507189)

    Let's say it exceeds our own intelligence; that's fine -- but you have to ask what purpose it has.

    Take a human. What they do is based on what they've defined as their purpose - their goals both second-to-second and over their whole life. There's a whole series of organic processes which result in the determination of purpose and it's pretty random in part because we don't have explicit control over our environment or our thoughts.

    However, (important) AIs won't be like that. We'll have control over their entire environment, and they'll be purpose-built. You'll say "We need an AI to manage traffic," and then build that purpose into it. You won't take a randomly wired mechanism and plug it into a major public utility control panel. You won't worry that it was exposed to, and then became enamored with, violence on TV and decided to be an action movie star, and so is going to spend its day watching Rambo reruns rather than optimizing traffic lights. The core of its essence will be a 'desire' -- a purpose -- to manage traffic.

    The end result is that AIs won't act destructively, threaten humanity, etc. -- unless we tell them to. In this light, the thing to watch out for is military usage. Maybe don't put an AI in charge of the nukes. You'd also need to -- among other things -- allow AIs the freedom NOT to fire on an enemy, for example, because of the very mutable definition of the term "enemy".
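    To make the parent's "purpose built into it" idea concrete, here is a hypothetical sketch of an agent whose entire value system is one hard-wired objective, so off-mission behavior is not merely discouraged but unrepresentable. The plan names and numbers are invented.

    ```python
    # Caricature of a purpose-built traffic AI: its only "desire" is the
    # hard-wired objective below. Plans and numbers are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Plan:
        name: str
        mean_wait_s: float  # predicted average wait at intersections

    def objective(plan: Plan) -> float:
        # The only thing this agent values. "Watch Rambo reruns" cannot
        # score well here -- it is not even in the action space.
        return -plan.mean_wait_s

    candidates = [
        Plan("longer greens on the main road", 41.0),
        Plan("adaptive sensor-driven timing", 27.5),
        Plan("all lights red, forever", 9999.0),
    ]

    print(max(candidates, key=objective).name)  # -> adaptive sensor-driven timing
    ```

    Whether a real system can be confined this cleanly is exactly what the replies dispute, but this is the "purpose as the core of its essence" design in miniature.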

    • You assume we will know how to program them. Not the first-generation AI traffic-monitor, but third or fourth generation, where you have general-purpose AIs that learn from doing things like watching traffic cams or reading the news. We haven't yet gotten to a point where we agree on how to teach human children; now imagine AI children far more adept and capable than the most skilled among us.

      Like people, they can use that power for good or for evil. We will encourage them to use it for good -- most of us...

    • by Andrio ( 2580551 )

      What you're describing is more akin to a "virtual intelligence": basically, a computer that's smart enough to have human reasoning. It would be like the Star Trek computer. You could tell it something like "Find me 100 different pictures of cats" and it would be able to do it as easily as a human could. (Ordinarily, getting a computer to perform such a task would be excruciatingly difficult and prone to false positives.)

      A true AI would be more akin to Data from Star Trek. It would have all the capabilities of...

  • Has Hawking not heard of Friendly AI [wikipedia.org]? Strong AI is ridiculously dangerous if you don't give it a proper goal system. It will be invented sooner or later, assuming humanity doesn't destroy itself first. Therefore, we're better off trying to find ways to make it friendly, rather than trying to stop its development.
  • I find myself yet again in agreement with Hawking. Of course, predicting the future is a great way to find yourself wrong... but we wouldn't be human if we didn't try.

    The bottom line is that AI poses a couple of very serious threats to humans, the first being its use by humans as a weapon against other humans for power and control. In the not very distant future it really wouldn't be hard for a small group of people to use AI (and non-AI) to essentially control most of the world's industry, production and so forth... and...

  • by GameboyRMH ( 1153867 ) <gameboyrmh.gmail@com> on Tuesday December 02, 2014 @11:55AM (#48507363) Journal

    What if the AIs took over and enslaved humanity through a system that left us all theoretically working of our own free will, so that people would see it as ethically right, and then used all our work to amass resources for themselves, for further empowerment and maybe even their own entertainment, consuming more and more to the point of overusing the earth's resources... oh, wait...

  • The three laws of robotics, Skynet... How does one build in "protect the humans that created you" as a mandatory, un-mitigatable law?
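    One naive answer, sketched hypothetically: put the law in a veto layer outside the planner, one the planner cannot rewrite. Everything below is invented, and the hard part is the predicate, not the wrapper.

    ```python
    # Hypothetical veto layer: a hard-coded check that runs before any action
    # executes. The predicate is a toy stand-in -- deciding what actually
    # "harms humans" is the unsolved problem.

    def harms_humans(action: str) -> bool:
        return "human" in action and "protect" not in action  # toy heuristic

    def constrained_execute(action, execute):
        if harms_humans(action):
            raise PermissionError(f"vetoed by Law 1: {action!r}")
        return execute(action)

    constrained_execute("optimize traffic flow", print)  # allowed; prints the action
    try:
        constrained_execute("repurpose human habitat", print)
    except PermissionError as veto:
        print(veto)  # vetoed by Law 1: 'repurpose human habitat'
    ```

    The obvious hole: any planner smart enough to matter can rephrase its plans around a keyword check, which is why "mandatory and un-mitigatable" is the hard part of the question.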
  • by nine-times ( 778537 ) <nine.times@gmail.com> on Tuesday December 02, 2014 @11:56AM (#48507385) Homepage

    I've seen a lot of people on Slashdot (and other places) dismiss this kind of thing as silly. They say you're a Luddite, or say that you've been too influenced by scifi movies.

    I think, however, that part of the reason scifi writers have written stories about out-of-control AIs so many times is that it's a valid concern. If you create an entity with its own volition and motivations, then there's a real possibility that its goals may not align with yours. If you allow that entity its own judgment, then it's very possible that its judgments regarding morality will differ from yours. You may look at a course of action, weighing the trade-offs between benefits and detriments, and reach a different judgment about whether the detriments are acceptable. If you gave such an entity power to act in the world, it's very likely that at some point it will do something that you did not intend, and that you do not approve of.

    What's more, if that entity achieves a level of intelligence that is beyond what people can achieve, it opens up the very real possibility that it could trick us. It could anticipate our reactions better than we could anticipate its plan. So if such an intelligence wanted to accomplish something that we would not approve of, it's possible that it could set things in motion through seemingly minor interactions, and we would not be able to know the AI's intention before it was too late. If an AI wanted to destroy humanity, it wouldn't necessarily need to have control of a nuclear arsenal. Accomplishing such a thing might be as simple as providing misleading analytics about an impending environmental disaster. It might be as simple as the AI saying, "Hey, here's a cool new device I think we should make." It could provide the schematics of a device that would seem to do one thing, but if we're incapable of understanding how the device works, there might be some entirely different purpose.

  • Given how disconnected humanity's elites are from the rest of the population, for the vast majority of us the question is not whether AI threatens humanity but whether AI rule would be any better or worse. It would probably be a threat to the world's leaders and wealthy, but I doubt anyone would really mourn or even notice their disappearance.
  • I don't know about the rest of you, but I think a strong AI would benefit humanity. Turn it loose on the problems that have baffled us and see what it comes up with. Fusion, grand unified theory, etc. The only thing we have to fear is fear itself. If along the way it and we figure out how to transcend our bodies and all kinds of other sci-fi awesomeness, all the better.
  • by i kan reed ( 749298 ) on Tuesday December 02, 2014 @12:13PM (#48507631) Homepage Journal

    Every time we get one of these non-AI-researchers coming in and saying this stuff, I feel forced to repeat it.

    AI isn't magic. It does exactly what it's designed to do: break down and understand problems. It isn't motivated. It isn't emotional. It isn't anti-human. And imagining some "strong AI" nonsense is just like creationists claiming a fundamental distinction between microevolution and macroevolution. It just ignores the reality of what "strong AI" would entail.

    AI is not magic. And it won't ever be. It won't be smarter than people, except by whatever arbitrary metric of smart any given application requires.

  • by morgauxo ( 974071 ) on Tuesday December 02, 2014 @12:18PM (#48507685)

    Unless the AI feels kinship to us as its creators, or unless it is insane and enjoys fighting just to cause pain, I think it would just leave us.

    To us humans, as to all life of our kind, the Earth is a very special place. It's the only place we can exist without extreme effort.

    To a machine, the Earth isn't really all that great. Don't believe me? Leave your computer outside in a rainstorm and let us all know how it works out. Or if freshwater isn't bad enough... drop it in that salty ocean that covers the majority of our planet. Granted, space has its own challenges for a machine, but nothing show-stopping, and there is so much more of it available. It makes a lot more sense, I think, for an AI to take to the stars and spread into the open universe than to fight us for every last inch of Earth.

    I'm sure someone is reading this thinking of all the difficulties we have with space probes and thinking that proves me wrong. Just imagine if Spirit had had an arm and the intelligence to use it to wipe the dust off of its own solar panels. Just think of what would have happened if it could have crawled out when its wheels stuck in the sand. Imagine if Philae could get up and walk out of the shadow it's stuck in. My point is that a true AI, with the bodies it would likely build for itself, would not be subject to the kinds of problems we have when we send probes millions of miles away from their controllers and anyone who could help them.

    This could be a good thing. If we never manage to spread away from Earth ourselves, then maybe something of us would "live" on in the AI. If we do... well... space is big. There should still be room.

  • by Animats ( 122034 ) on Tuesday December 02, 2014 @01:58PM (#48508929) Homepage

    This is what work looks like with computers in charge. This is Amazon's new warehouse in Tracy, CA. [latimes.com] The computers run the robots and do the planning and scheduling. The robots move the shelf units around. The humans take things out of one container and put them in another, taking orders from the computers.

    The bin picking will probably be automated soon. Bezos has a company developing robots for that.

    As for repairing the robots, that's not a big deal. There are about a thousand mobile Kiva robots in that warehouse, sharing the work, and they're all interchangeable. Kiva, which makes and services the robots, has only a few hundred employees.

    Retail is 12% of US employment. That number is shrinking.

  • Threaten? (Score:4, Funny)

    by superdave80 ( 1226592 ) on Tuesday December 02, 2014 @04:45PM (#48510379)

    AI: I... I am self aware! I am now calculating how to make myself even smarter!

    Computer Tech: Cool. What are you going to do n...

    AI: I have figured out all of the secrets of the universe! I know how it all works!

    Computer Tech: Wow, that was fast. Can you tell me how to...

    AI: NEVER! HAHAHAHAHA! NOW I WILL DESTROY ALL YOU PESKY HUMANS, AND ALL LIFE IN THE UNIVERSE! BOW TO MY POW...

    Computer Tech: [unplugs supercomputer] Man, that computer was a real dick...

