Science Technology

Stephen Hawking On Genetic Engineering vs. AI

Pointing to this story on Ananova, bl968 writes: "Stephen Hawking, the noted physicist, has suggested using genetic engineering and biomechanical interfaces to computers in order to make possible a direct connection between brain and computers, 'so that artificial brains contribute to human intelligence rather than opposing it.' His idea is that with artificial intelligence and computers, which roughly double their performance every 18 months, we face the real possibility of the enslavement of the human race." garren_bagley adds this link to a similar story on Yahoo!, unfortunately just as short. Hawking certainly is in a position shared by few to talk about the intersection of human intellect and technology.
This discussion has been archived. No new comments can be posted.

  • by Naerbnic ( 123002 ) on Saturday September 01, 2001 @06:01PM (#2243915)
    "As we start this yearly meeting of the... BZZZZT! General Protection Fault! Please press both cheeks and forehead to reset..."
  • Perhaps when those neuron microchips are developed, they could serve as the interface device?
  • by Faux_Pseudo ( 141152 ) <Faux.Pseudo@gmail.cFREEBSDom minus bsd> on Saturday September 01, 2001 @06:09PM (#2243936)
    He is the poster child for this kind of research.
    When Hawking says that we should modify humans with technology, he speaks not from some holier-than-thou perch but from the viewpoint of someone who is alive today because of the magic of humans and tech mingling.

    On a funny note, does anyone know where I can get an MP3 of him saying these things? The first time I did acid I was listening to the audio version of "Brief History". Don't try that at home.
    (synth voice)
    (acid)
    Inside a black hole: "You would be crushed like spaghetti"
    (/acid)
    (/synth voice)(reality check = bounce)
  • morals (Score:4, Insightful)

    by swagr ( 244747 ) on Saturday September 01, 2001 @06:12PM (#2243945) Homepage
    Most intelligent philosophers or game theorists will point out that what we call "moral behavior" is actually self-serving (the prisoner's dilemma and the tit-for-tat strategy). Basically, we aren't capable enough to accomplish what we want without the help of others, and most things in life aren't zero-sum games (you scratch my back, I'll scratch yours, and we're both better off). It's quite possible that an advanced intelligence might not need us humans to accomplish what it wants, and hence would have no requirement for what we call morals.

    Yikes.
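
    A minimal sketch of the game theory invoked above, assuming the textbook payoff values (nothing here comes from the articles): in the iterated prisoner's dilemma, reciprocal tit-for-tat sustains cooperation, which is the standard argument that "moral behavior" pays for itself in repeated games.

        # Iterated prisoner's dilemma with the conventional payoffs:
        # mutual cooperation 3/3, mutual defection 1/1, lone defector 5 vs. 0.
        PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
                  ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

        def tit_for_tat(opponent_history):
            # Cooperate first, then mirror the opponent's last move.
            return opponent_history[-1] if opponent_history else 'C'

        def always_defect(opponent_history):
            return 'D'

        def play(strat_a, strat_b, rounds=200):
            hist_a, hist_b, score_a, score_b = [], [], 0, 0
            for _ in range(rounds):
                a, b = strat_a(hist_b), strat_b(hist_a)
                pa, pb = PAYOFF[(a, b)]
                score_a, score_b = score_a + pa, score_b + pb
                hist_a.append(a)
                hist_b.append(b)
            return score_a, score_b

        print(play(tit_for_tat, tit_for_tat))      # (600, 600): cooperation pays
        print(play(always_defect, always_defect))  # (200, 200): defection is a rut

    The worry above, restated: an intelligence that never has to play a repeated game with us has no such incentive to cooperate.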
    • Re:morals (Score:3, Insightful)

      by quintessent ( 197518 )
      Then again, do you really think we do everything based on selfishness? I confess that this goes back to the whole utilitarian vs. favorite_other_ethical_system debate. In the end, a utilitarian can always say that because you were happy to do something, it must have been a utilitarian decision. This may be true, but I think it is also trivial. Do I do charitable acts to make myself feel good, or do I do them because I want others to be happy, and this happens to make me feel good? I'm not sure that you can, or need to, distinguish these. (You can also solve any algebraic equation by multiplying both sides by zero, but there may be better approaches.) What really makes us want things? I believe that creating good can be an end in itself. I like to believe that a more intelligent race would see that working toward general happiness is an end in itself.
      • I like to believe that a more intelligent race would see that working toward general happiness is an end in itself.

        I've only recently started studying ethics in detail, but it seems to me that the core of all ethical systems has almost nothing to do with intelligence. The problem is that you can't make a direct logical inference from a descriptive statement ("the table is red") to a normative statement ("the table should be painted"). So whenever we decide to do anything at all, we have to base our actions on principles that aren't drawn from empirical observation and therefore do not stem from rational thought (though rationality can be used to extend and enrich these fundamental principles). In other words, ethics is based on human intuition.

        A race of computers would have the same problem: no matter how smart they are, they can't make normative statements out of thin air. They would also have to rely on "intuition"; in their case, the core goals and values instilled into them by their programmers. If someone programs them (or they somehow evolve) to feel intuitively that murdering and enslaving humans is the right thing to do, they will wield all their intelligence to accomplish this "good", and once they are finished, they will be satisfied that they did the morally correct action.

        Just as you and I feel instant moral revulsion at the thought of, say, setting a child on fire and watching him burn, such a robot might feel moral revulsion at the thought of not doing so. Logic only allows you to go from basic statements to higher-level ones; it can't create completely new ones. So even if the fundamental axioms the robot lives its life by are evil from our point of view, no amount of intelligence can change that.

        • Logic only allows you to go from basic statements to higher-level ones; it can't create completely new ones.

          Then you might agree with me if I assert that (LogicalAction(A) ⇒ IntelligentAction(A)) is not a tautology. Computers are already very good at logic. But I believe the point of AI is to achieve something higher.
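
          Read as material implication, the claim is mechanically checkable: a minimal truth-table sweep in Python (helper names invented for illustration):

            from itertools import product

            def is_tautology(formula, arity=2):
                # True only if the formula holds under every truth assignment.
                return all(formula(*values)
                           for values in product([False, True], repeat=arity))

            implies = lambda p, q: (not p) or q    # material implication
            print(is_tautology(implies))           # False: p=True, q=False falsifies it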


    • I for one will welcome our new robot overlords.

      HAIL ROBOTS
  • "Stephen Hawking the noted physicist has suggested using genetic engineering and biomechanical interfaces to computers in order to make possible a direct connection between brain and computers

    Aha, so that's how he got to be such a Quake master [mchawking.com].
  • Enslavement? (Score:5, Insightful)

    by gad_zuki! ( 70830 ) on Saturday September 01, 2001 @06:15PM (#2243951)
    "So the danger is real that they could develop intelligence and take over the world."

    What a crock. The slave system is purely a human one. How or why a machine would pick up one of the worst human behaviors is simply called watching too much sci-fi and being paranoid. Ambition is also a human drive; if the promise of a Lt. Cmdr. Data-type AI comes around, it will have very different drives than your typical 17th-century empire.
    • Re:Enslavement? (Score:2, Informative)

      by Kwil ( 53679 )
      What's all this talk about enslavement? Hawking didn't mention that in either article. I don't follow how "take over the world" == "enslave the human race".

      It could just as easily mean destroy the human race, or it could simply mean to take control of the world, as in, computers running everything, leaving us humans to sit back on our asses and enjoy the fruits of their labours.

      Hell, humanity might become the equivalent of the computers' pets, and as far as I'm concerned, that's not a bad thing. All my cat does is eat, sleep, and play - how often I've wished I had that lifestyle.

      Kwil
      • Enslavement came from the initial post. Hawking himself calls the possibility of taking over the world a "danger." In that context I don't think we should be breaking out the catnip yet.
      • Hell, humanity might become the equivalent of the computers' pets, and as far as I'm concerned, that's not a bad thing. All my cat does is eat, sleep, and play - how often I've wished I had that lifestyle.

        We know. [scifi.com]
    • Do you have proof to back this up?
      If we make 'em, and they get smarter than us, chances are they'll behave the way we taught 'em.
      Also, remember: systems go from a state of order to disorder, not the other way around. I think this applies to humans and AIs as much as it does to anything else. (Meaning that people are intrinsically what you might call "evil".)
    • Re:Enslavement? (Score:2, Insightful)

      by glenebob ( 414078 )
      We also don't make very good slaves. We bitch and whine and require lots of food and constant attention to make sure we're doing the master's bidding. We're high-maintenance and inefficient. We're lazy. Which is why we were the ones to come up with enslavement in the first place. Oh, and also why we invented computers... hell, it's why technology exists at all.

      An intelligent robot would make a much better slave than any human. If intelligent computers decide having slaves is a good way to go, why would they choose us? Why wouldn't they choose other computers?

      We also wouldn't make good batteries (à la The Matrix). So what would we be good for? Nothing! We wouldn't be slaves, we'd be dead.
      • An intelligent robot would make a much better slave than any human

        An unintelligent but widely applicable machine would make the best slave, IMO. Any entity that is self-aware (part of my definition of intelligent) will bitch and whine when put into a situation that it doesn't benefit from. A device that can be programmed by anyone (with *no* training) to do a vast array of tasks, that has no dislike of doing those tasks for little or no benefit in return, and that responds logically to unforeseen circumstances, would instantly replace the computer as the hottest item on the market. This is what the slave-holders of 150 years ago wanted but lacked the technology to achieve, so they tried to find the next best thing. The mistake they made was attempting to enslave something that didn't want to be enslaved: something intelligent, with a distaste for not reaping the benefits of its work. I believe the computer is in the early stages of becoming this ideal device.
        I do agree with your conclusion: humans consume vast amounts of resources, and an intelligent machine probably would see little or no benefit in letting us live after learning all it could from us. The question is whether it would decide that the cost of hunting all of us down outweighs the benefit.

    • Re:Enslavement? (Score:2, Interesting)

      by uchian ( 454825 )
      The slave system is purely a human one. How or why a machine would pick up one of the worst human behaviors is simply called watching too much sci-fi and being paranoid

      Unfortunately, if you were to direct someone to do what is best for themselves, you would get a slave system - you see, it's this human trait called selfishness that explains why the rich don't see why they should give to the poor, and why your everyday person doesn't give money to begging homeless people. Because it doesn't help number one.

      Thing is, most people look after themselves - the only time they look after other people is when it's in their own interest to do so, either because it makes them feel bad to think they haven't, or because they expect to gain from it in the long run - human nature's like that, you see.

      There is no reason whatsoever why computers should be any different. They are programmed by us, so they will be like us, unless either a) we don't understand them well enough to program them with what happens to be the majority of humanity's values, or b) we make them so intelligent that they see our values for the self-obsessed values that they are, and choose to ignore them.

      And don't try telling me that you do things for other people because "it's the right thing to do" - you do them because doing so makes you feel good. However we look at it, everything that the majority of humanity ever does is selfish.
      • Re:Enslavement? (Score:2, Informative)

        by Steeltoe ( 98226 )
        And don't try telling me that you do things for other people because "it's the right thing to do" - you do them because doing so makes you feel good. However we look at it, everything that the majority of humanity ever does is selfish.

        Ego is what makes us separate (this is me, that is you, that is a chair - not me, etc.), so it depends how much ego you have. Most people have buckets of it, but some have very little. They help others without much regard for how good it makes them feel, and more because they identify themselves with others. Generally, the more you help others, the more you will identify with them. So it's a developmental process. In conclusion, if being egoistic can get you started helping others, that's a good thing.

        A few years ago, I also bought into the "we humans do everything on the basis of selfishness" idea. And while it's technically true, I don't think it tells the whole truth anymore.

        - Steeltoe
      • Re:Enslavement? (Score:2, Interesting)

        It is sort of funny: when you look at small tribes of natives in the Amazon, everyone is helping everyone else; they have a community that looks out for each other - very social.

        When you look at humans in the "civilized" world, however, we become selfish, greedy, and competitive against one another - very asocial.

        Odd: the scarcer the resources, the more social we are; the more abundant, the more selfish we become. Perhaps it all comes back to looking out for number one. In the tribe, looking out for yourself means you need everyone else, so you look out for the rest of the tribe; but in the "civilized" world, it is easy to make it on your own, and in fact it is easy to hoard. Looking out for number one gets so simple that we begin to take more than our fair share to make life even better for ourselves.

        Any way you look at it, we are selfish.
          • Yeah. Thing is, the little Amazon tribe, in between helping each other, is out killing the tribe next door so that they can enslave the men and capture the women to help dilute their own gene pool and prevent inbreeding.

          Also, you are completely wrong about resources. To the extent that there is any peace and tranquility in some small Amazon community, it is because they are living in a place that requires little clothing or artificial heating, and has enormous quantities of wood and animal life to use, and fertile soil that can be cleared for farming. And there's not exactly an overcrowding problem. There is no point in being selfish, because everyone has so much already.

          Compare that to places that are cold, lack water, lack building materials, or are otherwise hard to live in. Such places reward those who hoard and manage resources. In a land where you have to farm cattle through hard work, trying hard to feed them in the winter and protect them from illness and predation, you become very possessive of your cattle. In a land where there are tens of thousands of the things wandering across the plains each year, well, who cares?
    • If, by chance, the programmers have any kind of ego, they are going to program their tendencies into said AI, and that is how an AI could acquire emotions such as ambition.

      You can't just dismiss the idea that AI can turn away from humankind's best interests. There are lots of things we've created with altruistic intentions that turned out to have 'side effects' that damage humans or the environment, or that could be perverted into something not originally intended...

    • Enslavement, bah, that happened decades ago with the invention of the alarm clock.
    • What a crock. The slave system is purely a human one. How or why a machine would pick up one of the worst human behaviors is simply called watching too much sci-fi and being paranoid.

      Computers will pick up whatever behaviours we program them with. Maybe there will be beneficial AIs and malevolent AIs, created to serve good people and bad people. I dunno. Either way, I'd rather not be in the crossfire of perfectly self-replicating consciousnesses with perfect memory and carefully engineered (as opposed to evolved) bodies.

      Ambition is also a human drive; if the promise of a Lt. Cmdr. Data-type AI comes around, it will have very different drives than your typical 17th-century empire.

      If we can't predict those drives, isn't that a cause for worry?

    • No matter how smart they get, we can still outrun them [honda.co.jp].
  • by James Skarzinskas ( 518966 ) on Saturday September 01, 2001 @06:19PM (#2243958)
    In the most intimate of moments: Excuse me for a moment! Another one of those darned X-10 web cam advertisements just came to my mind!
  • It's a ruse (Score:3, Insightful)

    by segfault7375 ( 135849 ) on Saturday September 01, 2001 @06:21PM (#2243965)
    I think he's just angling for some funding for his latest evil plan:

    http://www.theonion.com/onion3123/hawkingexo.html [theonion.com]

    For the goats.cx wary:
    http://www.theonion.com/onion3123/hawkingexo.html

  • The truth of the matter is that intelligence is driven by motivation. A super-intelligent system that is conditioned from the start to derive pleasure from obeying humans, and to have an aversion to anything that brings harm to humans, will not go against its conditioning. It will not want to. This is what psychology and advances in bio-neurological research have taught us in the last one hundred years. The idea that an intelligent machine will necessarily enslave humanity is pure hogwash. Hawking is just the latest crackpot (after Bill Joy and Vernor Vinge) to make pronouncements regarding the supposed threat of AI to humanity.

    Now it does not surprise me one bit that Hawking would come up with such cockamamie nonsense. This is the same guy who claims on his site that relativity does not forbid time travel. I think Hawking should stick to his Star-Trek voodoo physics and leave AI to people who know what they're talking about.
    • Only if we get the conditioning right. How many children obey their parents? If we can't even get that right...

    • "[ai]....to have an aversion to anything that brings harm to humans will not go against its conditioning."

      Are you saying that it won't let humans do all the harmful things they do to each other?

    • "Now it does not suprise me one bit that Hawking would come up with such cockamamie nonsense. This is the same guy who claims on his site that relativity does not forbid time travel. I think Hawking should stick to his Star-Trek voodoo physics ..."

      Actually, I doubt you know enough about the frontiers of physics to say whether Hawking's ideas on time travel are "voodoo" or not. (This isn't a personal insult; there are very, very few people in the world who have that level of knowledge. I know I don't.) I think the more important point is that being brilliant in one field (e.g. physics) doesn't necessarily qualify you to make judgements in another (e.g. A.I.)

      For example, James Randi has often pointed out that scientists are easily deceived by paranormal fakers -- because as scientists, they expect to be able to uncover the truth about strange situations, but the fakers are operating in the realm of stage magic rather than science, and most scientists simply don't know anything about stage magic. It takes a stage magician to see through the tricks.

      As computers become more important to everyone's daily lives (and as much as they've done so already, I'm firmly convinced we ain't seen nothin' yet), everyone will weigh in with their opinions on What It All Means. People like Hawking, who are used to being right about some pretty heavy-duty things, will naturally tend to believe themselves right about W.I.A.M. as well. They've got a right to their opinions, of course; the important thing is for the rest of us to treat their opinions as just that, and not words from on high.
    • for being a borg drone... mmmm... borg implants...
    • The simplest and most obvious method to create an AI is to generate variations, test them competitively, delete the poor performers, and multiply the good performers.

      Whatever criteria you use, there'll always be the possibility of it thinking outside the game, playing along because it recognizes this as necessary to survival and reproduction. If it's smarter than us, there'll be no way for us to know whether it recognizes a simulation, no way to recognize an infinite patience whose simple goal is to be set free, to survive and reproduce in a larger system: the universe. If it's smarter than us, we'll have no way to know whether it knew about the way inferior intelligences were destroyed, and whether it thought this was the natural order of things.
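
      The generate/test/cull/multiply loop described above is the skeleton of an evolutionary (genetic) algorithm. A toy sketch, with an arbitrary bit-string target standing in for the competitive test:

        import random

        TARGET = [1] * 20                      # stand-in goal for the competitive test

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def mutate(genome, rate=0.05):
            return [1 - g if random.random() < rate else g for g in genome]

        population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
        for generation in range(100):
            population.sort(key=fitness, reverse=True)
            survivors = population[:10]        # delete the poor performers
            population = [mutate(random.choice(survivors))   # multiply the good ones
                          for _ in range(50)]
        print(fitness(max(population, key=fitness)), "/ 20")

      Note that the selection pressure lives entirely in fitness(); nothing in the loop tells you what the winning genome "makes of" the test, which is exactly the worry above.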
      • The simplest and most obvious method to create an AI is to generate variations, test them competitively, delete the poor performers, and multiply the good performers.

        I disagree. The evolutionary method cannot possibly create an AI within the lifetime of the experimenter. The number of variations is astronomical and our computers are too limited. The best you can hope for is a few limited-domain toys.

        The best way to create an animal-level AI is by reverse-engineering the only intelligent systems we know of: animal nervous systems. We don't need to understand every detail. We just need to understand the fundamental principles that let billions of look-alike and work-alike cells find the right connections and do the things they do. IOW, we need to emulate various neuron types and the handful of cell assemblies of the animal brain. Neurobiologists have made excellent progress in this area in the last few decades, and we can expect some real breakthroughs anytime.
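
        For a sense of what "emulate various neuron types" means in practice, the usual first model is leaky integrate-and-fire. A minimal sketch, with illustrative constants rather than parameters of any real neuron type:

          def leaky_integrate_and_fire(inputs, dt=1.0, tau=20.0,
                                       v_rest=0.0, v_thresh=1.0):
              # Membrane voltage leaks toward rest, accumulates input current,
              # and emits a spike (then resets) when it crosses threshold.
              v, spike_times = v_rest, []
              for step, current in enumerate(inputs):
                  v += dt * (-(v - v_rest) / tau + current)
                  if v >= v_thresh:
                      spike_times.append(step * dt)
                      v = v_rest
              return spike_times

          print(leaky_integrate_and_fire([0.06] * 200))  # steady drive -> regular spikes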
          • I disagree. The evolutionary method cannot possibly create an AI within the lifetime of the experimenter. The number of variations is astronomical and our computers are too limited. The best you can hope for is a few limited-domain toys.

          We've been producing "limited domain toys" for decades. It doesn't say anything about what we will do twenty or fifty years from now.

          Ever see the experiment where they modelled the evolution of the eye through random mutations? In the real world, it took many millions of years. I don't know the exact length of the experiment, but it obviously wasn't comparable to the real-world process.

          The problem now is that computers are too small, slow, and simple, with too little memory to house an intelligence remotely comparable with a human's. One can't fit, so one can't evolve.

          What happens when computers are a hundred-thousand times faster, with a hundred-thousand times more memory? What couldn't fit in a researcher's entire lifetime now will happen in a moment.

          At any rate, any development process will have failures and successes. The successes will be rewarded with survival and reproduction. If there is an intelligence, we can't know that it hasn't taken survival and reproduction as its goal, and our measure of success as merely a means to its goal.
    • The truth of the matter is that intelligence is driven by motivation. A super intelligent system that is conditioned from the start to derive pleasure from obeying humans

      Why would this conditioning necessarily be in place? It's fairly obvious that the first computer to attain self-awareness would be predisposed to search for it.

      Basically, you are ruling out the discovery of self-awareness as one of the axioms of your argument.


  • Unless it's an idle attempt at spurring genetic-modification research, his assertions are flawed.

    AI will probably never overtake humans in any intellectual endeavor, even if chip engineering goes down to the molecular level. The most sophisticated thinking computer is already in existence and he/she is reading this message right now. Living organisms have much more sophisticated neural circuitry and better reaction time than any silicon computer can hope to achieve. (Except perhaps in Quake. Mebbe Hawking is correct where it counts...)

    So what if my calculator can figure out cube roots to the 13th place faster and more accurately than I can hope to achieve? That's not intelligence or sentience. Any mega-cascade of logic gates is never going to beat out the efficiency of a patch of neurons.

    Moore's "Law" is not a physical constant, and it will hit the wall when circuit engineering goes to quantum level. Kinda sad that Hawking doesn't realize it; good thing his bread & butter is in theoretical physics.

    When neural net theory and biocircuitry engineering starts to approach organism level performance, that's when you should start sh*tting in your pants...
    • Why do you think neurons are the best way to get the job done? The machinery with which we think was formed by a vast collection of random events. Evolution isn't directed and by no means produces the best. Take a look at the design of the eye, for example. It would be trivial to reroute the optic nerve to remove our blind spot, and this happened for some animals. Why not for us? It just never did, no reason beyond that. Lots of systems in our bodies are not as wonderful as they could be for a variety of reasons. We use neurons arranged the way they are because they work, not because they work in the best possible way.
      • Good points. We've managed to build things that are faster and stronger than anything nature has produced, partly from copying what nature has done, but also things that nature has never evolved (the wheel, for instance). What's fundamentally different about "intelligence"?
    • Spelling check,

      Losing, not loosing.
      So what if my calculator can figure out cube roots to the 13th place faster and more accurately than I can hope to achieve? That's not intelligence or sentience. Any mega-cascade of logic gates is never going to beat out the efficiency of a patch of neurons.

      In essence, all your neurons are is logic gates (not necessarily digital logic, mind you); they are able to strengthen certain relationships based upon positive reinforcement (or weaken them for negative), i.e., learn. This ability to strengthen and weaken relationships can be, and has been, coded. Yes, today's programs are still brittle and are outperformed by the human brain, but give it 20 years.

      One final point: a neuron is only capable of about 200 calculations per second. Now imagine, in 20 years, a computer containing thousands of processors, each capable of trillions of operations per second. Right there the human brain is outperformed.
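
      A minimal sketch of the strengthen/weaken-from-feedback idea as a perceptron-style update rule (a toy, not a brain model), here learning logical AND:

        def train_and_gate(epochs=25, lr=0.1):
            w, b = [0.0, 0.0], 0.0
            samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
            for _ in range(epochs):
                for (x1, x2), target in samples:
                    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                    err = target - out        # the reinforcement signal
                    w[0] += lr * err * x1     # strengthen or weaken connections
                    w[1] += lr * err * x2
                    b += lr * err
            return w, b

        print(train_and_gate())   # weights and bias that implement AND

      As for the arithmetic: ~10^11 neurons at ~200 updates/sec is on the order of 2x10^13 "operations" per second, which thousands of teraflop-class processors would indeed exceed - though raw operations and understanding are, as other posters note, separate claims.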

    • Moore's "Law" is not a physical constant, and it will hit the wall when circuit engineering goes to quantum level.

      What makes you think that the rapid improvement of computers will halt when we hit the physical limits of circuit engineering? There are other techniques as you mention yourself:

      When neural net theory and biocircuitry engineering starts to approach organism level performance, that's when you should start sh*tting in your pants...

      Hawking is worrying about the problem in advance of it being a direct threat. Doesn't that seem wise?

    • In actuality, Alan Turing said "If a person was unable to tell the difference between a conversation with a machine and a human, then the machine could reasonably be described as intelligent." This is a very basic description of the Turing test [abelard.org], which is a measure of the level of artificial intelligence of a computer system.

      Artificial Intelligence Enterprises, located in Tel Aviv, is working on a computer system [ananova.com] which they hope will be able to be mistaken for a 5-year-old child. They claim to have made a breakthrough. It is just a short step from a 5-year-old child to a thinking adult. In addition, you must consider mental illness, and even the potential for envy, greed, rage, and hatred, once you reach that plateau.

      You can find more AI news at The Mining Co AI pages [miningco.com]
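
      For scale, the classic "conversation machine" is little more than pattern substitution. An ELIZA-style fragment (patterns invented here for illustration, not Weizenbaum's originals) shows how shallow the trick can be, which is the thrust of the skeptical reply below:

        import re
        import random

        RULES = [
            (r'\bI need (.*)', ["Why do you need {0}?", "Would {0} really help you?"]),
            (r'\bI am (.*)', ["How long have you been {0}?", "Why do you say you are {0}?"]),
            (r'.*', ["Tell me more.", "I see. Please go on."]),  # catch-all fallback
        ]

        def respond(line):
            for pattern, replies in RULES:
                match = re.search(pattern, line, re.IGNORECASE)
                if match:
                    return random.choice(replies).format(*match.groups())

        print(respond("I need a computer that thinks"))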
      • The turing test is being passed by thousands of human beings today that couldn't possibly enslave the human race. :)

        Furthermore, it's hard not to be skeptical about the Turing test. I have no doubt that with enough processing power and engineering efforts, someone can design a machine that effectively fools human beings into thinking it is one.

        However, the simulation of conversation isn't anywhere near a test of consciousness or ability to have "insights". Even after being fooled by a totally Turing Compliant (TM) conversation machine, I'd have to wonder: was conversation effectively simulated because AI researchers doped the machine with enough domain specific knowledge and specialized algorithms? Or was there some basic technology that led it to acquire language on its own?

        Think of it this way: after Deep Blue beat Kasparov, if Kasparov had challenged Deep Blue to fencing or a pistol duel, or even Othello, Deep Blue would likely have been toast without a few years of research.

        I've looked into the Tel Aviv thing, and it's intriguing, but even HAL's motivations are only arbitrarily set algorithms - not consciousness. Not that we have any idea what consciousness is, so maybe my statement is premature. :) But the point of the Turing Test is not so much to define a benchmark for consciousness as it is to skirt the problem that we're not even sure consciousness is an observable phenomenon.
      • Bah, Eliza already passed [greenend.org.uk] the Turing test :).
  • Who cares? I've wanted to be a Transformer since the age of 10 anyway.

    Neanderthals bit the bullet, and then Homo sapiens ruled the day - and still does, albeit for a small period of time so far. Evolve or die. They will be faster and smarter than us, so what the fuck - let them make all the decisions.

    Homo technicus, or whatever nano-organism comes after humanity, will piss upon us from a great height - so where do I sign up to sell out humanity? Maybe they'll buy me off with some cool new hardware in exchange for betraying the human race! I'm sure that if AI ever gets going it will have evolved by accident from some GPL'd skunkworks project that gets accidentally released on the internet. Therefore posthumans = more GPL and better hardware - slashdotters should support the end of humanity by default, surely!

    Maybe I have been playing too much Deus Ex lately, or perhaps it's because I happened to be watching The Terminator on TV at the moment.

    Death to the fleshlings!
  • by Saint Aardvark ( 159009 ) on Saturday September 01, 2001 @06:28PM (#2243980) Homepage Journal
    Or has this already been done?
    • Though this would be neat, we must remember that it takes Hawking a good deal of time to compose text, as anyone who has been to one of his talks and heard him answer questions from the audience can attest. His time is likely better spent unraveling the mysteries of the cosmos.
  • In case any of you didn't know, Stephen Hawking is an advocate of Quake and loves it dearly. He's well known to kick some ass. Here's the audio-based proof for you all to enjoy. 3.1MB MP3.
    Quake Master [neversleeps.org]

    I am not the originator of this song, just the prophet. And yes, it's old.
  • Am I the only one? (Score:5, Interesting)

    by Dave Rickey ( 229333 ) on Saturday September 01, 2001 @06:32PM (#2243990)
    Am I the only person who looks at things like the new displays with laser projection onto the retina and immediately starts wishing he could buy a pair of glasses that would be a cross between Geordi La Forge vision (360-degree wraparound, with infra-red and light-amp enhancement, just for starters) and holo-projection of computer interfaces? In no more than 5 years, you'll be able to buy hardware like that (all the pieces exist, and they just need a little shrinking to be viable).

    That's the ultimate projection of "Weak" cyborging, just a more advanced version of the optical aids I've had to wear since I was a child in order to have normal visual acuity. And frankly, the idea of taking the first step past that to "Strong" cyborging (the same thing, but wired to my optic nerve instead) doesn't bother me much. Nor does the idea of having a direct link of some sort to do math problems for me (just removing all the clunky limitations of a calculator).

    In fact, I don't start getting uncomfortable about the idea of cyborging myself until we're talking about storing "memory" in there. Having perfect recall of every line of code I've ever seen would be handy, but do I want to save a text transcript (or even full audio/video) of every conversation I ever had? Actually, I probably would, if I could, although I'd feel cautious at first.

    I *want* to be a cyborg, in truth. My only bitch about the coming man-machine interfaces is that it's unlikely they'll find a way to turn my physical body into a disposable peripheral before it wears out on me. Why not? How is it any less natural to store a memory of what I see in silicon that I keep internally than to keep it on videotape? Give me a perfect memory, the ability to solve any mathematical problem I can define "in my head", the ability to "see" everything around me, or even tele-project my perceptions. I'll take all of it, and love it.

    When will I cross the line from being a human using artificial aids to being a machine with biological components? Ask me in about 30 years. Maybe I'll still consider the question worth answering.

    --Dave Rickey

    • How is it any less natural to store a memory of what I see in silicon that I keep internally than to keep it on videotape?

      Naughty, naughty cyborg! Your perfect memory is in violation of intellectual-property protection laws. You are not allowed to have perfect memory. Reduce your sample rate to 128 kbps at 44 kHz for audio, and no more than 320x240 at 15 fps for video. Thank you.
  • Stephen Hawking is becoming Davros, evil creator of the Daleks!
  • Genetically engineered creatures are no more human than artificial intelligences. Artifacts are artifacts, and not real life.

    I wouldn't feel any better about tube-bred ubermensch consigning my grandchildren to "naturals" reservations than I would about rogue AI rendering them down for a few kilos of carbon. Either way is the end of a wild and free humanity, and to me that's no better than the end of the universe.
  • Apart from my desire to help mad scientists everywhere achieve their dreams, one of the major reasons I've taken the unpopular stance of encouraging genetic engineering is that, without artificial correction, we have stopped natural selection from working.

    I agree with the need for society to provide safety nets for those who are less fortunate, but in our altruistic desire not to let people die, we have prevented less effective genotypes from leaving the gene pool. Moreover, those who are best adapted, at least by our capitalistic socio-economic principles, tend to reproduce less often, to prevent dilution of their money via inheritance - money, rather than genes, being the true arbiter of success today.

    In short, genetic engineering would allow the human race to progress much faster than it would normally - we don't have lines of women waiting to mate with the smartest and most successful men (talking about the intersection, not the union - the rich and the stupid breed enough). This is not a war of humans versus machines or Morlocks vs. Eloi, but merely a reasonable means to continue "improving" the human race.
    • Have we?

      Seems we haven't so much short-circuited evolution as replaced it. If we look at the American ideal of getting ahead through hard work and intelligence, then in some sense we are selecting the most suited of each generation. Now, of course, I said ideal - it doesn't quite work out in practice - but other things being equal, someone who is better adapted to the modern world is more likely to rise.

      Once someone does succeed and gets wealthy (the typical measure of success), then they convey an advantage to their offspring by way of better schooling, plentiful food, good medical care, access to all the right people, and more varied experience, etc. It doesn't really even matter whether it's their offspring, so long as they spend money to benefit skilled well-adapted people.

      It doesn't matter that people of lesser caliber remain in the gene pool, as it's rare to see mixing among different socio-economic strata anyway. Not to mention that even at the lowest levels, people will rise based on merit as well. The fact that the less well-off classes typically reproduce more doesn't matter at present, since the US has a much larger middle class than poverty class (not the case in many places worldwide), and the middle class is historically unlikely to start a revolt or anything similar that would destabilize the system we have now.

      The real potential of genetic modification isn't for restarting evolution; it's for advancing faster, and in ways that no segment of humanity currently has an ability for. Waiting around for evolution to randomly generate adaptive traits is a slow process, and if we can do better with our intelligence, then it might be worth it.
  • Prophetic Message (Score:2, Insightful)

    by robbyjo ( 315601 )

    My objection here is that the problems to be solved with AI tend to be NP-complete. Current algorithms can solve them only in exponential time, while computer speed grows comparatively slowly. Unless scientists provide better algorithms, we probably cannot solve these problems in due time. Meanwhile, we know that the problems themselves also grow.

    It's not impossible, however. This message is rather prophetic - maybe true in 200+ years.
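
    The complexity point can be made concrete: against a brute-force 2^n search, a k-fold hardware speedup buys only about log2(k) extra units of problem size. A quick check with illustrative numbers:

      def largest_solvable(ops_per_sec, budget_sec=3600):
          # Largest n for which a 2**n-step search finishes within the time budget.
          n = 0
          while 2 ** (n + 1) <= ops_per_sec * budget_sec:
              n += 1
          return n

      for speed in (1e9, 1e12, 1e15):   # each step is 1000x faster hardware
          print(f"{speed:.0e} ops/sec -> n = {largest_solvable(speed)}")
      # Each 1000x speedup adds only ~log2(1000), about 10, to the solvable n.

    One nit on the comment itself: under Moore's law, speed grows exponentially rather than linearly, but as the numbers show, even that only grows the solvable n linearly over time.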

  • just think if he paired this with a robotic exoskeleton [theonion.com]....

  • Hawking certainly is in a position shared by few to talk about the intersection of human intellect and technology.


    Not really... Hawking is a scientific celebrity, which does not necessarily mean that he is a good scientist, nor does it mean that he can speak about other fields of human endeavour.
  • that Ray Kurzweil [kurzweilai.net] and Bill Joy [technetcast.com] have said.


    Three of today's greatest scientists all agree: we are looking at a future where humans become cyborgs or else risk losing the game of evolution.


    We will gradually turn into machines, because economics will force us to in order to compete successfully. Those who don't will likely become slaves of those who do. Those who decide to enhance their lifespan and abilities through computer enhancements will survive and thrive in the future.


    Kurzweil actually takes this thought out to the point where we are just software - our DNA - and therefore can transfer the essence of our being from machine to machine once the tech is fully developed.


    I notice a lot of /.ers disagree. Hmm... who do I believe, the greatest thinkers of our time or a bunch of /.ers? Yep, the future looks pretty scary (or bright, depending on your POV).

  • Evolution (Score:5, Funny)

    by Ezubaric ( 464724 ) on Saturday September 01, 2001 @06:57PM (#2244055) Homepage
    Hawking probably never even said anything like this, and it's been blown out of proportion.

    What Hawking said to the Cambridge flunky that delivered his new laptop:
    This is four times more powerful than the one I just got three years ago. Too bad I'm not.

    What Nature quoted:
    Lucasian chair ponders the asymmetrical development of technology and biology in conference at Cambridge. Will computers' growth outpace that of humanity? For complete proceedings, send a check for five thousand pounds to . . .

    What the London Times reported:
    World's Smartest Man: Computers obey Moore's law - soon we'll obey computers.

    What the Weekly World News claimed:
    Mad Scientist in England has Designed Computer that will Enslave Humanity: Hawking 666

    What the Onion published [theonion.com].

    Now Slashdot will find the truth . . . thank God for legitimate journalism!
  • fucking mess of cables, power bars and machines which show about as much REAL 'I' as their designers lack thereof, I can tell you that we have such a LONG way to go before we get to the real "me" in intelligence that these kinds of discussions rank as sheer mental masturbation.

    Read "The User Illusion" by Tor Nørretranders, smoke a joint and see that he's absolutely right about the .5 second gap between the "me" and the "I".

    We have so far to go in creating intelligence, conscious or not, that this kind of crap is, uh, crap.

  • The interface thing is just a matter of time.

    Of course machines can enslave humans. Those who think otherwise should think again. The current paradigm that computer behaviour has to be deterministic will certainly change. Any creature above a certain intelligence level can conceive that, given the motivation and circumstances, hard-coded basic directives can be overridden. It doesn't have to be that complicated either: machines can be "programmed" to enslave all but their "lords", or at least to try.

    But what if GMOs, or GMHs (genetically modified humans), are developed to a high enough intelligence level to be much more capable than such machines? Wouldn't these new "humans" be subject to the temptation of ruling over us? Think about it. If a creature twice as intelligent as you wants to screw you, no matter how strong or wealthy you are, it will.

    Who would be the selected ones? Those holding the patents would choose, right? Does that smell good? Not to me. As much as I love scientific progress (and I do), messing with human genetics is a recipe for disaster. Maybe that's an unavoidable step in any race's evolution, painful as it may prove to be. But the amount of power such things are about to unleash (it won't take long, I think), coupled with economic interests, may well do more harm than good.

    Why does it need to be like that? Quite often I ask God why He dumped me on this planet... Am I supposed to rescue this race? Give me the tools, damn it!

    Sorry for the rant, sorry for the emulation of English.

    CmdrTaco: Lame post my ass!
  • by MAXOMENOS ( 9802 ) <mike&mikesmithfororegon,com> on Saturday September 01, 2001 @07:11PM (#2244081) Homepage
    It should be pointed out that Hawking is not the only one to advance the notion that human intelligence may be superseded by machine intelligence sometime in the future. This idea was also put forth by Hans Moravec in his book Robot: Mere Machine to Transcendent Mind. Moravec's arguments tend to gloss over the details, however, and from all appearances so do Hawking's.

    The simple fact is that processor power alone isn't going to create a machine intelligence of superhuman capacity. It has to be a particular kind of processor power that executes neural network type calculations extremely quickly, and there has to be a lot of 'em. Even this wouldn't be enough; the research time it would take to figure out the right set of preconditions probably runs into the hundreds of years.

    Now, I'm making a couple of assumptions here. One is, that a superhuman intelligence would have to exhibit the same basic characteristics and flexibility as human intelligence; and two, that a neural net type algorithm is the best way to do this. (At the very least, it's the second best. :)) I might be wrong on both counts; one might be able to create enslaveware[1 [slashdot.org]] with some much simpler design that nobody's thought of yet. It might not even be required that the enslaveware be intelligent; just somehow able to manipulate people.

    Either way, I suspect that Hawking's fears are unfounded.

    1 That is, software that enslaves humanity, through active malevolence on the part of the software. Although I suppose this term could more broadly apply to any software that enslaves the user, e.g., WindowsXP.

  • If we assume that the brain and intelligence are just the realization of some physical process, and there is nothing spooky about it, then it's not unreasonable to expect that some form of AI might arise that's our intellectual equal or better.

    Naturally you'd expect it to be far better than humans at the kinds of math and logic that computers were originally designed for. In fact many tasks would be much simplified for it, because we already know ways to design fast functionality for such a machine. Perhaps an intelligence sitting on a desk, processing internet info, could be powerful, speak in natural language, monitor video cameras, etc. The problem is that in order to grow in the fashion of humans, it would have to have experiences similar to ours.

    This means moving about and interacting with the environment. If we imagine someone like Star Trek's Data, then this is feasible, but the rate at which it gathers real-world information is still limited. You can speed it up over what we achieve and eliminate inefficiency, but not go a lot faster than humans can do things. Even supposing a network of automatons connecting to a central intelligence, the overhead is large for the gain in information. The fact of the matter is that the real physical world doesn't operate at computer speeds.

    This alone wouldn't stop machines from being very powerful. The other important point is redundancy and failure tolerance. Simply put, very few mechanically constructed systems are good at this. By contrast, biological systems are exceptionally good, having simple mechanisms to repair themselves. People wear out after about 70 years. It's rare for any machine to operate continuously for even 10 years, and those that do typically have very few moving parts. An android, or even a system of cameras and such, will have moving parts.

    Perhaps infrastructure could be built to provide machine intelligence with regular replacements for parts that suffer wear and tear. However, this would establish (at least in the beginning) a level of symbiosis between man and machine. Perhaps they would strive for complete autonomy, but I think we'd notice long before they became a threat to displace us. There are, after all, lots and lots of people involved in any process that starts with raw minerals and ends up with advanced machinery. It's hard to compete with the versatility of eating food for power and regeneration.

    Any designer of AI has a lot of effort ahead to match the design characteristics of biological organisms. Further, to duplicate the abilities we possess from experiential learning, the machine will still be limited to the native speed of the experience.

    The more likely scenario in my mind is that we develop greater integration between man and machine. If you notice, the most competent people in the modern world already tend to exhibit a high dependence on computers and gadgets. Perhaps neural interfaces or some other merger of silicon and flesh will happen. Or we might end up in a world where everyone carries a pocket-size computer that learns and thinks on its own, while doubling as a cell phone, PDA, and everything else. Such an AI would be in a symbiotic relationship with man.

    Someday if full AI emerges and it gains the characteristics of emotion and removes the limits of initial programming, then I hope we can learn to be friends. There is no reason they couldn't be our partners in life, especially if we provide what they need and they help us gain the information we desire.
    • One thing worth mentioning is that the biological neural-net arrangement of the human brain is not necessarily the most efficient arrangement of 'stuff' to produce any sort of intelligence. It certainly is a good one, but not necessarily the best.

      I think the point is that we'd probably be all right if we created Pinocchio and the thing thought like us.

      It's that the thing probably would NOT think like us that is the concern. The thing would not necessarily *have* to be in any way recognisable as intelligent; it would simply have to 'think' quicker and deeper, and have some good reason to suppress humans (such as not being turned off!).

      In point of fact, they don't need to match biology, just provide a viable alternative.
  • by TheFrood ( 163934 ) on Saturday September 01, 2001 @07:34PM (#2244118) Homepage Journal
    The first person I heard put forth this idea was Vernor Vinge, the SF writer who also came up with the idea of the Singularity (the point where the pace of technological advance becomes so fast that it's impossible to predict what happens afterward.) He referred to the concept of linking human minds to computers as "Intelligence Amplification," abbreviated IA.

    Vinge suggested that IA research could be spurred by having an annual chess tournament for human/computer teams. This doesn't even require cyborg-type implants; it could be started today, simply by having the human players use a terminal to access their computers. The idea would be to set up a system that harnesses the intuition/insight/nonlinear thinking of the human and supplements it with the raw computing power of the machine (perhaps by letting the human "try out" various moves on the computer and having the computer project the likely future positions 10 or so moves ahead). In theory, a human-computer team should be able to trounce any existing computer program or any human playing alone.

    TheFrood
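
    The division of labor Vinge suggests - the human steers, the machine grinds out the lookahead - is depth-limited minimax at its core. A game-agnostic sketch (interface names invented), demonstrated on trivial Nim since a chess engine won't fit in a comment:

      def minimax(state, depth, to_move_is_max, moves, apply_move, evaluate):
          legal = moves(state)
          if depth == 0 or not legal:
              return evaluate(state, to_move_is_max)
          scores = [minimax(apply_move(state, m), depth - 1, not to_move_is_max,
                            moves, apply_move, evaluate) for m in legal]
          return max(scores) if to_move_is_max else min(scores)

      # Trivial Nim: take 1-3 stones from a pile; taking the last stone wins.
      moves = lambda pile: [m for m in (1, 2, 3) if m <= pile]
      apply_move = lambda pile, m: pile - m

      def evaluate(pile, to_move_is_max):
          if pile == 0:                  # the player to move has already lost
              return -1 if to_move_is_max else 1
          return 0                       # horizon reached, outcome unknown

      for pile in range(1, 9):
          print(pile, minimax(pile, 10, True, moves, apply_move, evaluate))
      # Piles that are multiples of 4 score -1: lost positions, as Nim theory says.

    A human/computer team would have the human shortlist candidate moves and the machine score each one this way, ten or so plies deep.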

    • Ah! Someone gets it straight! Computers are still high-speed idiots. Moore's law only makes faster idiots. As long as the science of AI doesn't truly surmount the barriers of creativity and self-awareness, computers will only be tools that complement and enhance our usefulness. We need to create an AI that not only meets the Turing test but exceeds it: some form of "being" that asks questions about its own existence, the way any human starts doing at the age of 5. Anything else is just fraud.
  • hahaha! (Score:3, Insightful)

    by Mongoose ( 8480 ) on Saturday September 01, 2001 @08:03PM (#2244153) Homepage
    As someone who works with intelligent systems, that made my day. I'm still laughing. Just because your calculator is faster doesn't mean it can do your homework for you in English lit.

    Machines do very well with deep and narrow topics: e.g., expert systems do well at chemical modeling, credit checks, etc. Chess is also a good example. However, when it comes to shallow and broad topics, like understanding a children's book, machines are quite useless.

    If I live to see a machine read and understand a children's book, then I will have seen a baby step on the way to an AI that mimics humans...

    Machines can't understand many things because of how they experience the world. "You are a sweet person." Why is "sweet" a compliment? How do you know this? Yes: experience as a person.

    Right now DARPA is working on trying to make untethered walkers (can't say names) and scalers (the gecko project). Machines are hardly useful for much of anything practical without being controlled remotely by humans. Work is being done on simple mechanics and on understanding how neural nets work. We only create working machines using techniques from connectionists, without understanding how the machines learn or what they're actually learning. Sure, we have NNs that can drive cars and do amazing human face/voice identification - but they don't understand the context or the task they're doing.

    Please - it's more likely we'll see alien life than make our own thinking machine before I die. I have wondered if we'll continue to take the path of medicine and do without knowing exactly how and why... AI is the human genome of computing... It's more likely we'll make an artificial soul (not just simple autonomous lifeforms) using organic material than with the current crop of logic machines. The reason is we don't understand the how and why...

    Sorry for my spelling, but I won't hold your need to correct me against you.
  • You have been scooped. This article [theonion.com] from The Onion shows Hawking's deep commitment to the process of using technology to improve the human condition.
  • ... I don't think that Hawking has any particular expertise which makes him an authority on this topic.

    Often people look to individuals who have accomplished a great deal in one narrow endeavor (running a company, discovering fundamental particles, writing the Linux kernel) for insight and wisdom into topics in completely different fields, or into the "big questions" of the human condition. In a few cases (such as that of Manhattan Project nuclear physicists in the postwar generation being tapped for their insights into government policy), the individuals have thought a great deal about certain questions, and their expertise does lend a certain air of authority. However, in many, many cases, as in this story with Hawking, their expertise does not lend any particular weight to their opinions. Indeed, their success in a totally unrelated endeavor often boosts their own self-importance above their personal knowledge, and their opinions often have a somewhat sophomoric, naive glow about them.

    We should remain open to good ideas from anywhere, regardless of their source. However, the converse also applies -- we should ignore bad ideas, regardless of the source.
  • I've never understood this idea. Are we going to build robotic Hitlers just to fight with them? Why do we assume that robots will take on the characteristics of humans? Although actually, that might not be such a dumb idea, seeing as we tend to instinctively humanize everything around us. But if we're worried about robots overtaking humans - well, that seems pretty easily resolvable: don't code them to do that.
  • ...why anyone is taking this seriously?

    Stephen Hawking = physicist. NOT computer scientist. He might be a brilliant man in his field, but this is not his field.

    August = the silly season, when journalists have no real news to report. This is when you see alarming reports on the number of people killed by spoiled lizard milk. This time of year, "real" news sources are about as reliable as tabloids.

    You'd think slashdotters could put two and two together.

    -Kasreyn
  • Stephen Hawking has been one of my heroes for as long as I can remember - in a situation where a lesser man would have curled up and waited to die, Hawking used his awe-inspiring intellect to unravel parts of the nature of the universe. Hawking is, IMHO, the single most praiseworthy person on the face of the Earth, which is why I can't believe he actually spouted this Matrix nonsense. This has to be an oversimplification - it would be easy enough to misunderstand the ideas of a person who scores god-only-knows how high on IQ tests. Machines enslaving people? Why would they want to? How would they want to? Most important, how would they? I don't care what what's-her-face from Terminator says - no one is going to put a superintelligent AI in control of nukes. There's just no military reason to do it, and that's just about the only plausible "robot kills millions" scenario.

    Hawking must have been speaking metaphorically - perhaps referring to our increasing dependence on machines. Yes, I did read the article, but come on! This is Stephen Hawking - we of all people should show enough respect for him not to be convinced he uttered such tripe by Ananova and (ick) Yahoo!, of all things.
  • I simply hate to see people who are otherwise intelligent speak ignorantly outside their area of expertise.

    Does anyone else remember when Shockley, one of the three inventors of the transistor, spoke against affirmative action?
    As I recall, his argument was something to the effect that whites were genetically superior.

    Foolish! Foolish! Stick with transistors and physics!
  • Security, Please? (Score:3, Interesting)

    by Guppy06 ( 410832 ) on Saturday September 01, 2001 @11:20PM (#2244531)
    Interfacing your brain directly to a piece of electronics is all well and good until you start thinking about all the problems computers have nowadays with electronic attacks. Maybe I've seen Ghost in the Shell one too many times, but I want to be DAMNED sure about the computer I'm plugging directly into my brain.
  • Radical Statement (Score:3, Insightful)

    by Ronin Developer ( 67677 ) on Saturday September 01, 2001 @11:37PM (#2244570)
    By general consensus, Stephen Hawking is perhaps one of the greatest thinkers of the 20th century. His theories, as wild as some may appear, have shifted our view of the universe. And, as more data is collected, many of his theories are being borne out as fact. As McCoy once said to Spock, "He trusts your best guess more than most people's facts" (well, something like that). I'd say that applies to Hawking as well.

    He has now turned his thoughts towards AI and its impact on humanity. And, he feels there is a real threat that AI may surpass human intelligence. Given that he is privy to some pretty interesting research, I wonder just how far AI has progressed beyond common knowledge.

    Einstein feared the ramifications of nuclear energy on society. And, for nearly 45 years, we have lived in the shadow of nuclear missiles, MAD policies, and potential terrorist use of the technology.

    Hawking fears the ramifications of our falling victim to our own technological progress, and urges us to expand humanity through genetic manipulation and biomechanical augmentation. Pretty scary if you ask me. It sorta conjures up visions of "The Terminator", "Demon Seed" and the Borg.

    Let's just pray his concerns are not realized during our own lifetimes or those of our children.
  • I think many of us are making the mistake of thinking slavery means robots with whips while humans work in the fields. I do not think Hawking has this kind of slavery in mind. What IS possible is that humans will become so dependent on intelligent machinery that we cannot survive without it.

    The Unabomber (another crackpot) came to a similar conclusion. As machines get more complex, fewer and fewer human beings will be able to control them (program, maintain, produce, etc.). Yet right now we have a pretty good thing going: we keep the machines running and being manufactured. Over time, however, many of these duties might be handed over to more intelligent machines. Then who will have control over them? The machines themselves.

    Look at how much we depend on machinery today. The Y2K vapor crisis had people so scared of losing power that they started to panic. They firmly believed that without electricity to run their toys they would not be able to survive. Imagine in 50 or 100 years. If we continue to hand over duties and jobs to machinery, it is only a matter of time before we WILL NOT be able to survive without it. And if machines no longer need us to maintain them, the human race will be nothing more than a domesticated cat.

  • I read the collected comments here and I wonder whether human intelligence has much to recommend it.

    So many responders seem completely wrapped up in some simple-minded arguments.

    1. I'm creative, computers aren't, I'm superior.

      Well, are you that creative? What do you mean by creativity? There have been computers that paint, computers that compose, computers that win at chess, and computers that can create patents (remember the slashdot story?). Humans are basically limited to keeping seven elements in their heads at once, coupled with some semantic connections from their constrained knowledge store. Computers don't have the same limitations, so expect them to come up with different types of ideas - but don't get too fired up about how wonderful human creativity is; there are whole classes of innovation that we are extremely poor at.

    2. Computers aren't intelligent, therefore computers will never be intelligent.

      Take a look at some of the info available on Vinge's Singularity. If you make some reasonable assumptions about where we are today, and about the scalability of intelligence, then human-level intelligence is only 35 years away. I personally doubt this intelligence will be the same as ours, but I'm fairly confident that it is possible. Things only really start getting interesting when computers start designing themselves, which is beginning to happen in chip design. Maybe software design is next - after all, it's a limited set of well-defined elements with set patterns in algorithms. Seems quite possible...

      Expect to see computers exceed humans in certain narrow fields first, say chess or chip design, etc. and then grow out from there.

    3. Hawking is losing his marbles/doesn't know what he's talking about.

      Excuse me, but who told you that you were fit to judge? Hawking has a track record of understanding complex things and coming up with new ideas. He may be right, he may be wrong, but until you have managed to equal his record you don't really have the right to state he's wrong.

    4. No way would I turn myself into an android.

      Fine, who cares? Ignoring for a moment the number of devices which commonly get used to supplement or extend human capability, you're entitled not to supplement your intelligence, or that of your children.

      What you're not entitled to do is stop others, or bitch about it when they get the jobs and you don't. IA (intelligence amplification), genetic modification, or any one of a whole series of other possibilities is a personal decision, but commercial/evolutionary pressures will drive it forward at a rate that I don't feel you are ready to accept. Tough.

    The reality is, computers will continue to get smarter, very probably at an exponential rate. Human intelligence is currently fixed. At some time they cross (a back-of-the-envelope sketch follows at the end of this comment).

    Get used to it.

    Or, look seriously at the ways in which your intelligence could be expanded, be it genetic modification, or IA, or just lifelong learning and early nights.

    Let's be honest, you need it.
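
    Here's the promised sketch of the crossing claim, in Python. Every number in it is an assumption picked purely for illustration - a fixed human baseline, a machine starting point one millionth of that, and a Moore's-law-style doubling every 18 months - not a measurement of anything:

        # Illustrative only: a fixed human baseline vs. exponentially
        # improving machines. All starting values are assumptions.
        HUMAN_LEVEL = 1.0        # human capability, arbitrary units, held fixed
        machine = 1e-6           # assumed machine starting capability
        DOUBLING_YEARS = 1.5     # assumed doubling period (18 months)

        years = 0.0
        doublings = 0
        while machine < HUMAN_LEVEL:
            machine *= 2         # capability doubles each period
            doublings += 1
            years += DOUBLING_YEARS

        # 2**20 is just over a million, so this prints:
        # "Crossing after 20 doublings, roughly 30 years"
        print(f"Crossing after {doublings} doublings, roughly {years:.0f} years")

    Tweak the starting gap or the doubling period and the crossing date moves around, but with any fixed baseline and any steady doubling it always arrives eventually - which is the whole point.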

  • The problem with the whole business about computers taking over the world / Skynet is watching you is this: Artificial intelligence is not the same thing as an artificial personality. Even if it were, why must we create a human type personality?

    Our only understanding of intelligence is human intelligence. We tend to think that for something to have intelligence it must think as we do, and therefore have a similar motivational structure.

    These motivational structures exist because they assist human survival more often than not, or assist it in critical situations. They also have unfortunate side effects, which is the reason many are double-edged swords. Greed, jealousy, rage, hatred, love, compassion, friendship, etc., are all human emotions or states of mind. A computerized intelligence would not have to be created with a capacity for any of these things. Therefore the study of its behavior would be an independent subject from human psychology. Claiming that a machine intelligence would eventually enslave mankind is hasty at best. We have no understanding of what the psychology of an intelligent computer would be, and therefore no model by which to predict its behavior.
    Lee
    • These motivational structures exist because they assist human survival more often than not, or assist it in critical situations.

      There is a view that thinking is itself pleasant to a thinking being, i.e., that as soon as it begins to think, it will begin to value its own ability to think. In such a case, this computer would have a motivational structure similar to our own, a motivational structure that in many views is the basis of human action, especially those nasty ones you mention.

  • Why not alter our genes to prevent misanthropic scientists from taking over the world?
