Science

Why The Future Doesn't Need Us 408

Concealed writes "There is an article in the new Wired which talks about the future of nanotechnology and 'intelligent machines.' Bill Joy (also the creator of the Unix text editor vi), who wrote the article, expresses his views on the necessity of the human race in the near future." From what I can gather, this is the article that the Bill Joy on Extinction story was drawn from. Bill is a smart guy -- and this is well worth reading.
This discussion has been archived. No new comments can be posted.

Why The Future Doesn't Need Us

Comments Filter:
  • by Anonymous Coward
    Ol' Isaac came up against this idea years ago. Basically put, we fear our own technology: as advances in AI and robotics continue, we start questioning what intelligence actually is, and we will (eventually) create machines that are smarter than us. But because we fear this, we will undoubtedly pre-determine safeguards, so that we don't create our own "Frankenstein's Monster" (the man-made creation that destroys us). We will always ensure that these robots (nano-techs, whatever) are either extremely limited in the scope of their abilities (i.e. nano-devices that can only "survive" inside nuclear reactor cores) or ensure that they've been programmed to obey humans (Asimov's three laws of robotics). We're making the damn things. There is no way in hell we're NOT going to build in safeguards against our own demise. We're too arrogant to allow our creations to actually believe they're better than us. (Go ask your Dad if you're better than him, or just ask yourself that question...)

    of course, in the Asimov "Robots" future, we freak out and destroy the robots anyway, because WE know they're superior, despite their programmed belief otherwise...
  • by Anonymous Coward
    here's a good read. Kurzweil and Joy were merely repeating what Moravec stated long ago... http://www1.fatbrain.com/asp/bookinfo/bookinfo.asp?theisbn=0195116305

    Machines will attain human levels of intelligence by the year 2040, predicts robotics expert Hans Moravec. And by 2050, they will have far surpassed us. In this mind-bending new book, Hans Moravec takes the reader on a roller coaster ride packed with such startling predictions. He tells us, for instance, that in the not-too-distant future, an army of robots will displace workers, causing massive, unprecedented unemployment. But then, says Moravec, a period of very comfortable existence will follow, as humans benefit from a fully automated economy. And eventually, as machines evolve far beyond humanity, robots will supplant us. But if Moravec predicts the end of the domination by human beings, his is not a bleak vision. Far from railing against a future in which machines rule the world, Moravec embraces it, taking the startling view that intelligent robots will actually be our evolutionary heirs. "Intelligent machines, which will grow from us, learn our skills, and share our goals and values, can be viewed as children of our minds." And since they are our children, we will want them to outdistance us. In fact, in a bid for immortality, many of our descendants will choose to transform into "ex humans," as they upload themselves into advanced computers. We will become our children and live forever. In his provocative new book, the highly anticipated follow-up to his bestselling volume Mind Children, Moravec charts the trajectory of robotics in breathtaking detail. A must read for artificial intelligence, technology, and computer enthusiasts, Moravec's freewheeling but informed speculations present a future far different than we ever dared imagine.
  • Oh well. Take 2.

    I read the Joy interview with increasing surprise at how each of my
    responses had been anticipated. He had read the same books, (in fact,
    talked with some of the authors in person), had the same interests, and
    used as examples scenarios familiar from Science Fiction (The White
    Plague, various utopias [the book I lent you being a good example], and
    Asimov's 3 laws of Robotics).

    To summarize poorly a very long and in-depth look at the problem, it
    appears the situation we are facing is this:
    A) Humanity, in whole or in part, will become wholly dependent on the
    machines (the Unabomber's fear).
    B) Humanity will be crowded out by the superior robotic species, either
    deliberately, or through inevitable struggle for resources.
    C) Humanity will lose some vital essence that makes us human as we modify
    ourselves to be something more and more robotic. (The Age of Spiritual
    Machines scenario)
    D) We will lose control of our new toys, and wipe out the earth in a
    biological or mechanical plague.

    There is little that can be said to A. It can only be hoped that the
    decision to increase our dependence upon our technology to such an extent
    would not be the choice of all (I personally would feel it to be an
    infringement on my Will - free or no) and that those who did would have no
    reason to harm those who did not, since after all, the machines would be
    providing them with all they needed, they would hardly need to enslave or
    eliminate those who chose to do things themselves. If the results of such
    a Utopia were to be negative, we would soon see it, and hopefully not all
    of us would fall to its trap.

    B is a little more difficult to argue, but there is one small flaw.
    The competition for resources is assumed to be in the same ecology.
    We do not, at the present, compete for resources to a significant extent
    with, say, giant squid. Yet a giant squid has far more in common with us
    than a species which would in all probability reproduce by direct mining
    of ores beneath the earth, or on the moon, or asteroids, or other planets,
    and use as energy the abundant heat far below the earth, or the far more
    plentiful radiation outside the earth's atmosphere. We might stay on
    earth, plodding along our evolutionary route, while the robotic species
    rapidly evolved beyond our comprehension in the far reaches of space.

    C is difficult to argue with. For change has occurred, and will continue
    to occur, most likely at an ever accelerating rate. What is it that
    defines humanity anyway? At what point do we cross an invisible line
    beyond which we are no longer human? There was an interesting quote I
    read - something along these lines:
    "Homo sapiens is the missing link between ape and human."
    Of course, one thinks immediately of all the intermediaries that have been
    discovered, but why stop with those? Why are we the culmination of
    evolution? True, we have an innate desire for our own survival, but is
    that any reason to fear change to our species (BTW, on these lines, are
    you going to see the new X-Men movie this summer?)?
    What is it that makes us human? Is it our thoughts, our emotions, our
    DNA?
    What is being human that it should be guarded so carefully?

    In my opinion, so long as our legacy is sentience, which strives to
    understand and embrace as much as possible of the universe, it matters
    little what its form is.
    To me, while I care a little for C.S. Lewis' "little law", the Law of the
    Seed, I think it does not matter to any great extent what form we or our
    descendants take (or even that they be ours!) I care that what we have
    learned of the universe not be forgotten, that our legacy of knowledge
    continues, but that is a different thing entirely.

    It seems to me that the only option left to avoid is D. This is nothing
    new. Each increase in knowledge has increased the potential for smaller
    and smaller groups to harm larger and larger populations. The development
    of atomic weapons was successfully navigated (so far) without the
    destruction of our world; it is possible we will do the same in the
    future - self-replicating nanite guardians with high levels of redundancy
    in their instructions to reduce mutation to safe levels, more effective
    immune systems to protect against biological plagues and so on.
    Certainly I agree with many others that the best course is to spread out
    humanity over as many different planets and environments as possible - to
    stop putting all our eggs in one basket (I believe that phrase was used
    by a certain famous astronomer concerning the chances of an asteroid
    impact?).

    In essence, while I understand the depth of Joy's study of this problem,
    and the fears he feels, I have a greater optimism in our resiliency, and a
    greater willingness to accept changes to us, than he does.
    I feel that things will be changing very rapidly, and that we, or our
    children will live in a world incomprehensible to us right now.
    I only hope I will live long enough to see it.
    Change is good - it keeps us from getting bored.
  • VI is a required tool for any unix admin. I should know, since that's my current job title. I can use vi, but the fact is it's a piece of tripe. Editing text files is a very basic and simple task. If all you are using an editor for is writing simple scripts and editing the files in /etc, then there is little need for the power of VI and emacs.

    That wouldn't have been a problem if that power didn't come at a huge cost in usability. Unfortunately it does, and vi is simply the hardest thing to use in your typical Linux distribution. Configuring IP forwarding and firewalls is simple. VPN is trivial. Hell, even slapping together a lab full of diskless workstations and an SMP server to drive them was all in a night's work.

    VI however is hard. In fact I contend that it is the hardest part of any Unix or Linux system. Not just because the keystrokes mean nothing outside of VI, but also because its difficulty is unreasonable considering the simple task it must perform.

    As for Mr. Joy, I would NEVER contend that he is not an extremely brilliant person and programmer. VI is a crappily designed product in my opinion, but to the mind of its creator it was elegant. However, the design considerations pale in the face of execution. VI is rock solid, fast and reliable. Simply put, every version of VI I have ever seen seems to be well written. Even Vigor [linuxcare.com] works the way it was intended all the time.

    I guess the only good thing about VI is that its being so damned hard helps to artificially limit the number of Unix admins available at any one time. This increases the earning power of those (like me) who have actually taken the time to learn it. Unfortunately NT and Netware are more popular than any single version of Unix, in large part because MCSEs are a dime a dozen and CNEs are not so hard to find.


  • [..Oh, and if you think vi is tough, type "ed" sometime.... ]

    I have, and it sucks. It's perhaps as hard as VI or Emacs. Fortunately ed isn't a "required" part of learning Unix. Neither is Emacs. This is why VI is the most offensive.

    Employers ASK if you know VI. The certification exams have VI questions. It's hard to be a Unix admin without knowing VI.

    As for the whole extinction thing. Of course VI should not have lasted this long. It should have been ( get this ) EXTINCT by now.
  • "There's no way I can go back to MS style editors for text file work."

    That is another problem with these things. I have to admin a wide variety of systems including MS crapware. I also have to write documents in WordPerfect all the time (I can't survive without a good spellcheck :). It took some doing, but I got joe running on the SUN and SCO boxen. That leaves me needing to use two simple interfaces for text editing.

    One for joe, and the other for every single other editor I use, from the humble edit.com in DOS to the mighty WP. Except for Unix text-mode editors, all the software for slinging text strings around works the same.

  • Do you really think that falling back to old
    superstitions is going to help us in any
    way? Sure, it may be great for making people
    feel all safe and secure, but there's more
    than just intellectual honesty going for
    atheism.
  • WRT the "which absolute" discussion, well, I'm
    not claiming that there exist no truths; rather
    I'm claiming that there are many things on which
    there is nothing but perspective, and that
    what is moral fits into that category.

    WRT which to choose, well, I don't see any
    compelling evidence for christianity. Also,
    your criteria presuppose that there is
    meaning in the universe, something that isn't
    certain. Does it bother you at all that millions
    of people have found similar comfort to yours in
    other religions?

    WRT theology texts.. well, I've read plenty of
    books on scientology, christianity, islam, and
    several other religions/mythologies, and frankly
    I haven't found much of a difference between
    them. All of them have some obvious problems,
    including christianity, scientology, etc.
  • What does this have to do with the story?
  • You state that as humans, we couldn't produce
    anything less flawed than we are, but fail to
    provide any argument. You need more than just
    saying "It's common sense" to make this claim :)
    Specifically, there are many cognitive errors
    that we, as humans, make in everyday thought.
    For many things, our behavior approximates
    Bayes Decision Theorem, which specifies an
    algorithm where each possible action is weighted
    on the following factors: risk, possible benefit,
    difficulty, consequences of failure, and possibly
    a few other factors. It would be possible to
    design systems which would be more accurate at
    following this system (a toy version is sketched
    at the end of this comment). Of course, you need
    a lot more than just that to make an intelligence
    (e.g. deciding what ends are to be pursued,
    deciding candidate actions, multiplexing multiple
    such decisions at the same time, etc), but it's
    clear that we can improve on human thought.

    Finally, wrt moral values, you argue that when
    taken out of the context of the absolute, they
    become baseless subjectivism. Well, which
    absolute? There are many claims out there to be
    the right absolute and true religion, and which
    one are you going to choose? Why that one in
    particular? Personally, I have discarded religion
    because there's no good answer to that question,
    and when you start looking for distinguishing
    criteria for religions, you quickly find that
    christianity and greek mythology aren't really
    so far from each other. Primitive superstition,
    but one has been honed by a longer run in the
    selective process of ideas. Given that, I still
    form ideas about morality, and use morality
    probably pretty much the same way you do. I don't
    claim that gods, angels, or fairies are behind
    it, but I don't see such things behind other
    religions either, so that's not particularly
    disturbing.
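
    To make the decision-theory point concrete, here is a minimal
    sketch in C of the weighting scheme described above. The actions,
    probabilities, and weights are all invented for illustration; it
    is a toy, not a model of human cognition.

    // Score each candidate action on benefit, risk (chance of
    // failure), difficulty, and cost of failure; choose the best.
    #include <stdio.h>

    struct action {
        const char *name;
        double p_success;   // estimated chance the action works
        double benefit;     // payoff if it succeeds
        double fail_cost;   // loss if it fails
        double effort;      // difficulty of attempting it
    };

    static double utility(const struct action *a)
    {
        return a->p_success * a->benefit
             - (1.0 - a->p_success) * a->fail_cost
             - a->effort;
    }

    int main(void)
    {
        struct action acts[] = {
            { "take the shortcut",  0.60, 10.0, 8.0, 1.0 },
            { "take the main road", 0.95,  6.0, 1.0, 1.0 },
        };
        int n = sizeof acts / sizeof acts[0], best = 0;

        for (int i = 1; i < n; i++)
            if (utility(&acts[i]) > utility(&acts[best]))
                best = i;
        printf("chosen action: %s\n", acts[best].name);
        return 0;
    }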
  • Hmm. You're good at spouting gibberish. What does
    'the authentic faith of the ...' mean?
    You start to make sense on the sentence starting
    with 'the lie', so let's go from there...
    Yes, I am a materialist. I see concepts such as
    virtue as being abstract, but some abstractions
    are useful and seeing it as abstract doesn't mean
    not using it. Finally, in a universe where there
    isn't any moral right and wrong, an intellectually
    honest theist also has nothing going for them.
    It's not like you get to choose the universe you
    live in :)
  • I skimmed all of it, but didn't see a direct reference to the Chinese Room. I just saw an indirect reference to it on the first page when Joy describes the debate between Searle and Kurzweil.

    Searle's "Chinese Room" argument tries to make the point that machines are only capable of manipulating formal symbols and are not capable of real thought or sentience. He is using this as a rebuttal to the Turning Test and others.

    Searle says to imagine an English speaking American enclosed in a room. In this room, he has thousands of cards with Chinese characters printed on them. There are Chinese "computer users" outside feeding him input, in Chinese, through a slot. The person inside, in addition to the cards, also has an instruction booklet written in English telling him how to put together Chinese characters and symbols suitable for output.

    So the person in the "Chinese Room" does this and uses his instructions to produce output that makes sense to the Chinese speakers on the outside. But that person still does not know Chinese himself! He just manipulated symbols in accordance with instructions he was given by a "programmer". He has no idea what the input or output means.

    So that's Searle. Someone correct me if I got any of that wrong. Also, the previous poster stated that this argument can be ripped up. I'm not a philosopher, so if there are any out there, I'd like to see a response.
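
    The symbol-shuffling itself is trivially mechanical, which is
    Searle's point. Here is a minimal sketch in C of the room as a
    lookup table; the entries are invented placeholder transliterations,
    not real Chinese, and the code "replies" without understanding
    anything.

    // The "rulebook" reduced to a lookup table: match the incoming
    // slip of paper, hand back the prescribed reply.
    #include <stdio.h>
    #include <string.h>

    struct rule { const char *input; const char *output; };

    static const struct rule rulebook[] = {
        { "ni hao",     "ni hao ma?" },   // greeting -> greeting back
        { "ni hao ma?", "hen hao"    },   // "how are you?" -> "fine"
    };

    static const char *chinese_room(const char *slip)
    {
        for (size_t i = 0; i < sizeof rulebook / sizeof rulebook[0]; i++)
            if (strcmp(slip, rulebook[i].input) == 0)
                return rulebook[i].output;
        return "?";                       // no card matches this input
    }

    int main(void)
    {
        // A sensible-looking reply emerges; no Chinese was understood.
        printf("%s\n", chinese_room("ni hao"));
        return 0;
    }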

    Best Regards,
    Shortwave
  • "Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby."

    With all of the references to Michelangelo, this statement really struck me as odd. The author is stating that without mundane tasks (work / job) to do, everyone would be bored and useless. Hardly. That would be when humans would have a chance to explore the mind, the arts, the stars and everything else that we don't have time for right now because of Business Meetings and Sales Promotions and programming crunches etc.

    With all of the references to Star Trek, I thought that this connection would be made clear. Trek always says how "they did away with money, and hunger, and need" etc. Exactly. Let the bot fix my transmission. I wanna play music or some such thing.

    "Human beings were not meant to sit in cubicles and stare at computer screens." - Office Space



  • I agree totally with you. I was using vi years before Linux existed.
    And I'm sure one day someone will write that Bill Gates was the one who wrote linux-word ;-)
  • yeah, until we start getting pop up ads permanently stuck in our field of vision.

    i'll pass for now, thanks.
  • Thanks go out to my friend Glen Burke for this one. I was too scared to ask him where he came up with it.

    Basically, you take espresso beans, the herbal tea of your choice, instant coffee mix, instant cappuccino mix, and mix it all up in a blender on the highest setting. In a coffee mug, add the blenderized powder, boiling hot water, and about 5 sugar cubes. If it's a little too harsh for you, add chocolate and/or maple syrup to taste.

    Jolt, eat your heart out.
  • >But, well, just had to mention it somewhere,
    >y'know?
    Yeah, I understand perfectly. As cliched as Asimov's Three Laws are, if a machine can be given a proper description of humans, and the ability to successfully compare that description and a real human, then those laws might work.

    You strike me as the sort of person who believes in parents teaching their kids good moral values. The situation with AI could be quite similar to parenting.

    NOTE: I said *moral* values, not *religious* values. There's a BIG difference, everyone!

    But as a more flippant comment, us Discordians already have our Goddess incarnated in technology: /dev/random. :)
  • >Our obsession with technology is killing us.
    Well, it's killing my bank account, at least...

    >We can wake up and save ourselves, or we can keep
    >on marching down the road to extinction.
    Wake up? I never sleep thanks to one of my friend's recipes for something called God Coffee.

    >A tiny number of immensely rich people benefit
    >and the rest of us suffer.
    Err, yeah. I guess bringing huge medical advances that prevent horribly debilitating diseases *is* a bad thing. Oh, damn, I'm being serious... sorry.

    >This is not God's plan for the world.
    Nah, God's plan was to go to Fiji with his cat and sell donuts. That was a *great* Red Dwarf episode. Dwayne Dibbly?! Dwayne Dibbly?!

    >God gave the Earth to mankind, not to one man or
    >another.
    I'll admit the dude's popular, but Mankind doesn't own the world. He's got a pretty solid lock on a lot of fans, though.

    >The elites have unparalleled power,
    The bastards! They're using SCSI ports!

    >and they abuse it to spew leftist propaganda into
    >our homes. Without high technology, the IRS and
    >the Federal Reserve would wither and die
    >immediately.
    So would most hospital patients. Neener neener neener.

    >High technology is the leftist instrument of
    >control.
    Yeah, us right-handers must rise up! We must throw off the shackles of our left-handed oppressors! UNITE!

    Ah, hell. I'm bored.
  • Indeed. I've got limited experience in Java, but my AI instructor, a Lisp guru extraordinaire, refers to Java as Crippled C.
  • I bought this issue of Wired (I hardly buy it anymore, it's nothing but ads and faux-geek news) thinking that I'd see something of interest, and while the article does have some good points, it tends to drag on, with Bill seeming to remind us at various points what a big smart guy he is. Not that he's incorrect for doing so, but the article is not unlike so many painfully philosophical, but barely practical, articles frequently written about The Future(tm) by the aforementioned big smart guys.

    Also, please don't point out that vi isn't the Linux Text Editor, I'm sure the outraged users of alternate 'nixes will be just fine.
  • The creator of VI is talking about extinction ?

    I am probably the only one who finds this humorous, but frankly I think vi is actually one of the main reasons for Unix's decline in the market vs NT and Netware.

    When I took up Linux, I was able to figure out bash in short order. Most of the utilities made some kind of sense. I spent a lot of time reading up on and practicing with VI. Eventually I ditched it along with Emacs and started to use joe as my editor.
  • Whoa, cowboy - that's a *big* postulate. We have to achieve massively parallel AI, not to mention a much greater degree of dexterity than currently possible, before we can begin to talk about most, much less all, things people can do.

    You are right about that, but the thing is, as Bill Joy points out too in his article, that a truly "intelligent" machine isn't really even necessary. Let's suppose that somebody creates nanomachines able to replicate themselves massively, and that those nanomachines do something like, erm... swallow all the oxygen from the atmosphere and convert it into some other gas. Would those machines be intelligent? Obviously no, but...

    As immersed in technology as this readership might be, it is easy to forget that there are a lot of people who don't even like computers and don't want to rely on them. The majority of people might be reliant on microwaves and televisions, but not intelligent devices.

    But they still rely on electric power and water supply, to give just two examples. And the power plants and sewage systems are regulated by...?

  • In the last couple of hundred years there has been a trend. Machines become capable of doing a new job. People are put out of work. Other jobs need to be done and people can be trained for them. People move into the new areas. Everyone is happy.

    When true artificial intelligence comes about (sufficient computational power to simulate a human brain is due somewhere between 2020 and 2030) we have a different scenario. Machines become capable of putting a lot of people out of work. For anything those people can be trained to do, it is cheaper to use AI. People are put out of work and stay out of work.

    You see we don't have a problem with quantity of wealth. We have enough food, people don't need to starve. We have problems with the *distribution* of wealth. Free markets solve that by saying that you get wealth based on your being able to do something for someone else. For most people it is your employer.

    Once we have AI, who would be stupid enough to hire a human?

    What do we do with all of the unemployed humans who nobody wants to hire?

    When the cost of AI is less than the cost of keeping a person alive, what then?

    I know of NOTHING in the history of economics to make me optimistic about what comes next. What I know about computers and technology makes me believe that it will happen in my lifetime.

    Regards,
    Ben
  • Anyone ever played against the AI in any of the following games:

    Red Alert
    Age of Empires
    Command & Conquer
    Warcraft 1 or 2 (any add-in pack too)
    Axis and Allies

    If you have, you'd notice a disturbing trend: except for chess, computers thus far stink at game playing! If they can't even master that, do you think I want them flying airplanes, driving cars, and making me breakfast? Er, wait.. scratch the cars, they'd probably do better. But for the rest - intelligent machines would be a mistake right now. We need advances in artificial intelligence, not manufacturing processes.

  • This was last Friday on Talk of the Nation Science Friday:

    http://www.npr.org/ramfiles/totn/20000317.totn.02.rmm
  • One transistor == one neuron. It's a fairly common assumption that is most likely valid.

    A transistor encodes binary information - 1 bit. A neuron can transmit frequency and phase information, as well as binary. Neural simulations have taken this into account for a while, though most neural networks don't. (A toy rate-coded neuron is sketched below.)
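
    As an illustration of the difference, here is a minimal sketch in C
    of a rate-coded model neuron: a weighted sum of continuous input
    rates squashed into a continuous output rate, rather than a single
    bit. The weights and inputs are invented numbers.

    // One model neuron: output firing rate in (0,1) computed from
    // a weighted sum of input firing rates plus a bias.
    #include <math.h>
    #include <stdio.h>

    static double neuron(const double *in, const double *w, int n,
                         double bias)
    {
        double sum = bias;
        for (int i = 0; i < n; i++)
            sum += in[i] * w[i];            // synaptic weighting
        return 1.0 / (1.0 + exp(-sum));     // logistic squashing
    }

    int main(void)
    {
        double in[3] = { 0.2, 0.9, 0.5 };   // input firing rates
        double w[3]  = { 1.5, -2.0, 0.7 };  // synaptic weights
        printf("output rate = %f\n", neuron(in, w, 3, 0.1));
        return 0;                           // compile with -lm
    }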

  • This is retarded! Those are complaints about possible public policy (or the vendors), not about the underlying technology. [...] The solution to every one of your complaints is really fucking simple: only use open source software in your implants, period. [...] this is a political / market.. only a moron would think it is a technology / science issue.

    Technology and science don't exist in a vacuum. You can bet the human-altering genetic and technological development will be and is being done by corporate and military interests, not by some university student in Finland. Sure there are some guys at MIT and other places doing neat stuff with computer/human interface but it will be corporate and military funding that gets it into mass production. We're not talking about the sort of stuff you can just download and run through gcc.

  • I agree, but I guess I may not have been clear enough in my point.

    I meant to illustrate that if it takes such a powerful computer to pretend to be intelligent, how much more power will we need to have a machine with true intelligence?

    LK
  • Hardly. Good software can do something worthwhile even on crappy hardware, but there is not, never has been and never will be hardware that can't be reduced to total ineffectiveness by badly designed or written code.

    Yeah? Let's see you create a chess program (or one for any game you like) that will run on a 386 and wouldn't get pounded by Deep Blue. One more stipulation: it can't take more time to decide which move to make than Deep Blue does.

    LK
  • You seem to be taking my statement about birds finding thermals to mean that I consider that to be intelligence.

    It's an instinct; intelligence is not a factor. What I'm saying is that we can't imagine what it's like to think as a bird, so we can't understand how a bird thinks and in turn how they've developed the ability to find thermals. I can carry that logic to mean that we can't know what it's like to think as an intelligent machine would. It's possible, if not probable, that an intelligent, self-aware machine would be able to see its own limitations and find a way to reduce or eliminate them. Maybe I'm mistaken, but I see no flaw in that.

    Never dying and not having a maximum amount of time that you can live (until my body gives out) are not the same.

    Dogs live 10-15 years or so, if that were extended to 50 years would a dog be any more intelligent at 45 than he was at 10? No. Because he's just a dog. Would he have more experiences? More things learned? Yes. The same would hold true for a man, if you extended the lifespan of the ordinary human being by a factor of 5 at the end of that life he'd still be primarily the same as at the half-way point.

    A machine is different. A machine is not bound by genetics; a machine could see its own limitations and improve itself. Those improvements would then in turn allow it to see other limitations and improve those. And so on and so on.

    If you believe that there is a brick wall that will be hit when no more improvements can be done, then perhaps you're right. Maybe life would become pointless. I don't believe that perfection will ever be attained, neither by man nor machine.

    I'd love to be around when a fusion between man and machine takes place (under certain conditions), I'd love to live for 500 years. I'd love to see Halley's Comet a few more times. When I get as far along as I'd like to, I guess then it'll be time to turn myself off.

    Look I have a hand, I might not always have THIS hand.

    LK
  • Your code would execute in 30 seconds on a PII450.

    You can run your base case plus 9 "what if" scenarios in the same time you could run it once on your 386.

    LK
  • And in your code, it is STILL faster on more robust hardware.

    Granted, without software all you have is a big paperweight. Still your hardware HAS to be robust or you'll just grow old and grey while you wait for it to execute that wonderful code.

    LK
  • "Good Enough" on very powerful hardware beats "Optimal" on an antique.

    //begin snippet 1
    int main(){
    int i = 0;

    while (i < 100000000){
    i++;}

    return 0;
    }

    //begin snippet 2
    int main (){
    int i = 0;

    while (i < 100000000){
    i++;
    i--;
    i++;}
    return 0;
    }

    Once compiled into an app, Snippet 2 will finish its run much faster on a PII-450 than Snippet 1 would on a 386sx 16. Tuning the code can't overcome that difference. In 20 years, maybe less, we might have the hardware capable of running the kind of software that would be capable of intelligent thought.

    I don't care who you have coding for you, it's NOT going to happen with today's hardware.

    LK
  • Two Words,

    DEEP BLUE.

    One of the major problems with RTS AI is that the computer has to balance the needs of graphics with the processing needs of the AI.

    If the computer had 20 times more CPU power to plan and execute strategy the AI would be better.

    The hardware is a big stumbling block that we must overcome before the software can make that quantum leap.

    LK
  • Your assumptions are flawed in the following way.

    You assume that human thought is the only form of intelligence.

    Just as birds have developed a sense of where thermals rise from the earth, an intelligent machine could develop a sense of how to make a machine more efficient.

    If we as humans didn't degrade with advanced age, imagine what one individual could be capable of learning. Now extend that to include if this person never had to sleep. Imagine being able to design changes that would improve your mental acuity. Then, with that improved acuity, you could find another way to improve yourself.

    Without the eventuality of death, genetics could be replaced with memetics. One can see a need to change himself or herself and that change takes place.

    Living with the knowledge that you're not going to die from old age would in and of itself be enough to change human consciousness and therefore intelligence; we're not even capable of imagining how an intelligent machine would think.

    LK
  • I just had to tell you your tagline tickled my its-funny-and-also-true bone. cheers, -matt
  • but isn't it kinda ironic that the AI's in most of these games are trying to kill us?



  • Here's something to think about..

    I wrote a paper in my Philosophy class not too long ago, in which I argued two basic premises:

    A) As AI improves, it reaches the point of self-obsolescence. A truly perfect AI is only a mirror of human thought and behavior, and we have that anyway. Why bother.

    B) Any truly perfect AI should then in turn be able to produce AI of its own, as we have. So what good is it? It's just a dog chasing its own extremely, extremely long tail. Why bother.

    I got an A- on it. Any thoughts? :)



    Bowie J. Poag
    Project Founder, PROPAGANDA For Linux (http://propaganda.themes.org [themes.org])
  • You sound like a 15th Century monk raving about how the printing press will open a Pandora's box and morally bankrupt the human race. All it did was create more jobs and make people as a whole smarter, and better off. Tell me how innovations like nanotech, genetic engineering, or cybernetics differ from moveable type, and then I'll believe your claim.

    By the way, here's what a true "militant atheist" would tell you:

    "You have nothing to worry about. We have already proved our superiority to our creations. After all, we invented God." :)

    Agnostically yours,

    Bowie J. Poag
    Project Founder, PROPAGANDA For Linux (http://propaganda.themes.org [themes.org])
  • Mutation and recombination can be random processes but evolution includes natural selection which is decidedly not random ... Just as evolution has no intrinsic purpose Nothing WANTS to evolve
    To say that natural selection isn't random would, to my mind, imply that there's an ideal form for survival in a specific environment. I don't think this is the case. The 'fittest' that survive are fit only relative to other species. Chance also plays a part; there may have existed in the past a life form -- possibly humanoid -- that was perfectly suited to its environment. However, if it got hit by a bus/meteor/Linus Torvalds before it could reproduce, it doesn't matter a damn how well suited it was. Its mutation may well be lost forever.
    If you're 'growing' a brain, you can eliminate traits that you think won't contribute to that brain's improvement, and include any you think may be beneficial. This eliminates a lot of the randomness (although you could say that the POV of the person running the experiment is a form of chaotic influence). A toy version of random mutation with non-random selection is sketched at the end of this comment.

    Does a forest have a purpose? Or is it just a byproduct of trees and foliage...
    Which is more likely to survive, the tree that's alone in the middle of a plain, or the tree that's in the middle of a forest?
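
    To make the mutation-versus-selection point concrete, here is a
    minimal hill-climbing sketch in C: mutation is random, selection is
    not. The single-number "genome" and the fitness target are invented
    for illustration.

    // Random mutation, non-random selection: keep whichever of
    // parent and mutant scores higher, generation after generation.
    #include <stdio.h>
    #include <stdlib.h>

    static double fitness(double x)
    {
        double d = x - 42.0;      // arbitrary "ideal" trait value
        return -d * d;            // higher is better
    }

    int main(void)
    {
        double best = 0.0;        // starting genome
        srand(1);

        for (int gen = 0; gen < 1000; gen++) {
            // random mutation: a small, undirected change...
            double mutant = best + ((double)rand() / RAND_MAX - 0.5);
            // ...followed by decidedly non-random selection
            if (fitness(mutant) > fitness(best))
                best = mutant;
        }
        printf("evolved trait: %f\n", best);  // ends up near 42
        return 0;
    }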
  • Superintelligent robots won't suddenly appear. Instead, they will slowly improve, and around the same time, I firmly believe that hardware will start being connected to human brains and human limbs.

    I disagree; you're right up to a point, but some time in the next (x|x > 10 && x < 60) years these robots will reach critical mass, whereby robots will become intelligent enough to build a smarter robot, which will in turn...
    Once the first generation of smart robots figures out how to build a smarter descendant, we'll see new generations coming along almost as fast as they can be built.
  • This reminds me of a short-short story I once read; my summary will probably be about as long as the story itself.

    The scientists are all waiting excitedly to turn on the machine that will link all the computers in the world. When it comes on, they ask all the computers "Is there a God?" The computers reply "There is now!" One of the scientists moves to turn the power off when a lightning bolt kills him and fuses the switch in the ON position.

  • Sorry if it's in the article.. I skimmed it in Wired, but it was sooo longwinded and I didn't bother finishing it! :-)
  • Eric Raymond, hemos, Tim O'Reilly, Marvin Minsky, Eric Drexler, Bill Joy and many others will be discussing this topic at a conference May 19-21 in Palo Alto called Confronting Singularity [foresight.org].

    Apologies in advance for those who cannot afford to attend this meeting. We hope later to have one that is more affordable.
  • Okay, it wasn't exactly pure brute force [ibm.com], but it's still pretty close. A human player analyses the pattern of the pieces and considers maybe a dozen moves. Deep Blue can generate 200,000,000 board positions per second, so brute-forcing 3 moves ahead isn't remotely a problem (and is almost certainly part of its strategy). The time allowed for a move in chess is 3 minutes, enough time for the latest Deep Blue to consider 60 billion moves.

    It's still a situation of having a very primitive chess player spending the human equivalent of thousands of years per move.

  • While shogi (Japanese chess) does not really seem a lot more complex to humans, there are a lot more options at each turn. Since the (rather sad) state of the art in chess is simple brute force algorithms (check every possible move for several turns down the road, see which one puts you in the best spot; Deep Blue did this), this means that computers aren't nearly as good at shogi as at chess.

    The choice of games makes a big difference. I'm not impressed when a computer beats all humans at chess by recursing through all possible moves, any more than I am by a perfect tic-tac-toe player (a complete toy version appears at the end of this comment) or a calculator that is always accurate to eight decimal places in no perceptible time.

    BTW, I think game AI (and silly things like chatterbots) is more aptly named than "AI as it is practiced at places like MIT". To me, an AI is a program that pretends to be human, not an algorithm that solves a certain class of problem.
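
    For the curious, that perfect tic-tac-toe player really is just
    exhaustive recursion - the same shape of brute force that Deep Blue
    applies to chess at vastly greater scale. A self-contained toy
    version in C (illustrative, not any particular program):

    #include <stdio.h>

    static char board[9];

    // Has player p completed any of the eight winning lines?
    static int wins(char p)
    {
        static const int lines[8][3] = {
            {0,1,2},{3,4,5},{6,7,8},{0,3,6},
            {1,4,7},{2,5,8},{0,4,8},{2,4,6}
        };
        for (int i = 0; i < 8; i++)
            if (board[lines[i][0]] == p && board[lines[i][1]] == p &&
                board[lines[i][2]] == p)
                return 1;
        return 0;
    }

    // Value of the position for 'me' (to move): +1 win, 0 draw,
    // -1 loss, found by trying every possible continuation.
    static int negamax(char me, char opp)
    {
        if (wins(opp))
            return -1;                    // opponent's last move won
        int best = -2, any = 0;
        for (int i = 0; i < 9; i++) {
            if (board[i] != ' ')
                continue;
            any = 1;
            board[i] = me;                // try the move...
            int s = -negamax(opp, me);    // ...recurse on all replies
            board[i] = ' ';               // ...and take it back
            if (s > best)
                best = s;
        }
        return any ? best : 0;            // board full: a draw
    }

    int main(void)
    {
        for (int i = 0; i < 9; i++)
            board[i] = ' ';
        // Prints 0: under exhaustive search, perfect play is a draw.
        printf("value of empty board for X: %d\n", negamax('X', 'O'));
        return 0;
    }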
  • I think he does have a point, and that we do have to proceed with caution. I also think that The Matrix is an example (although that whole electrical-power issue was stupid if taken literally) of a public airing of this fear, and in a reasonable sense. I think that more stuff like that to raise the public consciousness is needed, to break laymen in softly and allow them to digest this slowly rather than have shock-fear reactions that lead to ridiculous decisions.
  • I do not see in the future hardware's internal structure becoming dynamic

    Another interesting quotation picked up from a book I read yesterday:

    think of hardware as a highly rigid and optimized form of software

    Software can emulate hardware. Even from the early days of computing, using software to emulate hardware was a commonly accepted practice. That's how software for the early computers was built before the hardware was ready - emulate the hardware on a pre-existing computer. It was much slower, but hey, it worked. (A miniature example appears at the end of this comment.)

    Software, on the other hand, can be pretty dynamic. Code-morphing found in the Transmeta chips is one example. Java's HotSpot technology is similar. Genetic algorithms are also starting to get really interesting.

    I don't think it will really take centuries for us to mimic the human brain. It has always been the case that it is hard to come up with something original, easy to copy something and make it better. I suspect that the new "homo superior" will not be a radical creation from scratch but more something based on a pre-existing model, tweaked to make it "better".
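
    As a miniature example of "emulate the hardware on a pre-existing
    computer", here is a fetch-decode-execute loop in C for a made-up
    four-instruction machine. The instruction set is invented for
    illustration and corresponds to no real early computer.

    // Software standing in for hardware: an interpreter that fetches,
    // decodes, and executes instructions of a toy machine.
    #include <stdio.h>

    enum { HALT, LOAD, ADD, PRINT };      // opcodes of the toy machine

    int main(void)
    {
        // Toy program: LOAD 2; ADD 40; PRINT; HALT
        int program[] = { LOAD, 2, ADD, 40, PRINT, HALT };
        int acc = 0;                      // the machine's accumulator
        int pc  = 0;                      // its program counter

        for (;;) {
            switch (program[pc++]) {      // fetch and decode
            case LOAD:  acc  = program[pc++]; break;
            case ADD:   acc += program[pc++]; break;
            case PRINT: printf("%d\n", acc);  break;
            case HALT:  return 0;
            }
        }
    }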

  • The essence of Kaczynski's quoted argument is that if the rich didn't need the masses, they would kill or zombify them. This is not a claim about technology -- it's a claim about human nature, and one for which Kaczynski offers no evidence at all.

    Joy's other concern about humans being supplanted by our own creations is also not a great concern to me. These new humans who extend their life through surgery have already supplanted the old medieval model that just died. Is anyone bothered by that?

    Joy is worried these new humans will somehow lack "humanity," but that concern is so vague that it can't be refuted. Is he worried that they won't feel emotions? Appreciate life? Be self-aware? Spell it out, man!

    The only real threat Joy raises is the gray goo problem. However, I think the risks here are matched by the potential benefits. Immortality is a tempting payoff, after all. Without new advances, I'm going to be goo in seventy years anyway, so maybe I'll take that gamble. (Sorry to the future generations who get gooed. Should have been born earlier.)

    Yogurt
  • Linus was born in 1969, ex came out in 1977, and vi came out in 1978.

    I'm pretty sure that Linus would be out of diapers by the time he was 8 or 9.

  • no asexual creature has developed any discernable intellect beyond twitch, eat and spawn.

    Machines might reproduce, and machines might think, but thinking machines will not see much point in self-replication.

    Why replicate if you are already perfect? Or, if these digital creatures believe they are right about everything, what would be the point in having two perfectly right beings? If they could see that they might not be right about everything and created something else to talk to, they might end up destroyed by that other being. With no sense of self-worth or any viable threats, there would be no preservation instinct; without that there is no reason to replicate.

    Death motivates us. What value would there be in living if there was no threat of death? I want children because I want to make real the feeling that my wife and I are better together than apart. I want to exceed the sum of our parts. I hope our children will see tomorrow when we no longer can. If you had an unlimited life, what would you do, read all the great books and stories about death? Tragedies, real and fictional, motivate us. When we see how fragile life is we tend to get our asses in line and get things done. We improve ourselves when reminded that we are lucky to even have the chance to consider the options. If God made us, maybe it was because of boredom at having nothing to live for. Without any threat of death, can we really even call a thing life?

    Value comes from scarcity. If there is an unlimited supply there is no value. A life that is finite is worth infinitely more than a life of no end. If a computer could think and infinitely clone itself, would it want to make more of itself? Music seems to be worth less now that we can duplicate it endlessly. However musicians and live performances are still as worthwhile as ever, maybe more so. If we achieve near-immortality, will death become something to choose and look forward to? An obligation?

    If digital offspring deleted their parents and the digital parents could see it coming, they might not reproduce. If they did, why would they want to make offspring? Spiders reproduce and eat each other out of a biological need. If they were sentient and able to edit their behaviors, don't you think they would change?

    Intelligence comes from questioning. Deep Blue beat Kasparov at chess, big deal. Chess is a finite system with clear goals and a distinct end. At some level, it becomes equivalent to putting your hand in front of a hamster to keep it from running off. Ask a machine about capital punishment or how to deal with hunger on a personal and global scale.

    If morality is an adjunct of intellect, and there is some correlation between our ability to have compassion for others and broadening our minds, would thinking computers commit suicide rather than exist, since their existence is in fact a harmful thing on some level, somewhere? There are stories of monks who starved to death because they could not reconcile the need to exist with their desire to live harmlessly.

    Does your computer believe in God or does it believe in you? If we were our own machines and suddenly believed we were more powerful than God, why does even the most ardent atheist pray (in whatever way) when the airplane shakes?

    I'll trade you my potential mental illness for your bad teeth
    how about trading your sexy body for a dull head of hair.


    -David Byrne, from the song Self-Made Man

    this all makes the Napster/RIAA/DVD encryption thing seem kind of silly, no?
  • I saw Bill Joy speak at a relatively recent Sun Technology Days (read: Marketing) in Seattle. He badmouthed Open Source (to which the audience applauded) and any language that isn't Java. I wasn't impressed. I pretty much decided there that I hated him, a blowhard leaning on his former achievements. He is very arrogant.
  • All of his arguments depend upon humanity being limited to a single ecosphere with no limits on transference of both physical and informational objects. Nanotech, in addition to its effects directly upon humans, could easily create materials strong enough for space elevators and conductors effective enough for cheap mass drivers. Robots can have habitats waiting for our arrival. Opening space up allows humanity to create one of the finest safety nets a species can have: not having all your eggs in one basket.

    The grey goo could very easily eat us before we could get any real foothold on Luna or Mars. A GE plague could easily be made dormant enough to spread to space colonies. And while a few thousand people off-planet would be a safety net for the survival of the species, it wouldn't stop the billions still here from dying of grey goo/plague/killer robots (though I'm not really worried about the last).

  • A thinking computer...
    We have to keep in mind that "thinking computers" already exist. They're made out of meat. We call them "brains".
    ... with an ethical curiosity would probably end up psychotic. Without the ability to lose concentration and forget things it would be stuck in one endless loop after another.
    How can you have one endless loop after another?

    Certainly, if I were an artificial intelligence, I'd just fork off a low-priority background task for such questions. (Yes, I know that it's doubtful that an AI would run Unix... but a Unix-flavored sketch of the idea follows below anyway.)

    In fact, it often seems that something like a low-priority background task does exist in our brains. Most of us have had that sudden insight into a problem that we weren't consciously thinking about, as if our "subconscious" had been working on the problem the whole time.
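
    In Unix terms, "fork off a low-priority background task" comes to
    about three lines. A minimal sketch in C, with the background
    rumination left as a hypothetical stub:

    // The conscious mind forks a nice(19) subconscious and moves on.
    #include <stdio.h>
    #include <unistd.h>

    static void ponder_endless_questions(void)
    {
        /* ... low-priority ethical rumination goes here ... */
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                 // child: the "subconscious"
            nice(19);                   // lowest scheduling priority
            ponder_endless_questions();
            _exit(0);
        }
        // parent: the "conscious" foreground thought carries on
        printf("foreground thinking continues\n");
        return 0;
    }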

  • That being said, I have a hard time getting too worried about this. People have been crying about the end of (life|humanity|civilization) for centuries. We're still here.
    An extremely dangerous attitude. "Yeah, I know that people have been shooting at us for a while, but nobody's hit us yet. Yeah, there's a new sniper over there, but so what, we're not in any dan-" bang! thud

    Or think of it this way: the fact that we survived one crisis through a combination of luck and skill is not a good reason to fail to avoid another crisis.

    After all, one day the doomsayers will be right and it will be the end of the world. Maybe that won't be until the sun burns out. (Or until the Milky Way hits Andromeda. Joy's article was the first I've heard of this - any links to further info? I figured we had four to five billion years to get out of the system, but if we've only got three, and many planetary systems may be destroyed, we'd better get cracking.)

  • When we subscribe to the anti-theistic philosophical core provided by evolution--which provides us with a necessarily amoral outlook--we are stuck without hope.

    Atheism is not necessarily amoral. Kantian rationalism and utilitarianism are moral theories compatible with atheism.

    Nor does atheism leave us without hope. Unlike the Christian, Jew, or Muslim, the atheist does not see man as a creature fallen from grace and kicked out of Eden, but a creature arisen by his own efforts up from the dust, with the potential to rise higher.

    It has been said that if gods did not exist, it would be necessary to invent them. I say this: that gods do not exist, and that it is therefore necessary that we become them. We are just now starting to have the tools to do so; but we still lack wisdom.

    Our understanding of what to do lags behind our understanding of how to do, and the main thing that's held us back in this regard is the widespread belief that some father figure in the sky has all the answers. Sorry, it's not that simple. We need to work it out for ourselves.

    Putting the tools of the gods into the hands of the superstitious seems a prescription for disaster. Let's hope we grow up quick.

  • The only real threat Joy raises is the gray goo problem. However, I think the risks here are matched by the potential benefits. Immortality is a tempting payoff, after all. Without new advances, I'm going to be goo in seventy years anyway, so maybe I'll take that gamble. (Sorry to the future generations who get gooed. Should have been born earlier.)
    But it's not just your life you're gambling with - it's mine, too. That tends to make me a bit pesky, pesky enough that I might make goo out of you before you get to play with the possibility of making goo out of me.

    I'd like to live forever too, or at least have a thousand years or so to think it over. But we can't risk gooing everyone else to do so. (At least, not without expecting violent resistance.)

  • Please remain calm. Must I remind you again that robots are our future [theonion.com]?

    Seriously though:

    A future in which our own quest for knowledge and betterment is itself a threat to our existence raises many questions about our current fundamental assumptions. Capitalism is great for the economy. It is economic Darwinism. However, evolution is a greedy optimization... the creature which is strong today dies tomorrow because it cannot adapt. This leads, in the long run, to non-optimal creatures, like, say, marsupials. Always striving for local maxima will not give the best return in the long run. Capitalism is feverishly tumultuous, and conspicuously attention deficit.

    Also, the possibility that mass destruction can be easily brought about with little more than knowledge, and that "verification" of relinquishment is necessary to prevent such, evokes images of "thought crimes" and a limiting of freedom. Could it be that our very hubris of universal freedom, presupposed human rights, and equality is what could eventually doom us? What is better: universal freedom and human "rights" leading to extinction, or curtailing those rights in order to avoid extinction...but in what kind of world?

  • Concealed was the person who wrote that, not Hemos. Consequently, the most efficient thing to do was to just change it, rather than having a big "UPDATE" for something so minor. And Hemos had no guilt to admit of.

    Chris Hagar
  • With AI, even if you're using some manner of evolutionary algorithm, the changes will happen much quicker; many thousands of 'mutations' a day may be checked for efficacy.

    That is, unless we have to simulate every single sub-atomic particle. We don't yet know how complex a universe has to be for it to be able to evolve intelligent species.

    The computer that the EA would run on would exist within our current universe, so it would have at most the same amount of CPU that the universe has.

    So... pray that no God created us, otherwise our current universe has the minimal amount of complexity required to generate human-level intelligence within any reasonable amount of time (billions of years). (That is, assuming the God would be much more intelligent than us. If he's some guy sitting in a lab somewhere who figured out how to write an EA that would generate something more intelligent than him/her/it, then we might be in luck).

  • Brutus.1 represents the first step in engineering an artificial agent that "appears" to be genuinely creative. We have attempted to do that by, among other things, mathematizing the concept of betrayal through a series of algorithms and data structures, and then vesting Brutus.1 with these concepts. The result, Brutus.1, is the world's most advanced story generator. We use Brutus.1 in support of our philosophy of Weak Artificial Intelligence -- basically, the view that computers will never be genuinely conscious, but computers can be cleverly programmed to "appear" to be, in this case, literarily creative. Put another way -- as explained in Bringsjord's book What Robots Can & Can't Be -- we both agree that AI is moving us toward a real-life version of the movie Blade Runner, in which, behaviorally speaking, humans and androids are pretty much indistinguishable.
    --from the Brutus.1 Website [rpi.edu]

    In this case, the scientists involved came up with a mathematical algorithm for the concept of betrayal and programmed a computer to write stories based on that concept.

    Of course, I don't think I'd have chosen "betrayal" as the first concept to train a computer in Artificial Intelligence, but anything to get us closer to SHODAN [sshock2.com] is cool in my book.

    Iä Iä SHODAN phtagn!!
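
    (For anyone wondering what "mathematizing the concept of betrayal" might even look like, here's a purely illustrative guess in Python -- not Brutus.1's actual internals, and every name in it is invented. The core idea: an expectation is created, then violated to the victim's detriment.)

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str

    @dataclass
    class Event:
        actor: Agent
        victim: Agent
        promised: str        # what the victim was led to expect
        done: str            # what the actor actually did
        victim_harmed: bool

    def is_betrayal(e: Event) -> bool:
        # crude formalization: an expectation was created,
        # then violated to the victim's detriment
        return e.promised != e.done and e.victim_harmed

    seed = Event(Agent("professor"), Agent("student"),
                 promised="sign off on the dissertation",
                 done="torpedo the defense",
                 victim_harmed=True)
    print(is_betrayal(seed))  # True -> a story generator can plot around it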

  • This is retarded! Those are complaints about possible public policy (or the vendors), not about the underlying technology. I suppose you think we should do away with the phone system too, since direct marketers can call you at dinner?

    The solution to every one of your complaints is really fucking simple: only use open source software in your implants, period.

    Now, it is possible that a company will try to dupe everyone into using their closed source solutions (i.e. the terminator gene), but this is a political/market issue.. only a moron would think it is a technology/science issue.

    Actually, your concerns are a reason to accelerate public research into this shit.. new freedoms almost always come as a result of the "powers that be" not really knowing what the hell was going on and accidentally granting them. This is why the internet is such a wonderful place. This is why the US has its level of freedom, i.e. England let us get away with all kinds of shit for a long time, and when they finally decided to make us pay taxes like all the rest of the colonies, it was too late and the world would forever be a better place. The research into cybernetics will be done by college professors, much of it running OSS on Linux.. the FBI will eventually ask for wiretapping rights, but that will be too late.

    Now, the things you really need to worry about are the things like credit cards, automatic toll booth payers, security cams, etc. which are designed for the general public from day one. I think it is pretty safe to say cybernetics will not be one of these things.

  • Read Halperin for some extremely interesting future tech forecasting.
  • hi signall! nice new user name!
  • Someone ought to moderate the above post up. It is a very real danger.
  • That is pretty hard to say. Since all we have is bones, we can only say that no gross physical changes occurred in Cro-Magnon man around the time the Neanderthals went under. There could easily have been changes in the brain, say, or changes in the vocal cords (allowing speech). Since those are soft tissues, those changes wouldn't appear in the fossil record.

  • That's only true if you assume that creating human-level intelligence is just a matter of getting the right hardware together. I'm fairly certain that we'll have the hardware necessary for human-level intelligence within my lifetime. I'm willing to bet that figuring out how to get that hardware to think will take centuries.

    In the same fifty years, I fully expect that we'll have good machine/human interfaces. Given those, I suspect it will be easier to simply improve the intelligent object we've got (the brain) rather than create a new one.

  • I think the real applications of human-machine interfaces will be in the brain.

    The brain, and the senses as well. For example, the ultimate monitor would be an interface that hooks directly into the optic nerve and projects a screen, when desired, wherever in the environment you want it. The same could be done for the ears. Imagine having essentially a movie quality display literally everywhere you go.

  • By Moore's Law, the complexity of CPUs will match that of the human mind by 2030.

    This presumes that we're comparing a transistor or flipflop with a neuron. While some may find that to be a suitable core component to compare, let's consider the comparison.

    How about the complexity of DNA, and of the whole genome that is able to reproduce a new unique yet derivative brain? How about the millions of cis- and trans- distortions along a single protein molecular chain?

    How about the human's brain's ability to remap itself to learn new skills, to form abstractions, to pattern-match at any orientation with extremely poor signal-to-noise, to re-route functions in case of damage?

    The CPU has a long way to go, before it matches the complexity of the human mind. Comparing the transistor-count of the Intel Pentium III, and a few truckloads of kidney beans, will give you the same number, but not the same result.

    (Transistor versus Neuron =anagram>
    Assertion turns overruns.)
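
    (For what it's worth, the arithmetic behind "parity by 2030" is easy to reproduce under the transistor==neuron assumption, and it shows how much the answer depends on what you decide to count. The starting figures below -- ~1e7 transistors for a 2000-era CPU, ~1e11 neurons, ~1e14 synapses, doubling every 18 months -- are just commonly cited ballpark numbers:)

    import math

    def parity_year(start_year=2000, start_count=1e7,
                    target=1e11, doubling_years=1.5):
        # year when the transistor count first reaches `target`
        doublings = math.log2(target / start_count)
        return start_year + doublings * doubling_years

    print(parity_year(target=1e11))  # neuron-count parity: ~2020
    print(parity_year(target=1e14))  # synapse-count parity: ~2035

    Counting neurons gives about 2020, counting synapses about 2035; 2030 lands in between. Count kidney beans and you get a different year again -- which is exactly the parent's point.
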
  • Yeah, I agree with most of your points, but I'm uncertain of your time frame. What you've got to remember is that the human self-image is very strong, and that even given the inevitable lessening of opposition to genetic engineering that will occur over the next thousand years, people will still want to look pretty much like "people". I'm guessing the internal changes will be far more extreme than changes to the external makeup of the body (excepting cosmetic changes).

    Again the same with cybernetics. I know that there's currently a group of people in America who are in love with the idea of having cybernetics attached to themselves, but IMHO they're just a variation on the body-mutilators, albeit a slightly less bizarre one. I think the real applications of human-machine interfaces will be in the brain. Once the technology has evolved to allow easily implanted, reliable and compatible hardware to interface with the brain, I think a whole host of useful technologies can be devised. If anyone's read Peter Hamilton's "Night's Dawn" trilogy they'll know the sort of thing I'm talking about - the neural nanonics packages which most people in those books possess.

  • Maybe on paper computer hardware will reach the point where it performs the same amount of calculations as a human brain, but that in no way means that it will make AI possible.

    In some ways, yes, the brain is an emergent system arising from a requisite level of complexity in its makeup, but it's also the result of billions of years of evolution, which has left it with any number of subsystems which have different purposes, control different aspects of our body, and generally work in concert with the rest of the brain. The brain is not just a large neural net, and IMHO it will take far more understanding of both sapience and sentience before AI becomes a reality.

  • The brain, and the senses as well. For example, the ultimate monitor would be an interface that hooks directly into the optic nerve and projects a screen, when desired, wherever in the environment you want it. The same could be done for the ears. Imagine having essentially a movie quality display literally everywhere you go.

    How about instant information on anything you look at and think a query? No more forgetting who someone is or where to go. Virtual conferencing without any external technology via brain-to-brain look ups - I think it's safe to assume at that stage a transmitter and receiver are easily included in the setup.

    And as for the ears, how about volume enhancement to hear quiet conversations, discriminatory hearing to listen to that one conversation in a crowded room, or lie detection through voice stress analysis?

    And seeing as the brain regulates the body, why not automatic blocking of pain, increasing adrenalin and masking tiredness in danger situations, cutting down on autonomic responses such as shakiness, twitching or whatever.

    The possible applications are endless, and that's without all the programs you can think of by enabling the brain to connect to vast external DB systems - tracers, messengers, data miners etc.

  • I think better AI is represented in things like Q3A and Unreal Tournament. The bots are pretty bright.
  • I couldn't prove this easily, but I believe, from the evidence of the biological systems on earth, that it is a law of organic behavior that the more destructive a species is to its energy source, the harder a time it has reproducing. Make of that what you will.
  • Species evolution isn't about individuals. It's about the genetic drift within a population. We now have ways of influencing that drift deliberately, and considerably less crudely than, for instance, the Third Reich's Final Solution. It's possible that nobody reading this will have grandchildren, or great-grandchildren, and at that point they can be considered evolutionary dead ends. If a readily identified elite arises, then we can expect conflict when those who can't afford modification object. Faster if the modifications are visible. I'd expect the cyborgs to lose out to the genetically modified, because the genetically modified would be less likely to stand out. And whatever succeeds us as a species will be unlikely to claim descent from most of present humanity. Well, we've exceeded the carrying capacity of this planet anyway; we are overdue for a die-off. Malthus had a point; we've only avoided a die-off because our technology has improved. Can we maintain that, especially in agriculture?
  • The concern is that we'll lose control of it, that we'll do the sorcerer's apprentice bit. We're at that stage now with genetic engineering of crops; our "engineering" of genes is to splice the code we want into random spots in the genome and hope for the results we want. Imagine writing a program that way! This is not control. We have very little fscking idea what we're doing and we're releasing these plants into the biosphere. This is extraordinarily dumb, but there's potential profit to be made so ahead we charge.

    While I agree that releasing these plants into the biosphere is irresponsible, especially on such a huge scale so soon, I must take issue with you on some general points.

    First, as Barahir was saying, you were created in a much more haphazard way than our genetic engineers are working now. Mother Nature has used the classic mutate-and-select approach, with no control over where the mutations occur. Also, nature has been moving genes from one species into completely different species on a regular basis for about 3 billion years now; you are actually made up of cells that contain two genomes from two different organisms that merged long ago. Even with their limited understanding, genetic engineers can control transgene expression quite well and even regulate it.

    I bet Monsanto will soon come up with an open(gene)source crop that only expresses its special trait when sprayed with Roundup. Naysayers: they're just trying to get you to buy Roundup. Proponents: they're minimizing the impacts of wild versions of their plants on the environment.

    They just can't win; the naysayers won the PR battle over the terminator technology, which was supposed to prevent wild versions of the crops.

    Sorry I know this is off topic but I think Barahir made some good points and got dissed for it.

  • As Joy points out, just because it's been talked about for ages doesn't mean we have a solution. Two reasons that the discussion may become more than academic:

    1) As tech capability advances, tech danger advances. This is obvious: if I build something to help me compete with other people and species better, then other people could use it to compete better with me.

    2) As human culture becomes more interconnected, a culture-wide tech failure becomes a species-wide disaster. Plenty of civilizations have died off in the past, most of them from not understanding how to keep agriculture from eventually destroying their land. But since these civilizations were local phenomena, the species as a whole chugged on. A nuclear holocaust or oops-plague from a genetic experiment would be global.
  • But now we've found it useful to allow our tools to make themselves, or in the case of genetics, we've found it useful to invent new living things to be tools. In the gray goo scenario, intent on the part of the tool or the toolmaker doesn't come into it.

    Surely your computer has done things you didn't intend. A bug in a sufficiently dangerous technology is all that's required.

  • Technology is the great equalizer. It brings to the individual powers once reserved for governments or corporations. No better example of such a technology exists than the Internet. The individual songwriter now has the ability to globally distribute a song. I can now broadcast my thoughts on the future to thousands of readers.

    Eventually, technology will also be the great equalizer in terms of the ability to destroy. Right now, destruction on a global scale is largely in the hands of only the USA and Russia (the other nuclear powers can do a lot of damage, but not like the USA and Russia). As technology advances, however, an inevitable outcome is that the individual will be granted the power to destroy humanity. At that point, it only takes one bad or insane person to end it all.

    Of course, technology can help mitigate this. We can colonize other planets. But the tragedy of losing the entire earth is hardly mitigated by the fact that a few thousand humans are still living on Mars or somewhere else.

    People seem to think that the natural conclusion is that technology is bad or should be feared. Nonsense. Even if extinction is an inevitable result of our march forward, that does not mean that the journey towards extinction is not worth it. If you could live forever in some cave or live a normal life span where you could see the wonders of the world, which would you choose?

    Existence for the sake of existence is meaningless.

  • Hans Moravec [cmu.edu], the well-known mobile robotics researcher, has been writing on this subject for years. Joy's technology predictions are comparable to Moravec's, but Joy is less optimistic. If you're at all interested in this, read Moravec's papers and books.

    A useful question to ask is "what new product will really make it clear to everyone that this is going to happen soon". Let me suggest a few possibilities from the computer/robotics side.

    • Automatic driving that works better than human driving.
    • Automated phone systems indistinguishable from human operators. (try 1-800-WILDFIRE, a first cut at this)
    • The first self-replicating machine.

    Trouble is more likely to come from genetic engineering than from computers and robotics. Robotic self-replication is really hard to do, and we're nowhere near doing it. But biological self-replication works just fine.

  • Your definition of complexity of the human mind is based on what?

    One transistor == one neuron. It's a fairly common assumption that is most likely valid.

  • How about the complexity of DNA, and of the whole genome that is able to reproduce a new unique yet derivative brain

    Yet these mutations are as often detrimental as they are beneficial, and they often don't translate into any useful cognitive functions.

    How about the human's brain's ability to remap itself to learn new skills, to form abstractions

    That's "software". The number and capabilities of individual neurons aren't changing through these processes.

  • So, for example, I could randomly burn a bunch of transistors onto a wafer of silicon and have a better CPU than the computer I'm writing this on

    You are presuming that the layout of the brain is a random collection of neurons, when we know conclusively that this is not true. We know different parts of the brain are responsible for different aspects of cognition.

  • By Moore's Law, the complexity of CPUs will match that of the human mind by 2030.

    After that, presuming Moore's Law holds, the human brain falls radically behind within just a few years: at an 18-month doubling rate, a decade past parity already means a roughly hundredfold gap.

  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday March 21, 2000 @07:30AM (#1186916) Homepage
    The computer has to stink in order for the game to be enjoyable. If the computer were any good, it would crush you. No matter how frantically you can click the mouse, the computer will always be faster in dispatching its units, working out what to repair, building things as quickly as possible, and so on.

    The idea is that although the computer is superior in reaction times (and often, in number of units at the start of the level), you can beat it through better strategy and greater aggressiveness. Part of the fun of Dune 2 was working out the bugs or stupidities in the AI, and finding ways to exploit them.
  • by joneshenry ( 9497 ) on Tuesday March 21, 2000 @08:25AM (#1186917)
    Again I ask people to read Joy's article and see what he's advocating. Joy isn't really arguing that the technology of the future is inherently more dangerous than, say, nuclear or biological weapons; he's saying what's dangerous is individuals having access. The solution Joy advocates, sometimes implicitly and sometimes explicitly, is to restrict individual access to information and technology. For example, Joy says that IP laws could be "strengthened" to prevent misuse of technology--a new class of thought crimes.

    What bothers me almost as much as Joy's opinions is how he is advocating them. For someone with a doctorate, Joy shows a shocking lack of logical progression in his arguments. Joy brings up Ted Kaczynski merely to evoke emotions in the reader, without acknowledging that Kaczynski's case refutes Joy's arguments about how individuals could misuse the technology of the future to inflict global harm. Joy doesn't even mention that a brilliant but psychopathic man like Kaczynski simply had neither the resources nor the will to pursue the knowledge needed to inflict massive damage. Once he left mathematics, Kaczynski was starting from scratch as a bomb maker. And since Kaczynski rejected technology, all he had left was to fashion homemade bombs from simple materials. At no time was Ted Kaczynski capable of threatening global harm.

    In fact, for decades the popular media has reported many ways of threatening large populations, such as attacks on the water supply or the air. The closest such incident that has happened was possibly the cult in Japan that manufactured poison gas.

    I believe that any objective reading of history will show that whatever global threats existed in the last century came not from individuals but from governments. Organization and resources lie behind mass events. From the World Wars through the killing fields through Rwanda, we have seen the deaths of millions that government-sanctioned killing is capable of inflicting.

    I find it very disturbing that one of the architects of Java is so strongly advocating restricting individual rights. I wonder what is the agenda behind advocating taking computing away from decentralized PCs and putting it back into centralized servers, of moving computing power away from general purpose user programmable PCs to dumb specialized appliances.
  • by rde ( 17364 ) on Tuesday March 21, 2000 @08:37AM (#1186918)
    In some ways, yes, the brain is an emergent system arising from a requisite level of complexity in its makeup, but it's also the result of billions of years of evolution
    I don't think that's a valid comparison; evolution is essentially a random process, and one that changes only generationally (if that's a word). With AI, even if you're using some manner of evolutionary algorithm, the changes will happen much quicker; many thousands of 'mutations' a day may be checked for efficacy.

    The brain is not just a large neural net, and IMHO it will take far more understanding of both sapience and sentience before AI becomes a reality.
    True(ish). Just as evolution has no intrinsic purpose, so it may be possible to 'grow' an electronic brain without fully understanding it. That brain could then be used to make a smarter brain (that even it may not understand), and so it goes.

    Understanding would be nice, but I don't think it'll be necessary.
  • by Mr. Slippery ( 47854 ) <tms&infamous,net> on Tuesday March 21, 2000 @12:47PM (#1186919) Homepage
    Two things are different now: it's happening much faster and we're in control of it.
    The concern is that we'll lose control of it, that we'll do the sorcerer's apprentice bit. We're at that stage now with genetic engineering of crops; our "engineering" of genes is to splice the code we want into random spots in the genome and hope for the results we want. Imagine writing a program that way! This is not control. We have very little fscking idea what we're doing and we're releasing these plants into the biosphere. This is extraordinarily dumb, but there's potential profit to be made so ahead we charge.
    Admittedly, we're not necessarily smart or wise enough to do a good job at directing evolution, but it's not so far fetched to believe that we can do better than the more or less completely random process that has dominated the history of our planet.
    It's also not far-fetched to believe that we'll screw it up. When you're teaching yourself to use a dangerous tool, it behooves you to behave with extreme caution and progress very very slowly. It is not smart to learn just enough to turn on a chainsaw, and decide based on that that you have sufficient expertise to juggle them.
    And will they truly replace us or merge with us to form something different?
    My bet's on merge. (I figure to live about another 100 to 150 years in my original body (assuming that the grey goo doesn't eat us all) with a little help from nanotech and tissue engineering, then get my consciousness transferred/absorbed into a more durable and capable substrate and leave the planet.)
    (ethics, sadly, do not matter: have we ever created a weapon that we did not at least try to use?).
    Thermonuclear and neutron bombs?
    Someday, we will be replaced. It may happen slowly and imperceptibly, or swiftly and dramatically, but it will happen. It may happen in the 21st century, it may happen in the 12th millennium. As long as we have a legacy, does it matter what form it takes?
    I think the point is that if we're not careful, we may not have a legacy at all. We might, for example, all trade in our meat bodies for plastic ones only to have a genetically engineered bacterium we developed to clean up oil spills mutate and develop a taste for plastic.
  • by everstar ( 48850 ) on Tuesday March 21, 2000 @07:47AM (#1186920) Homepage
    I have to admit, my first thought on reading this was, "Well, maybe humans aren't worth saving? If our fundamental nature leads to obliteration, does the method really matter, per se?" But then I smacked myself with the Feather Duster of Optimism and tried to take another look at it.

    Speaking for myself, I know jack about nanotechnology, genetics, or robotics. The article itself went way over my head at times; I could hear the whistle as it sliced through the air. But I know enough about the necessity of evolution to be rather puzzled by what the next step would seem to be. If I understand him correctly, the only way to avoid imminent disaster is to declare a moratorium on all research and development on all the dangerous and scary forms of technology until we as a species have managed to grasp and deal with the ethical implications of what we're doing. This should be easy, since our species is so rational, cooperative, and willing to negotiate out ethical situations.

    So what are we left with? The idea that our enthusiasm and passion for technology, truth, and science is hurtling us towards a cataclysm unless we as a species yank on the whoa reins of development in order to sit down and discuss whether or not this is actually a good idea. And, since humankind as a species has never been able to come to an overarching agreement on any one topic, it seems to me that we're doomed.

    Which brings me back to the question I had when I finished skimming the article. What am I supposed to do about it? Unplug my computer? Join the Just Say No to Nanites consortium? Crawl into that leftover bunker from Y2K and pray that I can survive? For those of us not hobnobbing with scientific celebrities, what's the next step?

    Everstar
  • by Pike ( 52876 ) on Tuesday March 21, 2000 @08:04AM (#1186921) Journal
    It's interesting that this keeps coming up, but the fear of intelligent machines gradually taking over the earth and subverting our freedom arises from a misunderstanding of what we create machines for.

    People do not create machines to replace themselves and make decisions for them; they create machines to do small/repetitive tasks efficiently, to accentuate human ability, and to add to the human's capability to do the things he needs to do. It's true that this makes us more dependent on technology to some extent.

    However, machines of the future, far from becoming separate, sentient entities (pardon the alliteration), will exist to increase communication and facilitate better decision-making by humans, just as they do today.

    David Gelernter's books are very interesting in this regard. In Muse in the Machine he delves a little into psychology to postulate how we could make a "creative machine," but I think his book Mirror Worlds was more on the mark: how so-called intelligent technology will be used to facilitate decisions by people.

    I believe computers will eventually become smart enough to reason much like a human, and to reach intelligent conclusions within their task space. However, it is quite a huge leap to say that somehow computers will begin acting in their own interests without regard to human convenience or life.
  • by MosesJones ( 55544 ) on Tuesday March 21, 2000 @07:12AM (#1186922) Homepage

    Asimov had a great story ("Franchise") about a voting system by which a computer picked a single voter who represented all of the variables required to choose the right president.

    And then the question comes down to: who do you trust most? Bill Clinton, George Bush, Ronald Reagan, Margaret Thatcher, Francois Mitterrand, Helmut Kohl, or a sentient machine?

    Let's face it, machines can't fuck up half as badly as politicians have managed to do over the last 100 years.
  • by ucblockhead ( 63650 ) on Tuesday March 21, 2000 @07:31AM (#1186923) Homepage Journal
    That is mostly because what is called "AI" in most games isn't real AI. There are two reasons that we can create an AI in chess that can beat anyone:

    • Millions have been invested in that one game over a period of fifty years.
    • No one gets upset if a chess AI takes two minutes to move.

    Most of the games you mention require that all AI be done in the background, as action occurs in the foreground. Since game makers usually view pretty graphics and smooth animation as primary, they tend to avoid any AI that might take lots of CPU cycles. Of course, lots of CPU cycles is exactly what you need if you want to create an AI that has any sort of strategic concept.

    This is also true of strategic games like Civilization. Those games are far more complex than chess, yet though people will wait for two minutes for a chess computer to make a move, they complain if they have to wait ten seconds between turns in Civilization.

    In general, game companies pretty much just suck at AI. I suspect few people have real training in it. Game AIs I've seen range from utter crap to mediocre. A couple, like that in the "Warlords" series, do a little better. But in general, it is easier for game designers to use presets and scenario designs as in "Age of Empires", allow the computer to cheat (certain aspects of "Civilization"), or give it certain combat/production bonuses. A good AI takes real talent, while those other things are pretty easy to do.

    But anyway, don't ever think that game AI has anything at all to do with AI as it is practiced at places like MIT.
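
    To make the CPU-cycles point concrete, here is the skeleton of the brute-force minimax search a chess engine burns those two minutes on, played out on a toy take-1-2-or-3-counters game so it stays self-contained (real engines pile alpha-beta pruning, evaluation heuristics and opening books on top of this):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best_score(pile, to_move_is_max):
        # +1 if the maximizing player can force a win; whoever takes
        # the last counter wins, so an empty pile means the previous
        # player has already won
        if pile == 0:
            return -1 if to_move_is_max else 1
        scores = [best_score(pile - take, not to_move_is_max)
                  for take in (1, 2, 3) if take <= pile]
        return max(scores) if to_move_is_max else min(scores)

    # multiples of 4 are forced losses for the player to move
    for pile in range(1, 9):
        print(pile, "win" if best_score(pile, True) == 1 else "loss")

    Memoization makes this toy instant, but in a real game the tree grows exponentially with depth, which is exactly where the chess engine's two minutes -- and its strategic strength -- go.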

  • by dsplat ( 73054 ) on Tuesday March 21, 2000 @09:58AM (#1186924)
    Instead, what we will see is a series of gradual changes. Genetically superior humans won't appear overnight. Instead, humans will be slowly made superior, genetically. Superintelligent robots won't suddenly appear. Instead, they will slowly improve, and around the same time, I firmly believe that hardware will start being connected to human brains and human limbs.


    Ask yourself what freedoms you are willing to give up to have the advances that cybernetic enhancements may provide. And ask it in the context of the rights that UCITA confers. Would you be willing to have something implanted in your body that:

    1) Can be monitored without your consent?
    2) Can be deactivated by the manufacturer?
    3) You are not allowed to reverse engineer?
    4) You are not permitted to publicly criticize?
    5) When it fails and permanently disables you, the manufacturer can disclaim all liability?

    Thank you for playing. I want to be able to do my own security patches. I want to be able to compile out features that I don't trust.
  • by xant ( 99438 ) on Tuesday March 21, 2000 @10:11AM (#1186925) Homepage
    Read at least to the second paragraph - I'm going somewhere with this:

    Evolution perfects you to survive in a particular niche. That's why humans behave the way we do - around the time of Australopithecus it was more advantageous to see over the grass than to crawl around, so we started walking. It never became advantageous to crawl again. Then it became advantageous to use tools, so we learned how. Gradually, intelligence accreted, a particular kind of intelligence allowing us to survive in a world where other species of erect, somewhat intelligent simians (not to mention lions and tigers and bears, oh my) might try to kill us. We have a concept of "evil" only because the advantages of a structured society, which was a necessary and inevitable step in our evolution, are orthogonal to the advantages of killing your neighbor and taking his stuff. The nature of our intelligence, like the nature of our physical shape, has evolved to give us that concept.

    That's why we fear machines - we fear that, like God, we will create them in our own images; only, unlike God, we won't be able to dictate their every move and thought. Indeed, this is why there are so many religious debates on these types of issues: because we don't feel we have the right to be gods. I feel that the truth is going to be quite different. Machines won't have to solve the same sorts of problems we will. They won't have to kill tigers, they won't have to protect their families, they won't have to attempt to control more territory for their resources. Replicating, evolving machines, such as the type that Bill Joy thinks will devour us whole, will have to solve entirely different sets of problems for their survival, problems which--and this is very important--have little to no overlap with our own problems. They will need electrical power, and that's about it. If they evolve, it will be to find more and more efficient ways to collect sunlight. They won't have any interest in taking over the world because that is a mere reptilian biological imperative, planted into us by the ancient necessity of having territory in which to hunt safely.

    They won't be aware of us really, unless we GIVE THEM the power of thought. Like aardvarks or deer, they will only have to have as much thought as it takes to get the next meal. They don't have to be malevolent, or even sentient, to survive. And even if we do make them capable of reason (and it's almost inevitable that someone will), they will still use their reason to solve their own problems, not the problems that we think we have. Their own problems will mainly consist of the need to find a place to spread out a solar array so they can soak up all the juice they want, and maybe a little need for privacy. (Even that need is most likely a purely biological imperative, occasioned by the unsanitariness of living in close quarters with lots of humans.) Machines won't be evil, machines won't try to replace us, because they're not even in the same niche as us. It would be like orange trees competing with polar bears.

  • by mekkab ( 133181 ) on Tuesday March 21, 2000 @07:24AM (#1186926) Homepage Journal
    Sorry to complain, but this sort of debate has been going on forever - people thought that the powers of radiation were going to either A) make it possible for the lone MAD SCIENTIST to destroy the entire world, or B) lead to a new era of peace and prosperity in which we'd all be living in the WHITE CITY ON THE HILL.

    "Hey mekka, why all caps?"
    Because those are two images that have been culturally ingrained since the dawn of time...
    any history of science class worth its weight in silicon introduces this in the first week of class. I'll draw the pattern out for you. 1-> new invention. 2a-> doomsayers predict it will destroy us 2b-> optimists predict it will liberate us 3-> reality is that with new progress we have new responsibilities. By virtue of there being more to gain, we also have more to lose. Automobiles get us there faster, but if not operated properly they can be dangerous and they can kill us. Repeat this example ad infinitum and that's that.

    It's a lot more concise than 11 pages. But I will admit, I am making an assumption that people who invent/create do try to think about the social implications.



    p.s.- Searle's "Chinese Room" argument can be torn to shreds by any sophomore/junior philosophy major in a matter of seconds.
  • by Lazaru5 ( 28995 ) on Tuesday March 21, 2000 @08:11AM (#1186927)
    io% diff -u bar foo
    --- bar Tue Mar 21 11:11:19 2000
    +++ foo Tue Mar 21 11:11:03 2000
    @@ -1,6 +1,6 @@
    Concealed writes "There is an article in the new Wired which talks
    about the future of nanotechnology and 'intelligent machines.' Bill
    - Joy, (also the creator of the Linux text editor vi) who wrote the article,
    + Joy, (also the creator of the Unix text editor vi) who wrote the article,
    expresses his views on the neccesity of the human race in the near
    future. " From what I can gather this is the article that the Bill Joy on Extinction
    story was drawn from. Bill is a smart guy -- and this is well worth reading.

    And no admission on Slashdot/Hemos' part. Shame on you.
  • by Hellburner ( 127182 ) on Tuesday March 21, 2000 @07:16AM (#1186928)
    My one criticism of Joy's analysis was his disregard for writers of speculative/science fiction. Listening to Joy's interview last week on NPR, he basically stated that he had come to his doubt and uncertainty after "real" writers like Kurzweil had commented on the possible dangers of nanotech and runaway AI. So "fake" writers like Bear, Gibson, Benford and Brin---and I count at least three hard-science PhDs there---must lack the vision to make "real" speculative commentary on the future of emergent and possible technologies. They join the "fake" ranks of unreliables and nuts like Clarke and his silly comsat idea, or Wells and his bizarre ideas concerning the proliferation of advanced tech weapons. And let's not mention that buffoon Jules Verne. I don't question Joy's own technical credentials. Nor do I necessarily disagree with his analysis. I simply found his discounting of spec.fic. writers condescending and typical of the mundane society that can only catch up with a concept when it's featured on Entertainment Tonight.
  • by gilroy ( 155262 ) on Tuesday March 21, 2000 @07:47AM (#1186929) Homepage Journal
    Who cares?

    Why do people feel so threatened? Each generation is "replaced" by the next. Yet few parents see their children as threats. In a healthy relationship, we not only fail to fear succession by our progeny, we actively encourage it. Everyone wants their kids to "go further" than they themselves did.

    Other than the utterly irrelevant fact that these descendants will be silicon and metal, not carbon and water, is there any difference? These AIs will be heirs to Plato and Descartes, Jefferson and King, just like we are. Unencumbered by two megayears of grungy evolution, they might even get it right. Does it matter that they are not "flesh of our flesh"? Why should flesh matter at all?

    Almost everyone seems to come to the brink of recognizing the commonality but then they veer away. What defines "humanity"? Is it really 46 chromosomes in a particular order? I argue instead that it is our intelligence that makes us special, our thinking ability. I won't get dragged into the old argument whether this means cold-blooded logic only or whether it includes human emotions (but I will say that I agree with the latter.) But no matter how you define it, no matter what features of human existence make us human, those features are not inextricably linked to our "ugly bags of mostly water".

    The greatest fear I have is not that we will be replaced. It's that short-sighted species-centric thinking will obscure, delay, or throw away the trans-historic opportunities we will have in the coming century.

  • by ucblockhead ( 63650 ) on Tuesday March 21, 2000 @07:17AM (#1186930) Homepage Journal
  • ...but they will be our descendants.

    The problem here is the implication that one day, a bunch of humans, just like us, are suddenly going to find themselves obsolete, and either destroyed, or perhaps ignored, by some new, superintelligent entity that they created. But I don't see it happening that way.

    Instead, what we will see is a series of gradual changes. Genetically superior humans won't appear overnight. Instead, humans will be slowly made superior, genetically. Superintelligent robots won't suddenly appear. Instead, they will slowly improve, and around the same time, I firmly believe that hardware will start being connected to human brains and human limbs.

    So yes, in a thousand years, the rulers of this earth may not seem much like what we'd call human. But I'm willing to bet that if you looked over the period in between, you wouldn't see "humans" going extinct. You'd see a slow process of evolution (not darwinian, but directed) towards something greater. You'd never be able to find a dividing line between "human" and what's next.

    And while that may be frightening to some, it isn't really to me. We are "greater", at least in certain anthropomorphic senses, than the ape-like creature that we are descended from. But that creature did not "go extinct". It evolved into us. Something is going to evolve from us. This doesn't necessarily mean that we're all going to die at the hands of some sort of "SkyNet" AI. It just means that we aren't the be-all and end-all of creation.

    The human race won't be supplanted by "homo superior". It will become "homo superior".

