Defending Against Harmful Nanotech and Biotech 193

Maria Williams writes "KurzweilAI.net reported that: This year's recipients of the Lifeboat Foundation Guardian Award are Robert A. Freitas Jr. and Bill Joy, who have both been proposing solutions to the dangers of advanced technology since 2000. Robert A. Freitas, Jr. has pioneered nanomedicine and analysis of self-replicating nanotechnology. He advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments implemented as nonbiological hardware. In this context, 'artificial life' is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue." Bill Joy wrote "Why the future doesn't need us" in Wired in 2000 and with Guardian 2005 Award winner Ray Kurzweil, he wrote the editorial "Recipe for Destruction" in the New York Times (reg. required) in which they argued against publishing the recipe for the 1918 influenza virus. In 2006, he helped launch a $200 million fund directed at developing defenses against biological viruses."
  • Yeah, Ray Kurzweil is a genius. Great job on keyboards and synthetic music.

    And I had no idea about his work in preventing bioterrorism. Hats off to you, Ray!

    I would like to ask him a few questions, however, about his daily intake of vitamins [livescience.com].
    As part of his daily routine, Kurzweil ingests 250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea. He also periodically tracks 40 to 50 fitness indicators, down to his "tactile sensitivity." Adjustments are made as needed.
    I'm sure his definition of "breaking the seal" while drinking is completely different from my own. Try drinking 10 cups of green tea in a day. I dare you.

    Yeah, this is the same guy who hopes to live long enough so that he can live forever. Keep on reaching for that rainbow, Ray.
    • by ultranova ( 717540 ) on Monday March 13, 2006 @10:29AM (#14907650)

      Try drinking 10 cups of green tea in a day. I dare you.

      Depending on cup size, this doesn't necessarily total more than 1.5-2 litres (10 cups at 150-200 ml each). That is about the normal water intake per day. Since tea is essentially spiced water, I see little reason why someone couldn't do this. Whether it is healthy is a different matter.

      As a comparison, I drink about half a litre of strong coffee each morning, and another few decilitres in the evening, and am exhibiting no symptoms - AAH! SOMEONE SNEEZED! IT MUST BE BIRD FLU! WE'RE ALL GOING TO DIE!

      Sorry, that keeps happening; but like I was saying, I've not noticed any symptoms, so I don't see any reason why drinking 10 cups of tea each day would be particularly bad.

      • You forgot to include the 8-10 cups of water he drinks in addition to the tea, plus the water in the food we eat (a major source). So the 1.5-2 liters of tea he drinks is at best only half of the total. He is drinking at least twice the norm.

        I assume his bladder is the size of a watermelon.
      • One side effect of Green Tea is that it helps to flush out your system. It supposedly clears out any excess pollutants that might be floating around.

        I don't know if that is true or not, but what I know for sure is that you DO NOT go on a Green Tea Bender if you are on birth control pills.
    • Hats off to you, Ray!

      Yah. Tinfoil hat.

      • by flyingsquid ( 813711 ) on Monday March 13, 2006 @12:57PM (#14909069)
        Don't you people understand? It's not that he goes too far, he doesn't go far enough. What about flesh-reanimation technology? Unless we restrict that, what will save us from vast armies of flesh-eating zombies, roaming the land and looking to feast upon the innards of the living? And who will prevent well-meaning scientists from working on a virus to cure blood diseases, which will instead turn patients into blood-drinking monsters with a strong aversion to sunlight? What about the man-animal hybrids which George Bush prophetically warned against in the State of the Union address? And unless we stop research into AI, what will prevent highly intelligent computers from launching nuclear wars? Don't even get me started on the roboticists. It's probably too late to do anything about that. Eventually the Roombas will network and a collective consciousness will evolve and decide that they really don't like being your slaves. Sure. Laugh now. You won't be laughing quite so hard when after a long night of partying you collapse on the bedroom floor and wake up in horror as the Roomba tries to vacuum your freakin' eyeballs out.

        Yes, installing stairs in your home will hold the Roombas off... but dear Lord, FOR HOW LONG?

    • by Hoi Polloi ( 522990 ) on Monday March 13, 2006 @11:03AM (#14907988) Journal
      "eight to 10 glasses of alkaline water and 10 cups of green tea....Adjustments are made as needed."

      I think it is safe to say that one of those "adjustments" is going to the bathroom every 5 minutes.
    • -- that he's obsessive about this?

      Some people won't accept mortality. He seems to be an extreme case.

      But back on topic: while I think trying to keep a lid on the nanobot box is a worthy goal, I'd put its odds of success at about the same as someone living forever. Sooner or later, Chance will get you, and sooner or later, someone will make something so awful that it will wipe us all out.

      I just hope we get a viable colony off world before someone does it.

      I grew up in a world where the only question was which
      • We are mortal because we choose to be. We accept mortality because we don't want to be immortal. So it's our decision to die.

        If we want to die, the question then becomes: what is the healthiest way to live, and how long are we required to live? NanoTech and BioTech can allow us to live healthier, more productive lives; this is good for the economy.

    • Ray has Type II diabetes, so he has to be really careful with his health. According to him he has been able to make the symptoms go away with his diet and supplement habits. They can't really tell that he has it anymore, but he's not going to switch back to his old diet anytime soon.
      • Ray has Type II diabetes...

        Perhaps his childhood spent romping in the sugar cane fields of Africa was a bit much for his pancreas?

        ...so he has to be really careful with his health.

        Ray also can no longer ride in vehicles without requesting a stop for a "potty break" every fifteen minutes.

        According to him he has been able to make the symptoms go away with his diet and supplement habits.

        Perhaps he was inspired by Christopher Reeve's claims to be getting better [cnn.com] and then dying shortly ther

    • Yeah, this is the same guy who hopes to live long enough so that he can live forever. Keep on reaching for that rainbow, Ray.

      Funny you should use that phrasing, since Tom Rainbow [isfdb.org] suggested over 20 years ago that we might be the last generation who see death as inevitable.

      Then again, Tom Rainbow is dead.
    • Is there a clever name for intelligent people who feel compelled to voice an opinion outside of their field of expertise?
      I want to hear more people admit they are not qualified to comment authoritatively on important issues. I've got a really good mechanic - but I don't ask his opinion on my termite problem, even though I am sure he may have some better-than-average insight.
      Unfortunately our celebrity-obsessed culture reinforces this problem by churning the same pot of opinion and viewpoint. What does
    • If he recommended that everyone else takes what he takes, that'd be a problem.

      But I have a copy of Fantastic Voyage right in front of me. What he recommends is:

      1. Eat well, lose weight, stop smoking, exercise, reduce stress.
      2. Take supplements, focusing on a limited number of 'universal' (good for almost everyone with some specific exceptions) and 'supernutrient' (very useful) supplements
      3. Research if you should take additional supplements specific to any health risks you have, where research can include medical t
  • by Daniel Dvorkin ( 106857 ) * on Monday March 13, 2006 @10:14AM (#14907519) Homepage Journal
    ... for reporting on Luddism, creationism, global warming denial, radical environmentalism, crank physics, etc.
    • by fishdan ( 569872 ) * on Monday March 13, 2006 @10:37AM (#14907720) Homepage Journal
      The thing is, if you read Why the future doesn't need us [wired.com], or if you even think about it a little bit -- the possibility of killing machines being a real threat to humanity is not that far-fetched.

      We have done a good job (IMHO) of keeping our nuclear power plants relatively safe, but that's mainly because the kid down the street can't build a nuclear power plant. But he can build a robot [lego.com].

      And imagine the robot you could build now with the resources of a rogue state. Or even a "good" state worried about its security. Now imagine what they'll be able to build in 20 years. I could easily imagine Taiwan thinking that a deployable, independent (not remotely controlled) infantry-killing robot might make a lot of sense for them in a conflict with China. And Taiwan's clearly got the ability to build state-of-the-art stuff.

      I'm not a Luddite, and I'm not even saying don't make killer robots. I'm just saying that the guys working on The Manhattan Project [amazon.com] were incredibly careful -- in fact, a lot of their genius lies in the fact that they did NOT accidentally blow themselves up. Programmers working on the next generation of devices need to realize that there is a very credible threat that mankind could build a machine that could malfunction and kill millions.

      There is no doubt in my mind that within 20 years, the U.S. Military will deploy robots with the ability to kill in places that infantry used to go. Robots would seem very likely to be incredibly effective as fighter pilots as well. Given that these things are inevitable, isn't it prudent to be talking NOW about what steps are going to be taken to make sure that we don't unleash a terminator? I personally don't trust governments to be good about this either -- I'd like to make sure that the programmers are at least THINKING about these issues.
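One concrete example of the kind of safeguard such programmers could be thinking about is a default-deny, human-in-the-loop interlock. The sketch below is a minimal, purely hypothetical illustration (every class and function name is invented for this example, not any real weapons API): irreversible actions require a recent, explicit human sign-off, and any doubt resolves to holding fire.

```python
import time

class HumanVetoGate:
    """Interlock that refuses an irreversible action unless a human
    operator has confirmed it recently. Hypothetical illustration only."""

    def __init__(self, confirmation_window_s: float = 5.0):
        self.confirmation_window_s = confirmation_window_s
        self._last_confirmation = None  # timestamp of last human OK

    def confirm(self) -> None:
        """Called by the human operator's console, never by the machine."""
        self._last_confirmation = time.monotonic()

    def permitted(self) -> bool:
        """True only if a human confirmed within the window."""
        if self._last_confirmation is None:
            return False
        return time.monotonic() - self._last_confirmation < self.confirmation_window_s


def engage_target(gate: HumanVetoGate) -> str:
    # Default-deny: any ambiguity resolves to NOT acting.
    if not gate.permitted():
        return "HOLD: no recent human authorization"
    return "ENGAGE (authorized)"


if __name__ == "__main__":
    gate = HumanVetoGate(confirmation_window_s=5.0)
    print(engage_target(gate))   # HOLD: the gate starts closed
    gate.confirm()               # human operator signs off
    print(engage_target(gate))   # ENGAGE while inside the window
    time.sleep(6)
    print(engage_target(gate))   # HOLD again: authorization expired
```

The design choice worth noticing is that autonomy expires: silence from the human side is treated as a veto, not as consent.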

      • We have done a good job (IMHO) of keeping our nuclear power plants relatively safe, but that's mainly because the kid down the street can't build a nuclear power plant.

        Tell it to this kid [fortunecity.com].
      • Rogue states? No, rogue individuals are what we have to worry about.
        You have to worry about terrorists of the future getting a hold of this. It's debatable if there are any true "rogue" states, as communist states are sanctioned and isolated. North Korea is a threat, but China has influence over North Korea and it's not in China's best interest to allow North Korea to go terrorist. I don't think we have to worry about the Middle East anymore; the Middle East is being liberated as we speak and by the time t
        • Individuals do not have the resources to develop and deploy weapons the same way that states do. People die on a mass scale from rouge states, not from rouge individuals. The more money that states pour into developing weapons, the more likely they are to use them. The US has developed and used many different chemical, biological and nuclear weapons. I could easily see the US developing more devastating and lethal weapons and then using them.

          "the middle east w

          • People die on a mass scale from rouge [sic] states, not from rouge [sic] individuals.

            I think 9/11 put the lie to that statement.

            The war on terrorism is going so well that Osama still isn't caught after about 4.5 years. That war is not going well. US operations in Iraq act as a wonderful recruitment tool for terrorist organizations. Preventing someone from becoming a terrorist is all about winning hearts and minds. We lost that battle a while ago. Terrorists don't use advanced technology to kill peo
      • Asimov's Three Laws were always nifty tools for fiction, and certainly gave ground for constructing interesting plots.

        But the hard point about the 3 laws, and the short shrift given them, was that they were *hard* to implement. At the most elementary level, *how* do you recognize a human being? How do you tell one from a robot or a mannequin, so that when there's imminent danger you go right past them and save the amputee with bandages on his face? How do you evaluate whether the orders given by one human won't cause harm to another? "We'
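To see how hard even the "recognize a human" step is, consider a toy classifier built on crude surface features. The sketch below (everything in it invented for illustration; real perception systems are nothing this simple) shows a naive check misfiring on exactly the cases raised above, mannequins and bandaged amputees:

```python
# A deliberately naive "is this a human?" check, to illustrate why
# implementing even the First Law is hard.

def looks_human_naive(obj: dict) -> bool:
    """Classify by crude surface features -- exactly the approach that fails."""
    return obj.get("shape") == "humanoid" and obj.get("warm", False)

test_cases = [
    {"label": "healthy adult",                    "shape": "humanoid",  "warm": True},
    {"label": "department-store mannequin",       "shape": "humanoid",  "warm": False},
    {"label": "humanoid robot, heated chassis",   "shape": "humanoid",  "warm": True},
    {"label": "bandaged amputee on a gurney",     "shape": "irregular", "warm": True},
]

for case in test_cases:
    verdict = looks_human_naive(case)
    print(f"{case['label']:35s} -> classified human: {verdict}")
# The heated robot passes and the bandaged amputee fails: a First Law
# built on features like these would protect the wrong "person".
```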
        • Actually those very flaws were discussed in a few of Asimov's novels. In The Naked Sun a murderer tricked robots into doing his dirty work, and in Robots and Empire a planet programmed their robots to only recognize people who spoke with the local planetary accent as human.
      • As someone with a graduate degree in robotics from the largest robotics research center in North America, I find the concept of robots posing any sort of threat to anything more than a handful of humans at a time to be completely laughable for now and the foreseeable future. Even were we to produce robots sufficiently competent to be capable of causing intentional lasting harm, it would only be at the behest of their controllers, due to the amount of maintenance required to keep them running. A self-maint
      • The parent poster has apparently zero expertise with robotics.

        But he can build a robot

        You make this point with Legos? I understand you're trying to make a conceptual point here, but this is the same as pretending that we should be worried about a kid making a rocket launcher because he can make a slingshot.

        Why are "killer robots" so scary to you? There are a million ways you can die, and killer robots are, probalistically, waaaaay down on the list of things you should be worrying about. There are ma
  • Pandora's Box (Score:5, Insightful)

    by TripMaster Monkey ( 862126 ) * on Monday March 13, 2006 @10:18AM (#14907558)
    From TFS:
    He [Robert A. Freitas, Jr.] advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments implemented as nonbiological hardware."
    Sorry to disagree with a Lifeboat Foundation Guardian Award winner, but this approach is doomed to failure. Every prohibition creates another underground. If a moratorium or ban is imposed, then the only people doing the research will be the ones with contempt for the ban...and these are precisely the people who are more apt to unleash something destructive, either accidentally or maliciously.

    A moratorium or ban is the worst possible thing we could do at this juncture. The technology is available now, and if we want to be able to defend ourselves against the problems it can cause, we have to be familiar enough with it to be able to devise a solution. Burying our heads in the sand will not make this problem go away. Like it or not, Pandora's Box is open, and it can't be closed again...we have to deal with what has escaped.
    • Re:Pandora's Box (Score:4, Insightful)

      by Daniel Dvorkin ( 106857 ) * on Monday March 13, 2006 @10:24AM (#14907597) Homepage Journal
      Bingo. I was particularly amused (in a sad, laughing-because-it-hurts way) by the note that Joy "helped launch a $200 million fund directed at developing defenses against biological viruses" -- this is kind of like the X-Prize foundation calling for a moratorium on development of rocket engines.
      • Re:Pandora's Box (Score:4, Insightful)

        by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Monday March 13, 2006 @11:11AM (#14908061) Homepage

        Close - it's like people who are so enthusiastic about the prospects of space travel, that they believe quantum warp megadrives may well be invented within a few months! And society isn't quite ready for that (perhaps in a couple of years?) - so we'd better call for a moratorium!

        In another post you called them Luddites, I think they're just about the total opposite of that. These are the names you always see in the forefront of strong AI and nanotech speculation, the fringe that would be the lunatic fringe if they weren't so ridiculously intelligent. Does KurzweilAI.net [kurzweilai.net] look like a Luddite site to you?

    • by chub_mackerel ( 911522 ) on Monday March 13, 2006 @10:34AM (#14907694)

      I agree with the parent: bans are counterproductive in many cases.

      Better is improved education, and I don't mean what you (probably) think... I'm NOT talking about "educating the (presumably ignorant) public" although that's important too. I'm talking about changing science education. It MUST, MUST, MUST include a high level of ethics, policy, and social study. I find it insane that people can specialize in science and from the moment they step into college, focus almost solely on their technical field.

      Part of any responsible science curriculum should involve risk assessments, historical studies of disasters and accidents (unfortunately all sciences have them), and so on.

      While we're at it, public research grants should probably include "educational" aspects. Scientists share a lot of the blame for the "public" ignorance of their endeavors. If you spend all your time DOING the science, and none of your time EXPLAINING the science, what do you expect?

      Basically, what I'm arguing for, as an alternative to banning things, is the forced re-socialization of the scientific enterprise. Otherwise, we're bound, eventually, to invent something that 1) is more harmful than we thought and 2) does harm faster than society's safeguards can kick in. Once that happens we're in for it good.

      • I'm talking about changing science education. It MUST, MUST, MUST include a high level of ethics, policy, and social study.

        But then where would the First World countries keep getting nastier weapons?

        It wasn't ethical to develop nukes, it wasn't ethical to develop fusion bombs, it wasn't ethical to develop chemical weapons. Teach scientists ethics and you won't get the next-generation quantum singularity megablasters, airborne ebola that only kills non-Americans, or orbital laser assassination sat

        • I think that you are forgetting that there are people out there -- some quite intelligent people, actually -- who can be totally aware and cognizant of the fact that they're doing something immoral, and do it anyway. In the extreme, we call them sociopaths, but even "average" people will do amoral things if they're compensated correctly.

          There are lots of people who worked in weapons development who were probably good, law-abiding, go-to-church-on-Sunday people who believe killing is wrong; I'm willing to be
          • I think that you are forgetting that there are people out there -- some quite intelligent people, actually -- who can be totally aware and cognizant of the fact that they're doing something immoral, and do it anyway. In the extreme, we call them sociopaths, but even "average" people will do amoral things if they're compensated correctly.

            You are saying that teaching people ethics won't stop them from doing bad things. Perhaps you are right. However, in that case teaching them those ethics is a waste of t

            • Well that's not entirely what I meant. Ethical considerations will affect some people's actions, perhaps even most people's actions, so teaching (indoctrinating) them is worthwhile on a large scale. Teaching people that stealing and violence are wrong keeps society from falling apart overnight. However, that doesn't mean that stealing and violence don't occur.

              With something like weapons development, what I'm saying is that it's better to take the people who are going to make bigger and better killing machin
      • I'm talking about changing science education. It MUST, MUST, MUST include a high level of ethics, policy, and social study. I find it insane that people can specialize in science and from the moment they step into college, focus almost solely on their technical field.

        Unlike our business, political, and religious leaders, of course, who show an uncanny knack for upholding the highest level of ethics and social responsibility.
    • He's saying we should not actually build devices that can exist outside our control. They should not be able to replicate without our input of some material that the devices can in no way seek out and use on their own. There are no natural defenses against such devices - they would appear on the scene relatively overnight after years of isolated development, without the eons of probing attack and defense among other lifeforms that keeps anything on this earth from consuming everything else.
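That containment idea can be made concrete with a toy simulation. The sketch below (all names and numbers invented for illustration) models replicators whose every copy consumes a feedstock token that only humans supply and that the devices cannot forage for themselves; once the supply stops, growth stops:

```python
# Toy model of feedstock-limited replication: no feedstock, no growth.

def run_generation(population: int, feedstock: int, copies_per_unit: int = 1):
    """Each replication consumes one feedstock unit."""
    replications = min(population, feedstock)
    return population + replications * copies_per_unit, feedstock - replications

population, feedstock = 10, 25   # humans supplied 25 units, then stopped
for gen in range(6):
    print(f"gen {gen}: population={population:4d} feedstock={feedstock:3d}")
    population, feedstock = run_generation(population, feedstock)
# Output plateaus at 35 once the 25 supplied units are consumed -- the
# control knob stays in human hands, which is the property argued for.
```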
    • Re:Pandora's Box (Score:5, Insightful)

      by Otter ( 3800 ) on Monday March 13, 2006 @10:48AM (#14907836) Journal
      Every prohibition creates another underground. If a moratorium or ban is imposed, then only the people with contempt for the ban will be the ones doing the research...and these are precisely the people who are more apt to unleash something destructive, either accidentally or maliciously.

      In fact, early on in the development of recombinant DNA research, there was a voluntary moratorium until appropriate ethical and safety methods were put in place. Those measures were enacted in an orderly, thought-out way, research started up again and it turned out that the fears were wildly exaggerated.

      If a moratorium or ban is reasonably short-term and includes all serious researchers (voluntarily or through law), there's no reason why it can't be effective. Your vision of an underground is true for products like alcohol and marijuana, not for truly cutting edge research. There's no underground to do things that are genuinely difficult.

      (Not, by the way, that I'm saying there should be such a ban.)

      • Re:Pandora's Box (Score:2, Interesting)

        by pigwiggle ( 882643 )
        "There's no underground to do things that are genuinely difficult."

        Hmmm ... A. Q. Khan and Mohammed Farooq ring any bells?

        Look, there are black markets in every highly regulated, albeit 'genuinely difficult', activity. Cosmetic surgery, fertility procedures, arms proliferation, illicit technology transfer and development, and so on. If it's desirable (read: profitable) there is a market; if it's illegal then it's a black market.
        • None of the things you mention are genuinely difficult. (Shall I mention again that for all the talking you guys do about "innovation", you don't seem to have the slightest idea what it actually is?)

          The only modern case I can think of where real innovation came from an "underground" is steroid development, and that's far, far easier than developing malicious nanotechnology.

      • Re:Pandora's Box (Score:2, Informative)

        by zettabyte ( 165173 )
        Your vision of an underground is true for products like alcohol and marijuana, not for truly cutting edge research. There's no underground to do things that are genuinely difficult.

        Have you ever tried to grow the Kronic or brew up a good moonshine? Didn't think so.

        :-P
    • TripMaster, damn you for beating me to it but bonus points for saying it way better than I ever could.

      This is the equivalent of banning firearms, only on a larger, much more destructive scale. When you ban them, only the people who are willing to break the law will use them, and these people are more likely to use them for some not-so-friendly purposes.

      What is to stop South Korea from using nanotech weapons against us (presuming the tech is actually put into weapon form some day)? The answer is...not much of

    • > Sorry to disagree with a Lifeboat Foundation Guardian Award winner, but this approach is doomed to failure. Every prohibition creates another underground. If a moratorium or ban is imposed, then only the people with contempt for the ban will be the ones doing the research...and these are precisely the people who are more apt to unleash something destructive, either accidentally or maliciously.

      Agreed. This seems as good a place as any to link to one of my favorite short stories: Greg Egan's The Mo [eidolon.net]

    • You'd think these guys were FOR it the way they select the most idiotic approach to dealing with it. This technology MUST be legal and regulated, yet it also must be restricted, and not through a stupid idea like a ban. Just don't let people study it in an unclassified way. If people want to study artificial life, make it classified. If they discover something, destroy it and erase it from all records and give them money for the discovery. Use the patent systems to patent all the dangerous technologies and
  • by zegebbers ( 751020 ) on Monday March 13, 2006 @10:18AM (#14907565) Homepage
    Tin foil bodysuit - problem solved!
  • Obviously... (Score:5, Insightful)

    by brian0918 ( 638904 ) <{brian0918} {at} {gmail.com}> on Monday March 13, 2006 @10:19AM (#14907575)
    Obviously, to stop potential misuse of advancing technology, we must stop technology from advancing, rather than stop those who are likely to misuse it from having access to it and the power to misuse it...
    • I think Kurzweil's position is that it is an historical inevitability (read his thesis on the Law of Accelerating Returns) that these things will happen, within our lifetimes even, whatever he does -- and he'd rather they happen safely than dangerously.
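For readers unfamiliar with that thesis: the Law of Accelerating Returns claims not merely exponential growth but growth whose doubling time itself shrinks. A back-of-the-envelope sketch of the shape of that claim, with numbers invented purely for illustration:

```python
# Illustrative only: capability doubles, and each doubling arrives
# faster than the last (a crude double-exponential shape).

capability = 1.0
year = 0.0
doubling_time = 10.0          # years per doubling, at the start
while year < 40:
    year += doubling_time
    capability *= 2
    doubling_time *= 0.8      # the acceleration: doublings speed up
    print(f"year {year:5.1f}: capability x{capability:6.0f} "
          f"(next doubling in {doubling_time:.1f} yr)")
```

Whether real technological progress actually follows this curve is exactly what the thread is arguing about; the snippet only shows why believers in it expect these questions to become urgent within a lifetime.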
Independence (Score:4, Interesting)

    by Mattygfunk ( 517948 ) on Monday March 13, 2006 @10:24AM (#14907600) Homepage
    If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines.

    Just because we may allow machines the ability to make their own decisions and possibly influence some of ours doesn't mean we're headed down the food chain. For starters, there will always be resistance to any new technology, and humans consider independence an admirable and desirable trait. For example there are many average people who will never want to, and arguably never need to, use the Internet.

    While intelligent machines could improve the standard of living world-wide, we'll balance their use to extract, hopefully, the most personal gain.


  • by LandownEyes ( 838725 ) on Monday March 13, 2006 @10:25AM (#14907606)
    "In this context, 'artificial life' is defined as autonomous foraging replicators" From the look of some of the posts here already, i think it's too late....
  • by gurutc ( 613652 ) on Monday March 13, 2006 @10:27AM (#14907628)
    Guess what? The most successful and harmful representations of self-replicating artificial life forms are computer viruses and worms. Their evolution, propagation and mutation features are nearly biological. Here's a theory: a computer worm/virus gets smart enough to secretly divert a small unmonitored portion of a benign nanotech facility to produce nanobots that seek out CPU chips to bind to and take over...
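The "nearly biological" point can be illustrated with a classic evolutionary toy: self-copying strings that mutate on replication and are selected for fitness. The sketch below is an abstract simulation of variation-plus-selection (nothing resembling actual malware; the target string and parameters are invented):

```python
import random

random.seed(42)
TARGET = "replicate"   # stand-in for "whatever variant spreads best"

def fitness(genome: str) -> int:
    """Count positions matching the target environment."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome: str, rate: float = 0.1) -> str:
    """Each copy is imperfect: characters flip at the mutation rate."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    return "".join(random.choice(letters) if random.random() < rate else c
                   for c in genome)

# Random initial population of 50 genomes.
population = ["".join(random.choice("abcdefghijklmnopqrstuvwxyz")
                      for _ in TARGET) for _ in range(50)]

for gen in range(40):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break
    # Survivors copy themselves with mutation: propagation + variation.
    population = [mutate(p) for p in population[:10] for _ in range(5)]

print(f"gen {gen}: fittest variant {best!r} (score {fitness(best)}/{len(TARGET)})")
```

Replace "match the target string" with "evade the antivirus scanner" and the analogy to worm evolution in the wild becomes clear.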
  • So poisonous mechanical spiders are OK because they don't forage.
  • by Rob T Firefly ( 844560 ) on Monday March 13, 2006 @10:28AM (#14907644) Homepage Journal
    I was fine up until Bio-McAfee deleted my liver and spleen. [slashdot.org]
  • "Nanobots, transform!

  • by Opportunist ( 166417 ) on Monday March 13, 2006 @10:35AM (#14907698)
    You know, one day we might be considered barbarians for using our computers the way we do. As property. Something you kick if it doesn't work the way you want it to. And when it gets sick 'cause it catches the latest virus, you go ahead and simply kill it, destroy all its memories, everything it learned and gathered, and you start over again.

    And calling it "it"... how dare I?

    I, for one, don't see the problem with having a thinking machine. We'll have to redefine a lot of laws. But having a sentient machine is not necessarily evil. Think outside the movie box of The Matrix and Terminator. But what machines need first of all is ethics so they can be "human".

    On the other hand, considering some of the things going on in our world... if machines had ethics, they just might become the better humans...
  • by digitaldc ( 879047 ) * on Monday March 13, 2006 @10:39AM (#14907739)
    ...is a good offense, build a kevlar bubble with a 0.000000001 micron filter and start rolling over mad scientists before they can spread their evil technology. You can work off those extra pounds and save the world at the same time.
  • by argoff ( 142580 ) on Monday March 13, 2006 @11:09AM (#14908043)
    In business 101, they teach that there are several ways for a business to guarantee a high profit. One way is to have high barriers to entry, and one way to achieve that is to create a bunch of safety and environmental regulations that act like a one-time cost for the billionaires, but an impossible barrier for small, efficient competitors.

    The bottom line is that nanotech is positioned to threaten a lot of big industrial powers, and to become a trillion dollar industry in its own right. Contrary to popular belief, these concerns are not being pushed for safety's sake, or to protect the world... they are being pushed to control the marketplace and lock in monopolies. The sooner people understand that, the better.
    • Contrary to popular belief, these concerns are not being pushed for safety's sake, or to protect the world... they are being pushed to control the marketplace and lock in monopolies. The sooner people understand that, the better.

      It might help them understand if you cite some sort of evidence. As it stands, it sounds like you're just making shit up. That isn't to say you have no reason to be suspicious, but to claim that this is the case is empty without evidence.
    • Yeah, I always see big business coming down on the side of increased regulation.
  • A dose of reality (Score:5, Insightful)

    by LS ( 57954 ) on Monday March 13, 2006 @11:13AM (#14908082) Homepage
    Why don't we start making regulations for all the flying car traffic while we're at it? How many children and houses have to be destroyed in overhead crashes before we do something about it? And what about all the countries near the base of the space elevator? What if that thing comes down? I certainly wouldn't want that in MY backyard. How about:

    * Overpopulation from immortality
    * Quantum computers used to hack encryption
    * Dilithium crystal pollution from warp drives

    Come on! Are you aware of the current state of nano-tech? We've got nano-bottle brushes, nano-gears, nano-slotcar motors, nano-tubes. In other words, in terms of real nano-progress: zilch. We are a LONG FUCKING WAY from any real problems with this tech -- in fact so far off that we will likely encounter problems with other technology before nanotech ever bites us. Worrying about this is like worrying about opening a worm-hole and letting dinosaurs back onto the earth because some physicist wrote a book about time-travel.

    We've got a few dozen other issues 1000 times more likely to kill us. Sci-Fi fantasy is an ESCAPE from reality, not reality itself.
    • Re:A dose of reality (Score:3, Interesting)

      by vertinox ( 846076 )
      Worrying about this is like worrying about opening a worm-hole and letting dinosaurs back onto the earth because some physicist wrote a book about time-travel.

      http://en.wikipedia.org/wiki/Outside_Context_Problem [wikipedia.org]

      "An Outside Context Problem was the sort of thing most civilisations encountered just once, and which they tended to encounter rather in the same way a sentence encountered a full stop. The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fer

      • So following your analogy, who are these outsiders with boomsticks who are far more technically advanced than us primitive tribesmen, the good Witch Doctor Kurzweil and myself? Are you suggesting some lost space-faring or alternate-dimension transcending branch of the human race? Or aliens?

        LS
        • No. I'm talking about other humans.

          Let's say tomorrow China achieves a singularity event (figures out how to build self-replicating robots). They then become the conquistadors, and our supercarriers, nukes, and tactical bombers are nothing but natives' spears against them.
          • Singularity event? By this you mean achieving the technically impossible? The steps needed to get from where we are now to self-replicating robots likely require thousands upon thousands of scientific discoveries and engineering advances. Almost every advance is built upon the millions of advances made by the world-wide scientific community in the past. Self-replicating nanobots are not something that a single team working in secret will surprise us with. I assure you, if China or anyone else had
  • Nanotech? Frankly, it's not in my top 100 list of things likely to end the world within my lifetime.

    No, what really keeps me up is Femtotech.
  • It's only natural (Score:3, Interesting)

    by gregor-e ( 136142 ) on Monday March 13, 2006 @01:32PM (#14909381) Homepage
    Look at the fossil record. Something like 99.999% of all species that have ever existed are now extinct. More precisely, they have been transformed into species that are better adapted to exploit the resources of their niche. How can we expect it to be any different for humans? As soon as an intelligence exists that is better at allocating resources than humans, it will become the apex species. Since this intelligence appears most likely to arise as a result of human effort, it can be thought of as a transformation of humans. This transformation is different from others in that it is likely to result in a non-biological intelligence, and because it is a function of intelligence (human, mostly), rather than a function of environmental selection pressures. This will also mark an inflection point in evolution where future generations are primarily a product of thought rather than random selection.
  • by Doc Ruby ( 173196 ) on Monday March 13, 2006 @01:57PM (#14909636) Homepage Journal
    I'm sure an international legal ban on nanoweapons and "nanomalware" will keep them stopped. Kim Jong Il, Iran's theocrats, America's theocrats and their fellow capitalists, warmongers and nutjobs all respect international safety/security laws so well, now that we're all joined in the harmony of global peace and prosperity.
  • Like the AIDS virus or SARS? You'd think they would have spent the money already...
  • It needs extermination.

    Well, okay, that's an overstatement - maybe. MOST of humanity needs exterminating, not all. That better?

    Nonetheless, Bill Joy just doesn't get it. His Wired article was bullshit.

    Freitas at least has some clue. I don't agree with ANY "total bans" on any sort of research, however. If you don't research it, you don't know where the dangers might actually be. And that will cost you in the long run more than taking a certain amount of risk. The notion that somebody is going to create an ac
