Defending Against Harmful Nanotech and Biotech

Posted by Roblimo
from the something-wicked-this-way-comes dept.
Maria Williams writes "KurzweilAI.net reported that: This year's recipients of the Lifeboat Foundation Guardian Award are Robert A. Freitas Jr. and Bill Joy, who have both been proposing solutions to the dangers of advanced technology since 2000. Robert A. Freitas Jr. has pioneered nanomedicine and the analysis of self-replicating nanotechnology. He advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments implemented as nonbiological hardware. In this context, 'artificial life' is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue." Bill Joy wrote "Why the future doesn't need us" in Wired in 2000, and with Guardian 2005 Award winner Ray Kurzweil he wrote the editorial "Recipe for Destruction" in the New York Times (reg. required), in which they argued against publishing the recipe for the 1918 influenza virus. In 2006, he helped launch a $200 million fund directed at developing defenses against biological viruses."
This discussion has been archived. No new comments can be posted.

  • by Daniel Dvorkin (106857) * on Monday March 13, 2006 @10:14AM (#14907519) Homepage Journal
    ... for reporting on Luddism, creationism, global warming denial, radical environmentalism, crank physics, etc.
  • Independence (Score:4, Interesting)

    by Mattygfunk (517948) on Monday March 13, 2006 @10:24AM (#14907600) Homepage
    If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines.

    Just because we may allow machines the ability to make their own decisions, and possibly influence some of ours, doesn't mean we're headed down the food chain. For starters, there will always be resistance to any new technology, and humans consider independence an admirable and desirable trait. For example, there are many average people who will never want to, and arguably never need to, use the Internet.

    While intelligent machines could improve the standard of living worldwide, we'll balance their use to extract, hopefully, the most personal gain.


  • by gurutc (613652) on Monday March 13, 2006 @10:27AM (#14907628)
    Guess what? The most successful and harmful examples of self-replicating artificial life are computer viruses and worms. Their evolution, propagation, and mutation features are nearly biological. Here's a theory: a computer worm/virus gets smart enough to secretly divert a small, unmonitored portion of a benign nanotech facility to produce nanobots that seek out CPU chips to bind to and take over...
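    The "nearly biological" analogy above can be made concrete with a toy replicator model. This is my own sketch, not anything from the comment or from real malware: each "worm" is a bit-string genome, replication copies it with a small per-bit mutation rate, and a crude fitness rule (here just the count of 1-bits, standing in for infectiousness) decides which copies survive. All names and parameters are invented for illustration.

    ```python
    import random

    GENOME_LEN = 16      # bits per genome (arbitrary toy size)
    MUTATION_RATE = 0.05 # chance each bit flips during replication
    POP_CAP = 50         # carrying capacity of the "network"

    def replicate(genome, rng):
        """Copy a genome, flipping each bit with probability MUTATION_RATE."""
        return [bit ^ (rng.random() < MUTATION_RATE) for bit in genome]

    def fitness(genome):
        """Stand-in for infectiousness: the number of 1-bits."""
        return sum(genome)

    def generation(population, rng):
        """Every worm replicates once; only the fittest POP_CAP survive."""
        offspring = [replicate(g, rng) for g in population]
        survivors = sorted(population + offspring, key=fitness, reverse=True)
        return survivors[:POP_CAP]

    rng = random.Random(42)
    pop = [[0] * GENOME_LEN for _ in range(10)]  # start with all-zero genomes
    for _ in range(30):
        pop = generation(pop, rng)

    # Mutation plus selection drives mean fitness upward over generations,
    # with no designer steering it -- the "nearly biological" part.
    mean_fitness = sum(fitness(g) for g in pop) / len(pop)
    print(mean_fitness)
    ```

    Even this crude loop shows the dynamic the commenter is pointing at: variation plus differential survival is enough to produce adaptation, whether the substrate is DNA or code.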
  • by chub_mackerel (911522) on Monday March 13, 2006 @10:34AM (#14907694)

    I agree with the parent: bans are counterproductive in many cases.

    Better is improved education, and I don't mean what you (probably) think... I'm NOT talking about "educating the (presumably ignorant) public" although that's important too. I'm talking about changing science education. It MUST, MUST, MUST include a high level of ethics, policy, and social study. I find it insane that people can specialize in science and from the moment they step into college, focus almost solely on their technical field.

    Part of any responsible science curriculum should involve risk assessments, historical studies of disasters and accidents (unfortunately all sciences have them), and so on.

    While we're at it, public research grants should probably include "educational" aspects. Scientists share a lot of the blame for the "public" ignorance of their endeavors. If you spend all your time DOING the science, and none of your time EXPLAINING the science, what do you expect?

    Basically, what I'm arguing for, as an alternative to banning things, is the forced re-socialization of the scientific enterprise. Otherwise, we're bound, eventually, to invent something that 1) is more harmful than we thought and 2) does harm faster than society's safeguards can kick in. Once that happens, we're in it good.

  • by Opportunist (166417) on Monday March 13, 2006 @10:35AM (#14907698)
    You know, one day we might be considered barbarians for using our computers the way we do. As property. Something you kick if it doesn't work the way you want it to. And when it gets sick 'cause it catches the latest virus, you go ahead and simply kill it, destroy all its memories, everything it learned and gathered, and you start over again.

    And calling it "it"... how dare I?

    I, for one, don't see the problem with having a thinking machine. We'll have to redefine a lot of laws. But having a sentient machine is not necessarily evil. Think outside the movie box of The Matrix and Terminator. But what machines need first of all is ethics, so they can be "human".

    On the other hand, considering some of the things going on in our world... if machines had ethics, they just might become the better humans...
  • by Opportunist (166417) on Monday March 13, 2006 @10:51AM (#14907869)
    Let's spin this a bit more... So imagine an artificial life form. Not knowing about its maker (for some reason or another), connected to the others with some kind of network, so they can interact.

    So if a machine behaves correctly and it pleases its maker, it is more likely that he will create meaningful backups, because the machine is pleasing to him and he is glad it's running smoothly. Should it die for some reason, be it old hardware or an infection, he will more likely use his backup instead of redoing the machine from scratch...

    Hinduism sure looks more appealing for computers than, say, Christianity. I mean, would you enjoy going to /dev/null after your final calculation?
  • by argoff (142580) on Monday March 13, 2006 @11:09AM (#14908043)
    In business 101, they teach that there are several ways for a business to guarantee a high profit. One way is to have high barriers to entry, and one way to achieve that is to create a bunch of safety and environmental regulations that act like a one-time cost for the billionaires, but an impossible barrier for small, efficient competitors.

    The bottom line is that nanotech is positioned to threaten a lot of big industrial powers, and to become a trillion-dollar industry in its own right. Contrary to popular belief, these concerns are not being pushed for safety's sake, or to protect the world .... they are being pushed to control the marketplace and lock in monopolies. The sooner people understand that, the better.
  • Re:In summary... (Score:4, Interesting)

    by Thangodin (177516) <elentar@syPARISmpatico.ca minus city> on Monday March 13, 2006 @11:26AM (#14908200) Homepage
    This whole grey goo scare is just bad science fiction. A machine that goes on replicating forever, eating everything in its path? If that were possible, don't you think that evolution would have come up with it already?

    The machine would have to get enough energy, and enough raw materials, in more or less the right proportions, to do this. A general purpose eating machine would be so energetically expensive that it would stall before it could replicate. Life adapts itself to specific environments and foods because it's cheaper, and that makes the difference between life and death. Specific purpose life forms are efficient, and thrive in their ecological niche very well, but are no good outside of it. The closest thing to a general purpose life form, that can eat everything in its path, is us.

    Not exactly nanoscopic, are we?
  • Re:Pandora's Box (Score:2, Interesting)

    by pigwiggle (882643) on Monday March 13, 2006 @12:16PM (#14908705) Homepage
    "There's no underground to do things that are genuinely difficult."

    Hmmm ... A. Q. Khan and Mohammed Farooq ring any bells?

    Look, there are black markets in every highly regulated, albeit 'genuinely difficult' activity. Cosmetic surgery, fertility procedures, arms proliferation, illicit technology transfer and development, and so on. If it's desirable (read: profitable), there is a market; if it's illegal, then it's a black market.
  • It's amazing, all these armchair chemists ridiculing someone who's done actual research on how the chemical factory that is our body works, finding out which chemicals we need and in what amounts. Have you read his Fantastic Voyage? Have you even heard of it?

    The human body is a chemical factory at its most basic level. Genetics (a system of chemical memes) predisposes you to be more or less sensitive, intolerant, needy, etc. of certain chemicals to keep the factory operating correctly and efficiently. Why is it so hard to understand that someone has analyzed his specific bodily needs, taking into account the general human body plan and personal genetics, to come up with his own personalized regimen of supplements (read: supplements, not food replacements) that, by all tests and accounts, seems to be working? He's completely beaten his type II diabetes and genetically predisposed heart conditions. I doubt he'll have to worry about "dying of kidney failure at an early age," since he's 56 and biological age tests put him at the body of a 40-year-old.

    Call me a Ray Kurzweil fanboy if you wish, but I'd rather be on the team of someone with a proven past and current success record.

  • Re:A dose of reality (Score:3, Interesting)

    by vertinox (846076) on Monday March 13, 2006 @12:36PM (#14908865)
    Worrying about this is like worrying about opening a worm-hole and letting dinosaurs back onto the earth because some physicist wrote a book about time-travel.

    http://en.wikipedia.org/wiki/Outside_Context_Problem [wikipedia.org]
    "An Outside Context Problem was the sort of thing most civilisations encountered just once, and which they tended to encounter rather in the same way a sentence encountered a full stop. The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you'd tamed the land, invented the wheel or writing or whatever, the neighbours were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had, you were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass... when suddenly this bristling lump of iron appears sailless and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you've just been discovered, you're all subjects of the Emperor now, he's keen on presents called tax and these bright-eyed holy men would like a word with your priests."
    Basically, you are a tribesman who just called the witch doctor mad because he felt it might be possible (based on knowledge from other tribes and research) that white men with boom sticks might show up one day and deliver a world of hurt to our way of life. We could get back to worrying about next season's crop, but that won't make any difference if these things do happen.

    Maybe we should invest in trying to invent gunpowder or better weapons... Or maybe ally ourselves with other tribes.

    Ignoring the problem won't make the conquistadors go away.
  • It's only natural (Score:3, Interesting)

    by gregor-e (136142) on Monday March 13, 2006 @01:32PM (#14909381) Homepage
    Look at the fossil record. Something like 99.999% of all species that have ever existed are now extinct. More precisely, they have been transformed into species that are better adapted to exploit the resources of their niche. How can we expect it to be any different for humans? As soon as an intelligence exists that is better at allocating resources than humans, it will become the apex species. Since this intelligence appears most likely to arise as a result of human effort, it can be thought of as a transformation of humans. This transformation is different from others in that it is likely to result in a non-biological intelligence, and because it is a function of intelligence (human, mostly), rather than a function of environmental selection pressures. This will also mark an inflection point in evolution where future generations are primarily a product of thought rather than random selection.
  • by Doc Ruby (173196) on Monday March 13, 2006 @01:57PM (#14909636) Homepage Journal
    I'm sure an international legal ban on nanoweapons and "nanomalware" will keep them stopped. Kim Jong Il, Iran's theocrats, America's theocrats and their fellow capitalists, warmongers and nutjobs all respect international safety/security laws so well, now that we're all joined in the harmony of global peace and prosperity.
  • by Red Weasel (166333) on Monday March 13, 2006 @02:29PM (#14909941) Homepage
    One side effect of green tea is that it helps to flush out your system. It supposedly clears out any excess pollutants that might be floating around.

    I don't know if that is true or not, but what I do know for sure is that you DO NOT go on a green tea bender if you are on birth control pills.
  • by geekotourist (80163) on Monday March 13, 2006 @05:09PM (#14911253) Journal
    If he recommended that everyone else take what he takes, that'd be a problem.

    But I have a copy of Fantastic Voyage right in front of me. What he recommends is:

    1. Eat well, lose weight, stop smoking, exercise, reduce stress.
    2. Take supplements, focusing on a limited number of 'universal' (good for almost everyone with some specific exceptions) and 'supernutrient' (very useful) supplements
    3. Research if you should take additional supplements specific to any health risks you have, where research can include medical tests, genetic tests, and family history, and then take those extra supplements, and
    4. Plan to update your supplement list as better information comes out through your personal medical tests or through medical research. For example, recommendations can change as studies on supplements are completed, as when a study found that beta carotene is dangerous to smokers and those with lung cancer.
    His recommended list of supplements is fairly short. The 'universal' supplements are vitamins plus minerals (except iron)... what you can find in a single good multivitamin. Then there are 6 'supernutrients': antioxidants and omega-3 fatty acids. 7 pills if you take them once a day.

    But checking my own multivitamin: it has 25 items listed, because it details each of the B vitamins and each of the minerals. Technically, then, I'm taking 25 supplements a day, but that doesn't mean I'm taking 25 pills a day.

  • by slavemowgli (585321) on Monday March 13, 2006 @09:25PM (#14912861) Homepage
    Maybe it's just me, but before we think about human rights for hypothetical sentient computers... shouldn't we think about human rights for animals? Not all of them make sense, of course (the right to an education, for example, or freedom of religion etc., or freedom of expression and opinion), but others do.

    We routinely mistreat animals in ways that are almost too horrible to describe. I'm not even talking about killing them for meat or similar products; but we kill them brutally, slowly, and painfully [umweltjournal.de], we kill them just for the fun of it [wikipedia.org], for the perverse pleasure of having absolute power over another being, and in fact, we have driven thousands of entire species to extinction already, and will most likely do the same to several thousand more.

    Human rights for sentient computers are fine and dandy. But shouldn't we solve the problems we already have in today's world before we think about the problems that would arise in a hypothetical future that may or may not ever come?
