Defending Against Harmful Nanotech and Biotech
Maria Williams writes "KurzweilAI.net reported that: This year's recipients of the Lifeboat Foundation Guardian Award are Robert A. Freitas Jr. and Bill Joy, who have both been proposing solutions to the dangers of advanced technology since 2000.
Robert A. Freitas Jr. has pioneered nanomedicine and the analysis of self-replicating nanotechnology. He advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments implemented as nonbiological hardware. In this context, 'artificial life' is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue."
Bill Joy wrote "Why the future doesn't need us" in Wired in 2000, and with Guardian 2005 Award winner Ray Kurzweil he wrote the editorial "Recipe for Destruction" in the New York Times (reg. required), in which they argued against publishing the recipe for the 1918 influenza virus. In 2006, he helped launch a $200 million fund directed at developing defenses against biological viruses."
Maybe /. needs an "Anti-Science" section ... (Score:4, Interesting)
Independance (Score:4, Interesting)
Just because we may allow machines the ability to make their own decisions, and possibly influence some of ours, doesn't mean we're headed down the food chain. For starters, there will always be resistance to any new technology, and humans consider independence an admirable and desirable trait. For example, there are many average people who will never want to, and arguably never need to, use the Internet.
While intelligent machines could improve the standard of living world-wide, we'll balance their use to extract, hopefully, the most personal gain.
Excluding Software Simulations (Score:5, Interesting)
Maybe Education is Better (Score:5, Interesting)
I agree with the parent: bans are counterproductive in many cases.
Better is improved education, and I don't mean what you (probably) think... I'm NOT talking about "educating the (presumably ignorant) public" although that's important too. I'm talking about changing science education. It MUST, MUST, MUST include a high level of ethics, policy, and social study. I find it insane that people can specialize in science and from the moment they step into college, focus almost solely on their technical field.
Part of any responsible science curriculum should involve risk assessments, historical studies of disasters and accidents (unfortunately all sciences have them), and so on.
While we're at it, public research grants should probably include "educational" aspects. Scientists share a lot of the blame for the "public" ignorance of their endeavors. If you spend all your time DOING the science, and none of your time EXPLAINING the science, what do you expect?
Basically, what I'm arguing for, as an alternative to banning things, is the forced re-socialization of the scientific enterprise. Otherwise, we're bound, eventually, to invent something that 1) is more harmful than we thought and 2) does harm faster than society's safeguards can kick in. Once that happens, we're in deep trouble.
Human rights for artificial lifeforms? (Score:4, Interesting)
And calling it "it"... how dare I?
I, for one, don't see a problem with having a thinking machine. We'll have to redefine a lot of laws, but a sentient machine is not necessarily evil. Think outside the movie box of The Matrix and Terminator. What machines need first of all, though, is ethics, so they can be "human".
On the other hand, considering some of the things going on in our world... if machines had ethics, they just might become the better humans...
Hmm.... computer religion (Score:3, Interesting)
So if a machine behaves correctly and it pleases its maker, it is more likely that he will create meaningful backups, because the machine is pleasing to him and he is glad it's running smoothly. Should it die for some reason, be it old hardware or an infection, he will more likely restore from his backup instead of rebuilding the machine from scratch...
Hinduism sure looks more appealing for computers than, say, Christianity. I mean, would you enjoy going to
There is nothing to "defend" against (Score:5, Interesting)
The bottom line is that nanotech is positioned to threaten a lot of big industrial powers and to become a trillion-dollar industry in its own right. Contrary to popular belief, these concerns are not being pushed for safety's sake, or to protect the world
Re:In summary... (Score:4, Interesting)
The machine would have to get enough energy, and enough raw materials, in more or less the right proportions, to do this. A general purpose eating machine would be so energetically expensive that it would stall before it could replicate. Life adapts itself to specific environments and foods because it's cheaper, and that makes the difference between life and death. Specific purpose life forms are efficient, and thrive in their ecological niche very well, but are no good outside of it. The closest thing to a general purpose life form, that can eat everything in its path, is us.
Not exactly nanoscopic, are we?
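The parent's energy argument can be sketched as a toy budget model. This is purely illustrative, assuming made-up numbers: a machine banks net energy each cycle (intake minus upkeep) until it can afford one copy of itself, and a generalist whose versatile machinery costs more to run than it harvests never gets there.

```python
# Toy energy-budget model of a self-replicating machine.
# All numbers are hypothetical assumptions, not measurements.

def cycles_to_replicate(intake, upkeep, build_cost, max_cycles=10_000):
    """Return cycles needed to bank `build_cost` energy, or None if it stalls."""
    bank = 0.0
    for cycle in range(1, max_cycles + 1):
        bank += intake - upkeep  # net energy gained this cycle
        if bank < 0:
            return None          # upkeep exceeds intake: the machine stalls
        if bank >= build_cost:
            return cycle
    return None

# A specialist tuned to one food source: cheap upkeep relative to intake.
print(cycles_to_replicate(intake=10, upkeep=4, build_cost=120))   # 20 cycles

# A generalist "eat anything" design: running the versatile machinery
# costs more than it harvests, so it stalls before it can copy itself.
print(cycles_to_replicate(intake=10, upkeep=12, build_cost=120))  # None
```

The point of the sketch is only the inequality: replication requires intake to exceed upkeep for long enough to pay the build cost, and generality tends to raise upkeep.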
Re:Pandora's Box (Score:2, Interesting)
Hmmm
Look, there are black markets in every highly regulated, albeit 'genuinely difficult', activity: cosmetic surgery, fertility procedures, arms proliferation, illicit technology transfer and development, and so on. If it's desirable (read: profitable) there is a market; if it's illegal then it's a black market.
Re:You can call me Ray & you can call me Jay . (Score:2, Interesting)
The human body is a chemical factory at its most basic level. Genetics (a system of chemical memes) predisposes you to be more or less sensitive, intolerant, needy, etc. of certain chemicals to keep the factory operating correctly and efficiently. Why is it so hard to understand that someone has analyzed his specific bodily needs, taking into account the general human body plan and his personal genetics, to come up with his own personalized regimen of supplements (read: supplements, not food replacements) that, by all tests and accounts, seems to be working? He's completely beaten his type II diabetes and genetically predisposed heart conditions. I doubt he'll have to worry about "dying of kidney failure at an early age" since he's 56 and biological age tests put him in the body of a 40-year-old.
Call me a Ray Kurzweil fanboy if you wish, but I'd rather be on the team of someone with a proven past and current success record.
Re:A dose of reality (Score:3, Interesting)
http://en.wikipedia.org/wiki/Outside_Context_Prob
Maybe we should invest in trying to invent gunpowder or better weapons... Or maybe ally ourselves with other tribes.
Ignoring the problem won't make the conquistadors go away.
It's only natural (Score:3, Interesting)
Can't We All Just Get Along? (Score:3, Interesting)
Re:You can call me Ray & you can call me Jay . (Score:3, Interesting)
I don't know if that is true or not, but what I know for sure is that you DO NOT go on a green tea bender if you are on birth control pills.
What he *recommends* is what matters (Score:3, Interesting)
But I have a copy of Fantastic Voyage right in front of me. What he recommends is:
But checking my own multivitamin: it has 25 items listed, because it details each of the B vitamins and each of the minerals. Technically, then, I'm taking 25 supplements a day, but that doesn't mean I'm taking 25 pills a day.
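The distinction above is just two different counts over the same label. A minimal sketch, with hypothetical ingredient lists and pill counts:

```python
# Counting "supplements" (listed ingredients) vs. actual pills swallowed.
# The regimen below is hypothetical, for illustration only.

multivitamin = {
    "pill_count": 1,
    "ingredients": ["A", "C", "D", "E", "B1", "B2", "B6", "B12",
                    "calcium", "magnesium", "zinc", "iron"],
}
fish_oil = {"pill_count": 2, "ingredients": ["omega-3"]}

regimen = [multivitamin, fish_oil]

supplements = sum(len(item["ingredients"]) for item in regimen)
pills = sum(item["pill_count"] for item in regimen)

print(supplements)  # 13 distinct supplement items on the labels
print(pills)        # but only 3 pills a day
```

So a headline number like "25 supplements" says little about how many pills, or how extreme a regimen, is actually involved.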
Re:Human rights for artificial lifeforms? (Score:3, Interesting)
We routinely mistreat animals in ways that are almost too horrible to describe. I'm not even talking about killing them for meat or similar products: we kill them brutally, slowly, and painfully [umweltjournal.de], we kill them just for the fun of it [wikipedia.org], for the perverse pleasure of having absolute power over another being, and in fact we have driven thousands of entire species to extinction already, and will most likely do the same to several thousand more.
Human rights for sentient computers are fine and dandy. But shouldn't we solve the problems we already have in today's world before we think about the problems that would arise in a hypothetical future that may or may not ever come?