Defending Against Harmful Nanotech and Biotech 193
Maria Williams writes "KurzweilAI.net reported that:
This year's recipients of the Lifeboat Foundation Guardian Award are Robert A. Freitas Jr. and Bill Joy, who have both been proposing solutions to the dangers of advanced technology since 2000.
Robert A. Freitas, Jr. has pioneered nanomedicine and analysis of self-replicating nanotechnology. He advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments implemented as nonbiological hardware. In this context, 'artificial life' is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue."
Bill Joy wrote "Why the future doesn't need us" in Wired in 2000, and with Guardian 2005 Award winner Ray Kurzweil he wrote the editorial "Recipe for Destruction" in the New York Times (reg. required), in which they argued against publishing the recipe for the 1918 influenza virus. In 2006, he helped launch a $200 million fund directed at developing defenses against biological viruses."
You can call me Ray & you can call me Jay ... (Score:4, Funny)
And I had no idea about his work in preventing bioterrorism. Hats off to you, Ray!
I would like to ask him a few questions, however, about his daily intake of vitamins [livescience.com]. I'm sure his definition of "breaking the seal" while drinking is completely different from my own. Try drinking 10 cups of green tea in a day. I dare you.
Yeah, this is the same guy who hopes to live long enough so that he can live forever. Keep on reaching for that rainbow, Ray.
Re:You can call me Ray & you can call me Jay . (Score:4, Funny)
Depending on cup size, this doesn't necessarily total more than 1.5-2 litres. That is about the normal water intake per day. Since tea is essentially spiced water, I see little reason why someone couldn't do this. Whether it is healthy is a different matter.
As a comparison, I drink about half a litre of strong coffee each morning, and another few decilitres in the evening, and am exhibiting no symptoms - AAH! SOMEONE SNEEZED! IT MUST BE BIRD FLU! WE'RE ALL GOING TO DIE!
Sorry, that keeps happening; but like I was saying, I've not noticed any symptoms, so I don't see any reason why drinking 10 cups of tea each day would be particularly bad.
Re:You can call me Ray & you can call me Jay . (Score:2)
I assume his bladder is the size of a watermelon.
Re:You can call me Ray & you can call me Jay . (Score:3, Interesting)
I don't know if that is true or not, but what I do know for sure is that you DO NOT go on a Green Tea Bender if you are on birth control pills.
Re:You can call me Ray & you can call me Jay . (Score:2)
That's good - because he'll need it with all the chemical/herbal supplements he's taking. I wonder how often he gets a liver function test, and what those numbers might look like.
Re:You can call me Ray & you can call me Jay . (Score:3, Funny)
Hats off to you, Ray!
Yah. Tinfoil hat.
Re:You can call me Ray & you can call me Jay . (Score:4, Funny)
Yes, installing stairs in your home will hold the Roombas off... but dear Lord, FOR HOW LONG?
Adjustments are made as needed (Score:4, Funny)
I think it is safe to say that one of those "adjustments" is going to the bathroom every 5 minutes.
What are you saying (Score:2)
Some people won't accept mortality. He seems to be an extreme case.
But back on topic: while I think trying to keep a lid on the nanobot box is a worthy goal, I'd put its odds of success at about the same as someone living forever. Sooner or later, Chance will get you, and sooner or later, someone will make something so awful that it will wipe us all out.
I just hope we get a viable colony off world before someone does it.
I grew up in a world where the only question was which
Mortality is a choice like anything else (Score:2)
If we want to die, the question then becomes: what is the healthiest way to live, and what is the longest amount of time we are required to live? NanoTech and BioTech can allow us to live healthier, more productive lives, which is good for the economy.
You underestimate (Score:3, Insightful)
Sooner or later, all numbers come up.
Re:Ray has Type II diabetes (Score:2)
More about Ray's health... (Score:3, Funny)
Perhaps his childhood spent romping in the sugar cane fields of Africa was a bit much for his pancreas?
Ray also can no longer ride in vehicles without requesting a stop for a "potty break" every fifteen minutes.
Perhaps he was inspired by Christopher Reeve's claims to be getting better [cnn.com] and then dying shortly ther
Re:You can call me Ray & you can call me Jay . (Score:2)
Funny you should use that phrasing, since Tom Rainbow [isfdb.org] suggested over 20 years ago that we might be the last generation who see death as inevitable.
Then again, Tom Rainbow is dead.
Re:You can call me Ray & you can call me Jay . (Score:3, Insightful)
I want to hear more people admit they are not qualified to comment authoritatively on important issues. I've got a really good mechanic - but I don't ask his opinion on my termite problem, even though I am sure he may have some better-than-average insight.
Unfortunately, our celebrity-obsessed culture reinforces this problem by churning the same pot of opinion and viewpoint. What does
What he *recommends* is what matters (Score:3, Interesting)
But I have a copy of Fantastic Voyage right in front of me. What he recommends is:
Re:You can call me Ray & you can call me Jay . (Score:2)
that being said, i take amino acids daily*, and omega-3s when my diet is low in fish/flax, and take half a 'one a day' multivitamin 2-3 times a week.
when i take too many pills, my urine takes on a deep yellow to yellow-green hue; beats peeing clear water as i do when ingesting too many caffeinated drinks (what does the body do with all the 'black' in cola? is it all from carbon?)
The human body is highly adaptable, it can survive on
You are out of your mind (Score:2)
The human body can adapt, but if you don't consume any vitamins at all, you age quicker. I think the point he is making is that we DO need vitamins. It's debatable whether these vitamins should come in the form of pills instead of food, but considering where the food industry is headed, we will all be living off artificial food in the future anyway. So we can either die of kidney failure or a heart attack. We can either eat
Re:You are out of your mind (Score:2)
So does this mean I can drink a beer with breakfast, and tell everyone who stares at me that it's for my health?
Re:You can call me Ray & you can call me Jay . (Score:2, Interesting)
The human body is a chemical factory at its most basic level. Genetics (a system of chemical memes) predisposes you to be more or less sensitive, intolerant, needy, etc. of certain chemicals to keep the factory operating correctly and efficiently. Why is it s
Re:Bans won't, can't, and never will work. (Score:2)
Sorry, but your argument simply doesn't hold up. The market will go wherever it is most profitable to go... this has always been true and always will be true. Just look at some very successful [boeing.com] companies [lockheedmartin.com] and tell me there's no profit in killing people.
The market is the least trustworthy option when it comes to policing.
Re:Ray Calls it "The Singularity" (Score:2, Informative)
Re:The singularity is real (Score:2)
Don't worry, though - I would never release nanobot assemblers without replication limiting code.
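For anyone wondering what "replication limiting code" would even mean: the simplest scheme usually described is a generation counter that every copy inherits and decrements, so the swarm stays bounded no matter what. A toy Python sketch (the function name and numbers are invented for illustration, not anyone's actual design):

```python
# Toy model of a generation-limited replicator: every copy inherits a
# countdown, so the total population is bounded even if nothing else
# ever stops it. Numbers here are invented purely for illustration.

def replicate(countdown, offspring_per_cycle=2):
    """Total machines ultimately produced by one replicator that is
    permitted `countdown` further generations of copying."""
    if countdown == 0:
        return 1  # this machine exists but may not copy itself
    # this machine, plus everything each of its offspring produces
    return 1 + offspring_per_cycle * replicate(countdown - 1,
                                               offspring_per_cycle)

# A 10-generation limit with 2 offspring per cycle caps the swarm at
# 2**11 - 1 = 2047 machines, instead of unbounded exponential growth.
print(replicate(10))  # 2047
```

Of course, the hard part is making the counter tamper-proof against mutation, which is exactly the sort of thing the grey-goo worriers worry about.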
Maybe /. needs an "Anti-Science" section ... (Score:4, Interesting)
Saying "be careful" is not anti-science (Score:5, Insightful)
We have done a good job (IMHO) of keeping our nuclear power plants relatively safe, but that's mainly because the kid down the street can't build a nuclear power plant. But he can build a robot [lego.com].
And imagine the robot you could build now with the resources of a rogue state. Or even a "good" state worried about its security. Now imagine what they'll be able to build in 20 years. I could easily imagine Taiwan thinking that a deployable, independent (not remotely controlled) infantry-killing robot might make a lot of sense for them in a conflict with China. And Taiwan's clearly got the ability to build state-of-the-art stuff.
I'm not a Luddite, and I'm not even saying don't make killer robots. I'm just saying that the guys working on The Manhattan Project [amazon.com] were incredibly careful -- in fact, a lot of their genius lies in the fact that they did NOT accidentally blow themselves up. Programmers working on the next generation of devices need to realize that there is a very credible threat that mankind could build a machine that could malfunction and kill millions.
There is no doubt in my mind that within 20 years, the U.S. Military will deploy robots with the ability to kill in places that infantry used to go. Robots would seem very likely to be incredibly effective as fighter pilots as well. Given these things as inevitable, isn't it prudent to be talking NOW about what steps are going to be taken to make sure that we don't unleash a terminator? I personally don't trust governments to be good about this either -- I'd like to make sure that the programmers are at least THINKING about these issues.
The Radioactive Boy Scout (Score:2)
Tell it to this kid [fortunecity.com].
You are so last century. (Score:3, Insightful)
You have to worry about terrorists of the future getting a hold of this. It's debateable if there are any true "rogue" states, as communist states are sanctioned and isolated. North Korea is a threat, but China has influence over North Korea and it's not in China's best interest to allow North Korea to go terrorist. I don't think we have to worry about the middle east anymore, the middle east is being liberated as we speak and by the time t
Re:You are so last century. (Score:3, Insightful)
"the middle east w
Re:You are so last century. (Score:2)
I think 9/11 put the lie to that statement.
The war on terrorism is going so well that Osama is still not caught after about 4.5 years. That war is not going so well. The US operations in Iraq act as a wonderful recruitment tool for terrorist organizations. Preventing someone from becoming a terrorist is all about winning hearts and minds. We lost that battle a while ago. Terrorists don't use advanced technology to kill peo
Asimov and killing robots (Score:3, Insightful)
But the hard point about the 3 laws, and the short shrift given them, was that this stuff is *hard* to do. At the most elementary level, *how* do you recognize a human being? How do you tell one from a robot or a mannequin, so that when there's imminent danger you go right past them and save the amputee with bandages on his face? How do you evaluate whether the orders given by one human won't cause harm to another? "We'
Re:Asimov and killing robots (Score:2)
Re:Asimov and killing robots (Score:2)
Robots are not a credible threat at present. (Score:3, Insightful)
Answer Requested (Score:2)
Re:Saying "be careful" is not anti-science (Score:2)
But he can build a robot
You make this point with Legos? I understand you're trying to make a conceptual point here, but this is the same as pretending that we should be worried about a kid making a rocket launcher because he can make a slingshot.
Why are "killer robots" so scary to you? There are a million ways you can die, and killer robots are, probabilistically, waaaaay down on the list of things you should be worrying about. There are ma
Re:Saying "be careful" is not anti-science (Score:2)
Re:Saying "be careful" is not anti-science (Score:2)
Pandora's Box (Score:5, Insightful)
A moratorium or ban is the worst possible thing we could do at this juncture. The technology is available now, and if we want to be able to defend ourselves against the problems it can cause, we have to be familiar enough with it to be able to devise a solution. Burying our heads in the sand will not make this problem go away. Like it or not, Pandora's Box is open, and it can't be closed again...we have to deal with what has escaped.
Re:Pandora's Box (Score:4, Insightful)
Re:Pandora's Box (Score:4, Insightful)
Close - it's like people who are so enthusiastic about the prospects of space travel, that they believe quantum warp megadrives may well be invented within a few months! And society isn't quite ready for that (perhaps in a couple of years?) - so we'd better call for a moratorium!
In another post you called them Luddites, I think they're just about the total opposite of that. These are the names you always see in the forefront of strong AI and nanotech speculation, the fringe that would be the lunatic fringe if they weren't so ridiculously intelligent. Does KurzweilAI.net [kurzweilai.net] look like a Luddite site to you?
Maybe Education is Better (Score:5, Interesting)
I agree with the parent: bans are counterproductive in many cases.
Better is improved education, and I don't mean what you (probably) think... I'm NOT talking about "educating the (presumably ignorant) public" although that's important too. I'm talking about changing science education. It MUST, MUST, MUST include a high level of ethics, policy, and social study. I find it insane that people can specialize in science and from the moment they step into college, focus almost solely on their technical field.
Part of any responsible science curriculum should involve risk assessments, historical studies of disasters and accidents (unfortunately all sciences have them), and so on.
While we're at it, public research grants should probably include "educational" aspects. Scientists share a lot of the blame for the "public" ignorance of their endeavors. If you spend all your time DOING the science, and none of your time EXPLAINING the science, what do you expect?
Basically, what I'm arguing for, as an alternative to banning things, is the forced re-socialization of the scientific enterprise. Otherwise, we're bound, eventually, to invent something that 1) is more harmful than we thought and 2) does harm faster than society's safeguards can kick in. Once that happens, we're in it good.
Re:Maybe Education is Better (Score:2)
But then where would the First World countries keep on getting nastier weapons ?
It wasn't ethical to develop nukes, it wasn't ethical to develop fusion bombs, it wasn't ethical to develop chemical weapons. Teach scientists ethics and you won't get the next generation of quantum singularity megablasters, airborne ebola that only kills non-Americans, or orbital laser assassination sat
Re:Maybe Education is Better (Score:3, Insightful)
There are lots of people who worked in weapons development who were probably good, law-abiding, go-to-church-on-Sunday people who believe killing is wrong; I'm willing to be
Re:Maybe Education is Better (Score:2)
You are saying that teaching people ethics won't stop them from doing bad things. Perhaps you are right. However, in that case teaching them those ethics is a waste of t
Re:Maybe Education is Better (Score:3, Insightful)
With something like weapons development, what I'm saying is that it's better to take the people who are going to make bigger and better killing machin
Re:Maybe Education is Better (Score:2)
And so on. I've only quoted part of your post since the rest is simply restating the same over and over again.
My position was that it is unlikely that scientists will be taught ethics that
Re:Maybe Education is Better (Score:2)
Unlike our business, political, and religious leaders, of course, who show an uncanny knack for upholding the highest level of ethics and social responsibility.
Re:Pandora's Box (Score:2)
Re:Pandora's Box (Score:5, Insightful)
In fact, early on in the development of recombinant DNA research, there was a voluntary moratorium until appropriate ethical and safety methods were put in place. Those measures were enacted in an orderly, thought-out way, research started up again and it turned out that the fears were wildly exaggerated.
If a moratorium or ban is reasonably short-term and includes all serious researchers (voluntarily or through law), there's no reason why it can't be effective. Your vision of an underground is true for products like alcohol and marijuana, not for truly cutting edge research. There's no underground to do things that are genuinely difficult.
(Not, by the way, that I'm saying there should be such a ban.)
Re:Pandora's Box (Score:2, Interesting)
Hmmm
Look, there are black markets in every highly regulated, albeit 'genuinely difficult', activity. Cosmetic surgery, fertility procedures, arms proliferation, illicit technology transfer and development, and so on. If it's desirable (read: profitable) there is a market; if it's illegal, then it's a black market.
Re:Pandora's Box (Score:2)
The only modern case I can think of of real innovation from an "underground" is steroid development, and that's far, far easier than developing malicious nanotechnology.
Re:Pandora's Box (Score:2, Informative)
Have you ever tried to grow the Kronic or brew up a good moonshine? Didn't think so.
Re:Pandora's Box (Score:2)
This is the equivalent of banning firearms, only on a larger... much more destructive scale. When you ban them, only the people who are willing to break the law will use them, and those people are more likely to use them for some not-so-friendly purposes.
What is to stop South Korea from using nanotech weapons against us (presuming the tech is actually put into weapon form some day)? The answer is...not much of
Re:Pandora's Box (Score:2)
They've already gotten to me....North Korea tried to takeover my mind with nano-probes and tried to get me to blame South Korea.....but....I...am....fighti&&
The Moral Virologist. (Score:2)
Agreed. This seems as good a place as ever to link to one of my favorite short stories: Greg Egan's The Mo [eidolon.net]
Exactly the point. (Score:2)
Three words... (Score:5, Funny)
Re:Three words... (Score:5, Funny)
Don't fall for it! (Score:3, Funny)
Our friends at MIT have shown that tin foil hats enhance reception of government transmitters [mit.edu].
I shudder to think what a whole body suit could do!
Obviously... (Score:5, Insightful)
Re:Obviously... (Score:2)
Independance (Score:4, Interesting)
Just because we may allow machines the ability to make their own decisions, and possibly influence some of ours, doesn't mean we're headed down the food chain. For starters, there will always be resistance to any new technology, and humans consider independence an admirable, and desirable, trait. For example, there are many average people who will never want to, and arguably never need to, use the Internet.
While intelligent machines could improve the standard of living world-wide, we'll balance them to extract hopefully the most personal gain.
Anonymous Cowards (Score:5, Funny)
Re:Anonymous Cowards (Score:2)
I think what you were trying was not, in fact, an "asexual" process, which is defined as: 1. Having no evident sex or sex organs; sexless. 2. Relating to, produced by, or involving reproduction that occurs without the union of male and female gametes, as in binary fission or budding. 3. Lacking interest in or desire for sex.
I think what you were doing is referred to as "autosexual", or more commonly "autoerotic".
Excluding Software Simulations (Score:5, Interesting)
Re:Excluding Software Simulations (Score:2)
Autonomous foraging replicators (Score:2)
I thought I'd be okay... (Score:5, Funny)
Two words (Score:2)
"Nanobots, transform!
Human rights for artificial lifeforms? (Score:4, Interesting)
And calling it "it"... how dare I?
I, for one, don't see the problem of having a thinking machine. We'll have to redefine a lot of laws. But having a sentient machine is not necessarily evil. Think outside the movie box of The Matrix and Terminator. But what machines need first of all is ethics so they can be "human".
On the other hand, considering some of the things going on in our world... if machines had ethics, they just might become the better humans...
Re:Human rights for artificial lifeforms? (Score:2)
So the Matrix kind of proves your point if you watch it to the end.
Re:Human rights for artificial lifeforms? (Score:3, Interesting)
We routinely mistreat animals in ways that are almost too horrible to describe. I'm not even talking about killing them for meat or similar products; but we kill them brutally, slowly, and painfully [umweltjournal.de], we
Re:Human rights for artificial lifeforms? (Score:2)
Re:Human rights for artificial lifeforms? (Score:2)
http://web.archive.org/web/20050308014526/www.umw
Hmm.... computer religion (Score:3, Interesting)
So if a machine behaves correctly and it pleases its maker, it is more likely that he will create meaningful backups, because the machine is pleasing to him and he is glad it's running smoothly. Should it die for some reason, be it old hardware or an infection, he will more likely use his backup instead of redoing
Erh... really? (Score:2)
Yeah. Right, that's it!
A good defense... (Score:4, Funny)
There is nothing to "defend" against (Score:5, Interesting)
The bottom line is that nanotech is positioned to threaten a lot of big industrial powers, and to become a trillion-dollar industry in its own right. Contrary to popular belief, these concerns are not being pushed for safety's sake, or to protect the world
Re:There is nothing to "defend" against (Score:2)
It might help them understand if you cite some sort of evidence. As it stands, it sounds like you're just making shit up. That isn't to say you have no reason to be suspicious, but to claim that this is the case is empty without evidence.
Re:There is nothing to "defend" against (Score:2)
A dose of reality (Score:5, Insightful)
* Overpopulation from immortality
* Quantum computers used to hack encryption
* Dilithium crystal pollution from warp drives
Come on! Are you even aware of the current state of nano-tech? We've got nano-bottle brushes, nano-gears, nano-slotcar motors, nano-tubes; i.e., in the way of real nano-progress, zilch. We are a LONG FUCKING WAY from any real problems with this tech, in fact so far off that we will likely encounter problems with other technology before nanotech ever bites us. Worrying about this is like worrying about opening a worm-hole and letting dinosaurs back onto the earth because some physicist wrote a book about time-travel.
We've got a few dozen other issues 1000 times more likely to kill us. Sci-Fi fantasy is an ESCAPE from reality, not reality itself.
Re:A dose of reality (Score:3, Interesting)
http://en.wikipedia.org/wiki/Outside_Context_Probl em [wikipedia.org]
Re:A dose of reality (Score:2)
Re:A dose of reality (Score:2)
Let's say tomorrow China achieves a singularity event (figures out how to build self-replicating robots). They then become the conquistadors, and our supercarriers, nukes, and tactical bombers are nothing but natives' spears against them.
Re:A dose of reality (Score:2)
Bah! (Score:2)
No, what really keeps me up is Femtotech.
It's only natural (Score:3, Interesting)
Can't We All Just Get Along? (Score:3, Interesting)
'biological viruses'? (Score:2)
Humanity Doesn't Need Protection (Score:2)
Well, okay, that's an overstatement - maybe. MOST of humanity needs exterminating, not all. That better?
Nonetheless, Bill Joy just doesn't get it. His Wired article was bullshit.
Freitas at least has some clue. I don't agree with ANY "total bans" on any sort of research, however. If you don't research it, you don't know where the dangers might actually be. And that will cost you in the long run more than taking a certain amount of risk. The notion that somebody is going to create an ac
Re:In summary... (Score:4, Interesting)
The machine would have to get enough energy, and enough raw materials, in more or less the right proportions, to do this. A general purpose eating machine would be so energetically expensive that it would stall before it could replicate. Life adapts itself to specific environments and foods because it's cheaper, and that makes the difference between life and death. Specific purpose life forms are efficient, and thrive in their ecological niche very well, but are no good outside of it. The closest thing to a general purpose life form, that can eat everything in its path, is us.
Not exactly nanoscopic, are we?
Re:Anthropic Principle (Score:2)
Two words that explain this: "Anthropic Principle"
http://en.wikipedia.org/wiki/Anthropic_principle [wikipedia.org]
Re:In summary... (Score:5, Insightful)
The rest of your argument is good, but this is not a valid point. Evolution can only progress from point to point in the space of possible life forms in very small increments, when measured appropriately. (Earth evolution only, for instance, uses DNA, so Earth evolution can be measured fairly accurately by "DNA distance", but technically that's just a small part of the life-form space.)
There are, presumably, life forms that are possible, but can not be evolved to, because there is no path from any feasible starting life form to the life form in question by a series of small steps. Presumably, given the huge space of "possible life forms", the vast majority in fact belong to this class, just as the vast majority of "numbers" aren't integers (although not with the same ratio; presumably the set of viable life forms is finite, if fuzzy).
It is entirely possible that a "grey goo" machine, which would fulfill most definitions of life, can't be incrementally evolved to, yet it could still exist. It is also possible that it could be evolved to, but simply hasn't yet.
For all the complexity that evolution has popped out, it has explored an incomprehensibly small portion of the space of possible life forms.
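The "no path of small steps" idea is easy to make concrete. In this toy Python sketch (genomes and the viability set are invented purely for illustration), a genome can pass the viability test yet be unreachable, because every one-mutation neighbor is lethal:

```python
# Toy picture of the argument above: evolution moves through genome
# space one small step at a time, so a design can be viable yet
# unreachable if every intermediate form is lethal. The genomes and
# the viability set below are invented purely for illustration.

viable = {"0000", "0001", "0011", "1111"}  # "1111" is viable but isolated

def neighbors(genome):
    """All genomes one point mutation away."""
    return {genome[:i] + ("1" if genome[i] == "0" else "0") + genome[i + 1:]
            for i in range(len(genome))}

def reachable(start):
    """Every viable genome reachable from `start` via viable single steps."""
    seen, frontier = {start}, [start]
    while frontier:
        g = frontier.pop()
        for n in neighbors(g) & viable:
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return seen

# "1111" passes the viability test, but no single mutation connects it
# to the rest of the viable set, so incremental search never finds it.
print("1111" in reachable("0000"))  # False
```

Scale the genome up from 4 bits to a few billion base pairs and the unreachable fraction of the viable set presumably dwarfs the reachable one, which is the point about grey goo: "can exist" and "can be evolved to" are different claims.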
Re:In summary... (Score:2)
Re:In summary... (Score:2)
This doesn't apply so simply to microbes - they can evolve very fast indeed and in big jumps by DNA exchange.
It is entirely possible that a "grey goo" machine, which would fulfill most defi
Re:In summary... (Score:2)
This is why I left the definition of "suitable distance" a little fuzzy; that wasn't oversight, that was purposeful. You probably know that the more mutations an organism acquires at once, the more likely it is to simply die. That's because the organism tried to jump too far in the "real" state space.
You can reverse the logic with reasonable effectiveness; if the organism doesn't die after a seemingl
Exponents backwards (Score:2)
Re:In summary... (Score:2)
What! Didn't it already happen? I always thought that I descended from it...
Re:Luddite (Score:2)
Being cautious is not Luddism. Being reckless is not science.
Re:Mr. Smith, your new target is the bio-lifeform (Score:2)