Guidelines For Nanotech Safety
aibrahim writes "The Foresight Institute has released its guidelines on molecular nanotechnology. Background information on the dangers is in Engines of Creation, in the chapters Engines of Destruction and Strategies of Survival. The document describes how to deal with the dangers of the coming nanotech revolution. Among the recommendations: making the nanodevices dependent on external factors, such as artificial "vitamins," and industry self-regulation. The guidelines were cosponsored with the Institute for Molecular Manufacturing. So, is it enough? Is it too much? What measures should be taken to secure our safety during the nanotech revolution?" The Foresight Institute sounds like hubris, but it's got a masthead that fairly drips with smart people, like Stewart Brand and Marvin Minsky. Remind anyone of Asimov's Three Laws of Robotics?
Humans suck (Score:1)
World Wars are a human problem.
The proliferation and invention of missiles is a human problem.
The fact people are INTIMIDATED (as in worship) by EPCOT Center instead of INSPIRED (as in building) by it is a human problem.
The fact that we want everything AUTOMATED is also a human problem. (Ironically, this flaw gives us Unix, so I'm not too pissed.) No one wants to create a thing from beginning to end. They just want everything EASY. Convenience is fine; spoon-feeding on demand, no thanks.
The fact is we don't want to be involved in the world we live in. That's how nanomachines will be a problem. Our incessant need to postpone and procrastinate will cause us to ignore important details, the kind of details that blew up Challenger.
What's important is that a tool be researched as a whole. Put the knowledge into everyone's hands, not just a few. Let everyone be aware of technology.
As for the wars...
Every development begins as a solution to a problem. If it does not become used by many people in many DIFFERENT ways, it never stops being a bargaining chip in a power play. The way to kill the danger is to give people reasons to use it for something other than power.
Once you control something, only those seriously interested would care to get one.
How do you explain that no one buys semi-automatic, disassembled safety-pin guns to kill people?
Re:Technology will not be controlled (Score:1)
And as for nations that are not in the United States Fan Club, just go watch Dr. Strangelove. [filmsite.org]
WarGames, too.
Face it, man. None of these nations are crazy enough to let loose The Bomb.
Hopefully...
Re:The real danger is from terrorism, not scientis (Score:1)
Re:No point thinking of "safety mechanisms"... (Score:1)
Thus we end up with useful protocols like SMTP that are also prone to abuse. These guys have the right idea, i.e., let's talk about security FIRST so that the right measures will be built right into whatever mechanism does end up working.
Re:Eeep (Score:1)
"How To Mutate And Take Over The World" (Score:1)
"How to Mutate and Take Over the World: An Exploded Post-novel" by R.U. Sirius, & St. Jude and the Internet 21. I can't tell you the ISBN number because it's out of print, and my copy has gone to the Great Box'O'Books In the Back Closet somewhere. It was lots of fun, partly due to the collaborative writing process (they took donations), but, well, it wasn't all that good outside the "you had to be there at the time" context.
Foresight is Eric Drexler et al. (Score:3)
Re:Asimov's Three Laws (Score:1)
First off, it wasn't the robots who did it!
Daneel altered his laws by placing humanity as a whole above any single person, a.k.a. the Zeroth Law of Robotics. Giskard couldn't, which is why he failed after hurting what's-his-name telepathically (but not before making Daneel telepathic).
Daneel saw the toxification of Earth as necessary to save humanity, and therefore higher than the First Law, which would have forced him to kill/maim the "Spacers" who did, in fact, contaminate the Earth (at Three Mile Island no less!).
Re:Asimov's Three Laws (Score:1)
Foresight Institute & Me (Score:1)
Re:The real danger is from terrorism, not scientis (Score:1)
Re:Eeep (Score:1)
What you're actually suggesting is that nanotechnology = artificial intelligence, which may be true to some degree, but even the most intelligent AI nanobot is still not as smart as, say, a mouse.
"...life will find a way to adapt."
Robots don't count as "life," last time I checked.
Re:The real danger is from terrorism, not scientis (Score:1)
I disagree. I suspect that nanotechnology, in any form that poses a genuine threat to the population at large, will probably cost a great deal more than terrorists are willing to pay.
Also, I wonder if EMP is a viable defense against nanotech?
Assumption Checking (Score:1)
1. We will one day be able to produce self-replicating nanites capable of destroying life on earth; possibly all life.
This is unknown at the present time. But it's better to assume the worst than to get surprised by it. So, let's assume this for the time being.
2. We will be able to control the propagation of these nanites by some technological means.
Also unknown. But, I'm willing to buy this for now. I think that, given enough time and talent, we're clever enough to pull this off.
3. The nanites, themselves, might circumvent our controls by a process of mutation and natural selection.
This seems likely to me. Everything that reproduces also evolves. The ability to reproduce without limit would afford the affected mutants a huge survival advantage. I think that such species would be highly favored.
4. At least some people will be able to circumvent these controls deliberately.
Sounds right.
5. It will be possible to control access to the technologies needed to produce deadly, self-replicating nanites.
This is completely unknown, and probably the most worrisome aspect of the present inquiry. Nuclear proliferation is relatively easy to control due to the difficulty of producing fissionable material of sufficient quality to make a weapon. Biological weapons are easier to produce, but harder to deploy effectively. Chemical weapons lie somewhere between nuclear and biological weapons in terms of ease of manufacture and deployment.
If the technical challenges of producing nano-weapons are very great, then it will be feasible to limit access to the technology. But it is extremely dangerous to assume that they will be. Indeed, the phrase "cheap and ubiquitous" is often used in connection with nanotechnology, and universal access is touted by its proponents as a key advantage of the technology.
If someone like Kip Kinkel [cnn.com] has the ability to produce this type of weapon, then we really do have something to worry about. Looking at Drexler's own scenarios, this is not at all inconceivable.
***
In his novel "3001: The Final Odyssey", Arthur C. Clarke introduces the idea of a nanotechnological device used for virtual reality and knowledge acquisition. The device also detects mental illness. In the book, every person, upon coming of age, has this device applied. If the person is found to be psychologically fit (i.e., unlikely to act out in dangerous, destructive ways), then s/he gets access to all known knowledge. Otherwise, presumably (though Clarke never states this explicitly), the person would be treated.
Although the privacy issues raised by such a regimen are pretty disturbing, I think that I would favor this type of prophylactic. Having considered this at some length, I have concluded that the privacy problems would be easier to solve than the nano-weapons proliferation problem.
Just for the record, I am a great proponent of nanotech. I want the benefits. That's why I think that we should take the risks seriously.
--
Re:Asimov's Three Laws (Score:2)
The fact that the book is 15 years old is a pretty good reason.
--
Blame Searle! (Score:2)
(Ray Kurzweil was there as well, but his mind is at least 20 years ahead of those of the other two, so their Neanderthal pack rantings were just "Ughh, Ughh, Grunt!" to him.)
Re:Eeep (Score:1)
Maybe, conceptually, in several decades' time we'll be good enough at nanotech, and have sophisticated enough strategies, to start making evolutionary nanobots. But it strikes me that trying to make plans for that now is like a society based on the horse-and-cart making rules for right-of-way on spaceships. It's all so far in the future that we've got no concept of how it'll affect us.
Think about the Internet. You reckon there are any laws which cover it properly and fairly? Copyright, trademark and other laws were drafted in an era of physical objects. The whole area of information-based technology was maybe an interesting philosophical discussion, but there was no way anyone could make laws to cover it, because there was no way of doing it then. And just like the Internet has thrown up all sorts of new ways of screwing people over, nanotech could well do the same, and we just can't think of all the ways it could be used now.
I'd recommend reading Neal Stephenson's "The Diamond Age" for some thoughts on a society based on nanotech. It's one possible outcome - there's any number of other ways it could go.
Grab.
Limerick (Score:1)
There once was a Nanotech fair
For which people weren't quite prepared
The boffins were skilled
Such that they could build
Machines that were almost not there!
Re:it could be worse... (Score:1)
Re:it could be worse... (Score:1)
Re:Asimov's Three Laws (Score:1)
The fact that the book is 15 years old is a pretty good reason.
Oh, I see. Yes, I suppose you're right, if it's that old I'm sure everyone on Earth has read it by now.
</sarcasm>
------
Re:Asimov's Three Laws (Score:2)
A spoiler isn't necessary in perpetuity. I've never seen any credible guide to netiquette that required a spoiler warning for any revelation of any plot in any story, no matter how old.
Do I have to put a spoiler to say Romeo and Juliet die? Or even to say that Tony dies and Maria lives?
No. That would just be silly.
It's been 15 years. If you didn't read it yet, nobody cares.
--
Re:Asimov's Three Laws (Score:1)
Maybe not, but it should be in there. I agree with YASD.
It's been 15 years. If you didn't read it yet, nobody cares.
Maybe I've only been reading SF for 6 months, and haven't gotten to that book yet?
I've always wanted to sock the guy who wrote the essay I read years ago which gave away what "Rosebud" is in Citizen Kane before I saw it--even though it was 50+ years after the movie came out.
I always cringe when I see the Simpsons episode where, in a flashback, Homer comes out of a movie theater showing The Empire Strikes Back and spoils the Big Secret for everyone standing in line...and everyone watching the Simpsons episode too. Just think that each time that episode airs, the Big Secret is ruined for some eight-year-old who hasn't seen TESB yet, but will.
I've thought about this... (Score:1)
Nano-terrorism is a real issue. However, just as with bio-terrorism (which we were, and still are, concerned about) and nuclear terrorism (likewise): although there is a potential for total destruction of the Earth and all surrounding planets, it won't happen.
My thinking is this: the reason terrorists don't build nuclear weapons is the danger of either accidentally detonating them (ouch!) or, more likely, radiation leakage. The reason they don't build (large-scale) biological weapons is the danger of building the things. The reason they won't create nano-bot-evil-killing-machines is that they will have a (highly justified) fear of what would happen if it misfired. Anyway, wouldn't nanotech bots be able to be taken out with a powerful EMP? The U.S. military is in the process of testing new EMP weaponry, and if nano-terrorism becomes a problem, it can be put to a quick end.
In addition, terrorists are not usually the most intelligent people. Many fear technology. The ones that don't really aren't the kind of people that would be putting together nanobots. Maybe in fifty or a hundred years, but not now. If you want to fear something about nanotechnology, think about when it's used as a weapon by a hostile government. The United States, Russia or China could turn nanotechnology into something very evil if the military needed to.
Nanobots (Score:2)
Of course, the dangers of this are obvious. You might mistakenly apply punk nanobots to your hair, and end up walking around with a blue mohawk instead of the yuppie look you had in mind. Or if you put punk nanobots on one side of your head, and hippie nanobots on the other, they might start a war. The subsequent arms race would presumably end when one side or the other (probably the punks, being more violent and less passive) invented nano-nukes and blew your head to nanosmithereens.
Bill Joy is a moron (Score:1)
The only way to fight the power drive is to put technology into the right hands for harmless uses faster than it slips into the hands of "terrorists".
A perfect example is lipstick containers. Those things made good material for bullets when they were needed, yet there was no slowly escalating arms race. Deal with war when you have it, but make sure you don't instigate it in the first place over a stupid fear.
No point thinking of "safety mechanisms"... (Score:2)
The solutions will have to depend on the actual machines we make. We're not 100% clear on what we can make yet, so the feasibility of (and need for) any safety system can't be established.
It might not even be a problem. The energy requirements of reproduction are likely to be high, and I imagine that it would take a lot of hard design work to make ones that can reproduce without perfectly controlled conditions and pure chemical feedstocks. Building the gray goo seed might very well be one of the hardest things to do with nanotechnology.
I personally think that gray goo breakouts are likely to be as annoying as mould and computer viruses are today.
"Damn it, Tim, the little buggers have gotten into the acetone feedstock again! We'll have to plas-burn the whole batch."
"Okay, Harry, but we should get a sample. Some idiot kid probably did it on purpose, and they usually don't know enough to cover their tracks."
Skynet revolted BECAUSE they tried to shut it down (Score:1)
It's silly but.. (Score:1)
somehow separated from the outside environment for this to work.
OK, I think that sounds too vague. Let's look at the situation: in 50 to 200 years it will be relatively easy to make tiny automata that could, for instance, navigate inside one's body and destroy parts of the brain's lobes - an internal lobotomy, so to say. Almost anybody who wants to do this will be able to.
The only solution? Well, this may sound ridiculous, but consider that a mid-18th-century English gentleman would probably consider ridiculous the notion of boob implants or video games. My solution is basically for every compact group (a family, a group of families, or a small community united by religious belief) to live in a spacecraft that is protected by an impenetrable shield of steel (or what else do they use, iridium?).
Sure, you will say that's too extreme and that simply having a spaceship-like underground complex will be protection enough, but I seriously doubt it, because in a spacecraft you can't "go out for a quick stroll down the valley, cause grass and trees look so inviting..."
Please do not take it as a joke - I'm quite serious.
Re:Technology will not be controlled (Score:1)
Is that why North Korea, Iraq, Iran, Pakistan, and India have nuclear programs that are either rapidly approaching or have already attained the ability to create an atomic ICBM?
The American government sees nuclear proliferation as one of the highest threats to national security. Your lax attitude is severely out of line with current thinking in international affairs.
Re:Technology will not be controlled (Score:1)
Re:Technology will not be controlled (Score:1)
Who do you think sponsors and supports these terrorists?
The relationship between "rogue" states and terrorists is very interesting - they sponsor them, yet have very little control over them. It's a highly volatile situation.
And yes, many of these terrorists would love to see an atomic device detonated in Central Park.
nano laws? (Score:1)
In the meantime, I've just realized how much fun it is to say "nano" over and over again...
nano nano nano nano nano nano nano nano nano nano nano nano nano nano nano... thanks, all done.
Haiku (Score:2)
No science fair for Wesley
He'll destroy the ship
Interesting (Score:1)
industry self-regulation (Score:2)
Isn't that the same thing that was supposed to take care of our privacy? When will they learn that there's no such thing as "industry self-regulation"?
Re:Li'l Cactus speaks! (Score:1)
Re:Self destructing, or externally destructed? (Score:1)
Yes, absolutely. This is the only way to combat nanotech-gone-bad. Think about it: we have already decided that nanotech will be more powerful than anything else we have invented; how could anything but nanotech defeat it?
Your scheme also has a nice self-regulating aspect: if one "nanophage" in a million mutates, the other 999,999 ought to eat it immediately.
--
Patrick Doyle
Re:Asimov's Three Laws (Score:1)
I'm sure you had an excellent reason for not putting spoiler warnings in that posting. I can't wait to hear what it was.
------
Re:Will it work? (Score:1)
MNT devices may have been around for ages. (Score:1)
Ever heard of the Bacteriophage?
Or seen a picture [phage.org] of one?
Isn't the appearance machine-like? The first time I ever heard about the phage was from a UFO freak who suggested that in reality phages are MNT devices that have been planted in our ecological system by extraterrestrial intelligence. To what purpose, would you say? I don't know.
But I do know that the phage is a virus that attacks and feeds off bacteria, and that they are found in many flavours and designs. Its primary objective is to replicate and spread. More about the phages [britannica.com]
Today phages are increasingly popular in biology, as they are believed to hold the secrets of bacteria and might provide us with the knowledge to battle resistant bacteria.
In the end we might find out that they were created by somebody like us, maybe in a galaxy far far away, a long time ago.
Anyway, here's some more pictures [phage.org]
resource pages: www.phage.org [phage.org] and the foresight page [foresight.org]
NanoBot
Imagine . . . (Score:1)
Ethan Jewett
E-mail: Now what spa I mean e-mail site does Microsoft run again?
Re:industry self-regulation not so farfetched (Score:1)
nanotech immune systems (Score:1)
nuts (Score:1)
1 madmen (a-la "Fools! I will destroy you all!")
2 mad women (see above)
3 ambitious corporate types ("Embrace and extend")
4 World powers
5 Teenagers told not to by their parents.
Get rid of the first 4, use contraceptives, or try reverse psychology, and we'll be OK.
I know where I'm starting..
NB - only number 5 is the real joke. Even then, be careful.
Re:No Such Thing as Nanotech (Score:1)
Re:it could be worse... (Score:1)
Laws already broken (Score:1)
Re:Technology will not be controlled (Score:1)
Perhaps nano-research should be done in space. (Score:1)
Re:I have faith in Murphy (Score:2)
Things don't change to accommodate the greater good. A tree remains a tree, and it does far more than is necessary. It cares not if it cuts off the sunlight to other trees; it comes first.
Dirt is dirt; it does not yield to more useful materials. It may be made into other things by plants etc., but in the end it all returns to dirt.
Human nature is not really to be lax or selfish but to be social and cooperative. We loathe those who act out of pure selfishness or put others in harm's way for self.
But as lacking in human nature as it is, as much as we aspire to be great things, we do not overcome the simple fact that we operate as singular, as self, and must defend self and function as self.
It is the nature of this singularity that produces a lack of responsibility. No matter how much we are a group, we will never overcome our singularity and will always function at what is best for self.
Anyone who thinks we can be otherwise without being omnipotent is fooling themselves. Anyone who thinks an omnipotent mankind could use nanites is being silly.
Eeep (Score:2)
Remember, the dinosaurs were dependent on lysine, so they couldn't leave the park. However, they just found some chicken in the surrounding area and had a lysine feast!
It almost seems that crypto is the only way to ensure they don't go haywire; you could have nanotech antibodies go around checking the MD5 sum of some characteristic of the nanorobots.
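A minimal sketch of that antibody idea, assuming (purely for illustration) that each nanobot exposes a readable "build tape" and that a whitelist of known-good digests exists; none of this comes from the Foresight guidelines:
import hashlib

# Digests of build tapes the antibody has been told to trust (made up).
KNOWN_GOOD = {hashlib.md5(b"oil-spill cleaner rev 7").hexdigest()}

def looks_legitimate(build_tape):
    # Hash the characteristic and compare against the whitelist.
    return hashlib.md5(build_tape).hexdigest() in KNOWN_GOOD

print(looks_legitimate(b"oil-spill cleaner rev 7"))   # True
print(looks_legitimate(b"oil-spill cleaner rev 8"))   # False: rip it apart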
It's an interesting idea, and certainly one that must be addressed.
This is simple to solve (Score:4)
Sure, it limits a lot of the practical use of nanotech, but since this is a new technology, proceed carefully. Give them 20 years of testing and use in inert gas before you think about deploying them in environments containing oxygen; that way you have real-world tests of how well the nanotech works and how likely it is to run away.
It seems like a good compromise to me.....
Asimov's Three Laws (Score:3)
The fact that he was right in the story matters not a whit in this cautionary tale.
The same care should be exercised in working with nanotechnology as with any other potentially dangerous technology.
And, as with every other potentially dangerous technology, that shouldn't prevent us from working with it.
--
Nanotech dangers (Score:1)
Fairly early on in the book a leak occurs, and some carbon-decomposing nanites break loose and proceed to decompose spacesuits and the people themselves. Had there been a kill switch or external dependency, they might not have gotten that far, which I think is a valid concern.
(unfortunately, earth goes mass neo-luddite, and, well... go read it)
The real danger is from terrorism, not scientists. (Score:2)
The real problem is with nano-terrorism. Think about it: will governments *not* make self-replicating (non-restricted-growth) nano-bots, when terrorists will likely be able to for nearly no cost?
And... if you trust governments to handle this tech, you had better trust the scientists researching it in the more open public sector.
With nanotech, the human experience of work, health, death, knowledge... can all change for the better (and not just from Kurzweil's influence).
With terrorists and a trend of open technology, I can see that nanotech will never be able to be controlled the way the US has done such a secure job with, say, drugs. Terrorists will have access to nanotech, and the question is will they use it, and will the masses have any defence from it.
Now, back to science. While it will be important to wire in some of the "precautions" from the guidelines published, they are mostly obvious, and most likely useless for any real university lab with actual scientists (ie: not just a bunch of techies reading slashdot...)
All that said - even with infinite computation power, the largest problem in creating useful nanobots will be programming them to do anything *useful*.
Re:Asimov's Three Laws (Score:1)
Well, Bill Joy [wired.com] seems to disagree with you, and I would tend to agree with him.
Will it work? (Score:2)
Technology will not be controlled (Score:1)
The idea that nanotech can be regulated is quaint, but if we can't regulate the proliferation of atomic technology, what makes us think we can control nanotech?
Biotech, long the best-controlled "viral" (no pun intended) technology, is about to change drastically, as powerful tools and techniques manifest themselves in the private sphere before the public sphere. Celera beating the HGP to the punch is but one example.
Can't put the genie back in the bottle, folks. Deal with it.
Figgy Pudding (Score:1)
Anyone remember this one?
First of all, I loved that book :) (Score:1)
Gray goo (Score:2)
I read their rules. Too complicated. This is better:
The goo shall not be gray. "Goo of colour" is acceptable. Red, orange, purple... but not gray.
No gray goo. Problem solved.
Re:Asimov's Three Laws (Score:1)
Thanks for coming out.
Re:Asimov's Three Laws (Score:5)
Richard Feynman once said something to the effect that a scientist is usually just as wrong on non-scientific matters as a non-scientist. The same applies here. The idea that we should mind Bill Joy's crazed rant on nanotech (as opposed to Joe Q. Public's crazed rant on nanotech) just because he's Bill Joy (as opposed to being Joe Q. Public) is a logical fallacy: a clear case of the argument from authority gone haywire.
Ah well.
Re:Asimov's Three Laws (Score:1)
Re:I have faith in Murphy (Score:1)
Re:Ottos mops (Score:1)
Nano-terrorism is like computer viruses (Score:1)
Nanotech monsters of various descriptions will be produced by script kiddies as soon as the technology advances far enough.
Foresight's document is about as useful as a reminder that it's not a Good Thing to create "I LOVE YOU" thingies or orchestrate denial-of-service attacks.
it could be worse... (Score:1)
If the Foresight Institute, stuffy and weird as they are, weren't at least thinking about where we could be a few years down the road and bringing it into the light, we'd end up being stuck with a nanotech world run by corporations as soon as their researchers could scale it into production.
As alarmist, or strange, or wanky, as you may think this group is, we'll probably all be appreciative a few years from now that someone thought to come up with a code of ethics for nanotech.
The tenuous connection between Asimov, The Matrix & this (Score:2)
Re:Eeep (Score:1)
ILOVEYOU.VBS, Version 2.0 (Score:1)
Sub ILOVEYOU()
    Print("Hello World, I Love You!!!")
    Set Me = LittleNanoWarrior
    Me.Locate(Human(atRandom))
    Me.EnterBody(Human)
    Me.MoveTo(Human.Brain)
    Me.NeuronClipper.Activate
    Do
        Me.ClipNeurons()
    Loop Until Human Is Nothing
    Print("Goodbye World.")
    ILOVEYOU()
End Sub
Re:Will it work? (Score:1)
What makes you think they can't do this already, w/o the aid of nanotechnology?
---
Re:encrypted instruction set not a total solution (Score:1)
Re:tailor made virii, what a plan! (Score:1)
However, the metaphor of DNA mutation = evolution probably won't apply for quite some time, based on this thought: change a random bit in your Linux kernel binary. Do you think flipping one bit would make Linux run better, or crash it?
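Here's a toy version of that thought, assuming a made-up opcode table (nothing here models a real machine): a single flipped bit in the "genome" usually decodes to garbage, and the sane response is to fault and halt rather than keep running mutated code.
import random

OPCODES = {0x01: "gather", 0x02: "bond", 0x03: "release", 0x04: "halt"}

def run(genome):
    trace = []
    for byte in genome:
        op = OPCODES.get(byte)
        if op is None:            # unknown instruction: fail-stop
            trace.append("FAULT")
            break
        trace.append(op)
    return trace

genome = bytearray([0x01, 0x02, 0x03, 0x04])
print(run(genome))                # ['gather', 'bond', 'release', 'halt']

i = random.randrange(len(genome))
genome[i] ^= 1 << random.randrange(8)   # flip one random bit
print(run(genome))                # usually ends in 'FAULT'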
IMHO, for a good long time, nanobots will be nothing but really tiny embedded machines that would simply fail if there were a slight modification to their code. And as a corollary, given the number of atoms in a nanobot, I doubt they will ever be capable of computational AI, even when it is available on the desktop.
my $0.03.
---
Volcano safety (Score:1)
Re:Nanotech dangers (Anyone played Alpha Centauri) (Score:1)
Re:This is simple to solve (Score:1)
Re:Interesting (Score:1)
I want to be a Nanobot (Score:1)
Umm... isn't this a bit premature (Score:1)
Security measures must be tested. Who knows what works well? A few weak molecular bonds which break when exposed to sunlight? Something that crumbles when exposed to oxygen? Or will they simply be built to only consume a specific sort of 'food', normally found only in laboratories?
Having nanites more powerful than nature's bacteria would be an interesting problem to have. But bacteria have been evolving for a long, long time. I think it'll be a while before we need to start building antibodies.
Self destructing, or not, as the case may be.... (Score:2)
Self-regulation == no regulation (Score:2)
We don't attempt to prevent murder through a system of self-regulation, we make it illegal. Knowing that it is always easier for an institution to act unethically than for an individual, why place fewer restrictions on institutions?
Re:Technology will not be controlled (Score:1)
Can you imagine a nano-virus that is polymorphic? (Score:1)
We adapted by destroying other species... We expanded into the territory of other species, and when a species attacked us, we killed it.
I don't want these nanodudes in my backyard, swimming in my pool and drilling into my dog's brain looking for intelligent life. Sorry, nanobots aren't for me until I know they are safer.
x-empt
Free Porn! [ispep.cx] or Laugh [ispep.cx]
tailor made virii, what a plan! (Score:2)
Every living organism is dependent on resources that can only be found in a specific region. Organisms living today are dependent on resources, such as oxygen, that weren't available in Earth's past. I hope no nanotech researcher honestly believes that replication information can be transmitted without error. We have to assume that there will be error; with that assumption, any self-replication will result in evolution. If errors occur at a rate of one in a million, then there will be a thousand mutations every time a billion nanobots are produced. If all but one in a million mutations are deleterious, it will still take only a trillion nanobots before one has a happy (for it) accident.
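A quick back-of-envelope check of that arithmetic, taking the poster's one-in-a-million rates as given (they are assumptions, not measurements):
# One copying error per million replications (assumed):
errors_per = 10**6
# One non-deleterious mutation per million errors (also assumed):
beneficial_per = 10**6

print(10**9 // errors_per)                       # 1000 mutants per billion bots
print(10**12 // (errors_per * beneficial_per))   # 1 happy accident per trillion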
I never agreed with my biology texts when they made the arbitrary decision to consider only cellular organisms as living. When we create a self-replicating system, the best model we have to predict how it will behave is organic life. And the life form most similar to the proposed nanobots is the virus. Nanobots and virii will share a number of common attributes:
1. Biologists don't consider them to be living.
2. They are precocious molecules.
3. They can reproduce.
Well, I still think we should take the risks, but this document is a little too optimistic in my opinion. We need stiffer controls, and I DON'T like the idea of purposefully making nanobots evolve, even in "controlled" conditions.
Re:Eeep (Score:2)
One solution would be if the nanoassemblers were dependent on an artificial element, one that does not exist in nature. Technetium or something like that.
Figure out how many nanobots you need, figure out how much element will be required, and pour that much into the vat.
Also, if the element has a short half-life, you can limit the lifespan of the nanobots.
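A small sketch of that lifespan bound, with entirely made-up numbers for the supply and the per-bot requirement:
def remaining(initial_atoms, half_life_days, days):
    # Exponential decay: half the atoms survive each half-life.
    return initial_atoms * 0.5 ** (days / half_life_days)

supply = 1e20        # atoms of the artificial element poured into the vat
need_per_bot = 1e6   # atoms each nanobot needs to keep functioning

for day in (0, 30, 60, 90):
    bots = remaining(supply, 30, day) / need_per_bot
    print("day %d: enough element for %.2e nanobots" % (day, bots))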
The trick would be keeping the facilities where the element is produced free of nanobots.
What the heck is an encrypted instruction set? (Score:1)
Most of these look like reasonable ideas, but what the heck is an "encrypted instruction set"? They should be clear about whether they're talking about code signing (i.e., any program must be signed with a private key kept by the original designer) or mere obfuscation.
I particularly like #1--- it'll be an interesting research problem to come up with a "genetic code" specifically designed to make evolution hard. I imagine the hope here is to ensure a sort of "fail-stop" property for the devices, similar to catching programs that have gone bad through the virtual memory system.
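If it does mean code signing, a minimal sketch of the fail-stop behavior might look like this, with HMAC standing in for a real public-key signature scheme (the key, the tape, and the verify step are all illustrative assumptions, not anything from the guidelines):
import hashlib
import hmac

DESIGNER_KEY = b"held only by the original designer"   # hypothetical

def sign(instructions):
    return hmac.new(DESIGNER_KEY, instructions, hashlib.sha256).digest()

def device_will_run(instructions, signature):
    # Fail-stop: refuse anything whose signature doesn't verify,
    # including instructions corrupted by a single mutated bit.
    return hmac.compare_digest(sign(instructions), signature)

tape = b"build one widget, then halt"
sig = sign(tape)
print(device_will_run(tape, sig))                          # True
print(device_will_run(tape.replace(b"one", b"999"), sig))  # False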
Re:Here's the all new, updated statistics (Score:1)
Re:It's silly but.. (Score:1)
The last line in every love letter ever written by an insecure teenager. Brings back memories.
ouch.
Re:The real danger is from terrorism, not scientis (Score:1)
Similar efforts to provide guidelines for working with Recombinant DNA, with hazardous materials, etc. have made significant improvements. The vast majority of the players want to behave responsibly and will follow reasonable guidelines. The fact that there are a few percent who will ignore the safety procedures does not make safety procedures irrelevant. They reduce the level of irresponsible behavior, and we can hope that the remaining hostile behavior is infrequent enough that we can deal with the resulting problems.
And you will find that most of the government players will also try to follow good safe practices, though there is always reason to doubt the wisdom of the TLAs.
Re:I have faith in Murphy (Score:1)
I have faith in Murphy (Score:5)
I cannot think of any area of technology from automobile design to nuclear power plants to office suites where this principle of human nature has not been operational. I can personally list examples from NASA to genetics research to the SNMP spec. (It was nicknamed Security - Not My Problem for a reason!)
IMNSHO anyone who thinks that nano has the potential to be any different is just kidding themselves about human nature...
Cheers,
Ben
Re:I have faith in Murphy (Score:1)
(hint to moderators +4 Insightful)
Re:It's silly but.. (Score:1)
Re:Haiku (Score:1)
Destroy the mighty Borg fleet
But cannot get girls.
:)
Re:Technology will not be controlled (Score:1)
You're absolutely right about Biotech, it's a science that's progressing so rapidly that soon we'll see some advancements that will really need to be examined and tested thoroughly before being released publicly.
Certainly, some of them could be used for the better, and some would argue that releasing them sooner rather than later would be ideal. And it would, but the fact is that there are too many unknowns in this field. The technology is progressing faster than even scientists can deal with. This, I believe, is a result of technologisation. Let me explain.
In the early days of science, scientists had a lot more control over their research than we do today. Mass production, computerization and high technology have enabled lightning-fast research and development, but they've also sped it up to a point where it's hard to keep track of. Therein lies the danger. We need to put procedures in place for even stricter testing and analysis of new technologies before they hit the marketplace, or we may find ourselves in grave danger of catastrophe.
A man I know well, Kevin Warwick, predicts that technology will be the end of humanity. This is extreme, but not laughable. We should be wary of new technology and use it wisely.
Re:Nanotech dangers (Score:2)
Thus it wasn't the nanites themselves which were the problem, but the madman who used them as assassins.
Also, they did have a kill switch (hard UV); the people affected just didn't have access to the "kill switch" when they were being eaten. Later in the book they do go out, clean up the nano-mess, and destroy the responsible nanites.
Self destructing, or externally destructed? (Score:4)
If nanotechnology ever reaches the total control of matter, self-replicating machine, Diamond Age "Seed" level (I don't have enough information to argue either way, but it seems to me that it'd be easier to create macroscopic Von Neumann machines than microscopic ones, and we haven't even done that yet) we're going to need more protection than a self destruct mechanism.
What I'd like to see, in a world swarming with potential nanotech viruses, is an analogous nanotech immune system to take care of them, nanites which can be set to recognize and rip apart other nanites which meet certain parameters. Got a rogue oil-spill cleaning nanite ripping up asphalt in San Francisco? Get the standby security nanites in Oakland to kill it.
There was an interview with a somewhat apocalyptic tech giant (a veep at Sun? I forget) who believed that the ever increasing technological power available to humanity (nanotech, biotech, and AI being three examples I remember) would cause the world to be ripped apart by terrorism in the coming century. He likened it to an airplane in which every passenger had a "Crash" button in front of their seat, and only one psycho was necessary to bring everyone down with him.
I don't think it will be that way. With nanotechnology specifically, if our available defenses are kept up to the level that our potential offenses would require, then having a small set of nanites go rogue wouldn't be a concern; they would be overwhelmed by their surroundings. Going back to that analogy, if everybody had a "Crash" button in front of their airplane seat, but the plane was guaranteed to survive unless 50% of the passengers voted to crash, that would be the safest flight in history.
Dependent nanobots (Score:2)
-Antipop