Facing the Dangers of Nanotech 172
bethr writes "Technology Review has a Q&A with Andrew Maynard, the science advisor for the Woodrow Wilson International Center's nanotechnology project, regarding the dangers of nanomaterials and why we have to act now." From the article: "Individual experiments have indicated that if you develop materials with a nanostructure, they do behave differently in the body and in the environment. We know from animal studies that very, very fine particles, particles with high surface area, lead to a greater inflammatory response than the same amount of larger particles. We also know that they can enter the lining of the lungs and get through to the blood and enter other organs. There is some evidence that nanoparticles can move into the brain along the olfactory nerve, so this is completely circumventing the blood-brain barrier."
But, but, but... (Score:1, Interesting)
Didn't I just read something about ancient swords [slashdot.org] using nanotubes?
Uh... that's f*cked up. (Score:5, Interesting)
So the best way to make more nanites is to have the nanites make more of themselves. Seems pretty straightforward... only every time we go about doing it we run into this little problem.
Mutations.
So we build these guys to start replicating and to stop replicating when we want them to... but when you make a billion of something you end up with some odd mutations. Even if you are talking about
Released, this nanite could theoretically convert the earth (see "grey goo") into a giant ball of itself.
Now I know this thread is going to be long, because so many of you very smart people will have so many smart ideas about how to make this safe. I'm glad you have these ideas and I'm glad you're voicing them. Some of them might even work.
What scares the hell out of me is that you're not the people working on this.
Re:I smell nanoparticles... (Score:5, Interesting)
Great idea to treat brain cancer too.
The idea is to modify certain magnetic nanoparticles so that they can attach to the cancer cells. Then, by applying a vibrating magnetic field, we can make the nanoparticles vibrate and generate heat. As a result, the cancer cells get killed and the number of affected healthy cells is very small.
But, I think I need a tin foil hat.
Re:You know what this means... (Score:2, Interesting)
Re:Poor logic.. (Score:3, Interesting)
The founding fathers were activists, as were many of the people who caused change.
Just thought you might like to know that.
Re:Scale matters, and so does hype (Score:4, Interesting)
And that's exactly the point - slow down, cowboy, until you have some idea of what you're doing. The recombinant DNA restrictions worked exactly as designed - people slowed down a bit, studied potential downsides, and worked on mitigation strategies (P-level confinement - now widely used in our War on Terrorism(R)(TM)(Patent Pending by Johnson's Wax)).
Hopefully real nanotechnology will turn out to be more than marketing and venture capital hype, but it behooves us to look at potential pitfalls as well as potential progress. Besides, you should be able to get some pretty good anti-terrorism funding by doing that kind of research these days.
Re:Uh... that's f*cked up. (Score:3, Interesting)
However, in the spirit of brainstorming, it seems that if you create enough redundant and functionally diverse systems in the nanomachine to check itself out, and then have it destroy itself if it doesn't check out correctly, mutations would become statistically impossible. A single bot being assembled in which all 15 self-validation/autodestruct mechanisms are broken is incredibly unlikely, even considering the number of mutations, and all that's needed is for one mechanism to function correctly to eliminate the problem.
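The arithmetic behind that claim is easy to sketch. The numbers below (a per-mechanism failure rate of 1e-4 and the 15-mechanism count from above) are illustrative assumptions, not measured values, but they show why independent redundancy scales so well:

```python
# Sketch: chance that EVERY redundant safety mechanism is broken in one bot,
# assuming each of 15 mechanisms fails independently with probability 1e-4.
# Both numbers are illustrative assumptions, not real failure data.
n_mechanisms = 15
p_fail = 1e-4

p_all_fail = p_fail ** n_mechanisms     # all 15 broken in a single bot, ~1e-60

# Even a mole-scale swarm expects essentially zero fully-broken bots:
swarm_size = 6e23
expected_bad_bots = swarm_size * p_all_fail   # ~6e-37, i.e. effectively none

print(p_all_fail, expected_bad_bots)
```

The catch, of course, is the independence assumption: a single design bug shared by all 15 mechanisms defeats the whole calculation, which is the poster's next point.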
Still, though - at this point we're talking about programming, and everyone knows that with programming come bugs, one of the most common being, coincidentally, the infinite loop.
Re:Uh... that's f*cked up. (Score:3, Interesting)
First of all, self replication should only be attempted after many years of successful nanotechnology, if at all. It's much safer to have two or more types of nanobots that can produce only the next type in a cycle, but not themselves. This lowers the probability of run away replication, because any point in the chain can be disabled. Having choke-points or environmental controls on reproduction is also a good idea.
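A toy model makes the choke-point argument concrete. In this sketch (the two-type cycle and one-offspring-per-step rule are invented for illustration), type A builds only type B and type B builds only type A, so disabling either type turns exponential runaway into, at worst, linear production of inert bots:

```python
# Toy model of cyclic replication: A builds only B, B builds only A.
# Disabling either type is a choke point that kills exponential growth.
def step(pop, enabled):
    a, b = pop
    new_a = b if enabled["B"] else 0   # each working B builds one A
    new_b = a if enabled["A"] else 0   # each working A builds one B
    return (a + new_a, b + new_b)

# Both types enabled: population doubles every step (runaway).
pop = (1, 1)
for _ in range(5):
    pop = step(pop, {"A": True, "B": True})
print(pop)    # (32, 32): exponential growth

# B disabled: A count is frozen, and the B bots A produces are inert.
pop2 = (1000, 1000)
for _ in range(5):
    pop2 = step(pop2, {"A": True, "B": False})
print(pop2)   # (1000, 6000): growth collapses to linear, dead-end output
```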
Probably the single biggest safety measure for individual nanobots is lots of redundancy and cross-checking. Every nanobot should be a collection of independent modules, all of which must cooperate in order to complete any task. Additionally, each module should be able to trigger a shutdown of the entire nanobot if inconsistencies arise. Self repair should be avoided at all costs, because it is much safer for working nanobots to disassemble the broken ones and build new ones than to allow random changes to evolve within a self-repairing and self-replicating system. Cryptography will probably also play a large part, because traditional error checking will not be adequate to detect every error in trillions of nanobots, each executing trillions of instructions a second. Additionally, encrypting communication between modules, and even instructions and data in memory, will serve as protection against intelligent hacking attempts at modifying the internal state of the nanobots.
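In software terms, that "any module can veto the whole bot" rule looks something like the following toy sketch. The `Module`/`Nanobot` classes and the checksum scheme are hypothetical stand-ins (Python's `hash()` substitutes for a real cryptographic integrity check):

```python
# Toy sketch of the cross-checking design: every module carries an
# integrity check, and any single failed check halts the whole nanobot.
# Class names and the checksum scheme are hypothetical illustrations.

class Module:
    def __init__(self, name, state):
        self.name = name
        self.state = state
        self.checksum = hash(state)   # stand-in for a cryptographic MAC

    def self_check(self):
        # A "mutation" (changed state) no longer matches the checksum.
        return hash(self.state) == self.checksum

class Nanobot:
    def __init__(self, modules):
        self.modules = modules
        self.running = True

    def tick(self):
        # All modules must cooperate; one inconsistency triggers shutdown.
        if not all(m.self_check() for m in self.modules):
            self.running = False
        return self.running

bot = Nanobot([Module("motor", "v1"), Module("sensor", "v1"),
               Module("assembler", "v1")])
assert bot.tick()                  # all checks pass: bot keeps running

bot.modules[1].state = "mutated"   # simulate a corrupted module
assert not bot.tick()              # one bad module shuts down everything
```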
As part of the redundancy, it makes a lot of sense not to have truly autonomous nanobots, but instead require the environment to supply them with critical components, energy, or control without which they cannot function. It's much harder to make grey goo if every nanobot requires a complex chemical to operate that doesn't occur in nature and cannot be produced by the nanobot, especially if that chemical is what provides its energy to operate.
Evolution should never be allowed in the design of complete nanobots. Components can be evolved to be maximally efficient, but the overall structure and controls must be rigorously verified to ensure safe operation.
Just as an aside, the grey goo scenario has already happened at least once on Earth. It's just more of a greenish goo, with some collections of larger, un-goo-like structures.
How to hurt people, in quantity, cost-effectively (Score:3, Interesting)
Back in the eighties, a friend of mine quit a job (programmer) with a defense contractor, when he found out:
(A) The firm was making cluster bombs
(B) from dark-red plastic, because
(C) plastic isn't revealed by X-rays, and red is hard for surgeons to see during surgery.
The point was not to kill large numbers of people, but to injure large numbers of people in such a manner as to require lots of expensive medical personnel, thus winning the war by attrition.
Immoral? That's a judgement call.
Cost-effective? The defense contractor thought so.
-kgj
Re:Uh... that's f*cked up. (Score:3, Interesting)
Read what I wrote. Making individual nanobots capable of replicating themselves is a mistake. Allowing nanobot model A to build nanobot model B, and model B to build model A, is much different. You further lower the chance of a runaway scenario if you make the control channel for each nanobot type separate. Keeping nanobots A and B mostly separate from each other is even more secure.
Are you sure that you are not just being overly paranoid? Nanobots are not some disgruntled slaves just looking for an opportunity to rebel. Also, note that these things do not have much in the way of mass (think just a few million atoms at most), let alone processing power. You want these things to run AES on themselves??? So what if one nanite out of a hundred gets a bug; it probably won't last long anyway. Also note that nanobots are delicate systems, and it takes a lot of effort to get even theoretical ones which work. Having one which could still work after getting a mutation would probably earn the engineer who designed it the equivalent of a Nobel prize.
Most likely, to be of much use, nanobots will need at least as much processing power as current desktop PCs, probably more. Even if they are totally headless and controlled via wireless, it makes sense to encrypt the communications channel and make the nanobot shut itself down in case of a fault. Don't forget that not only are random mutations a concern, but also intelligent hackers trying to make the nanobots do things they weren't supposed to do, perhaps using other nanobots. The reason self repair is dangerous is that it involves autonomous self-modification, which introduces more possibilities for undetected errors in operation. For instance, the worst case is when sensors fail, causing the nanobot to believe something is broken when it's not. This leads to what the nanobot believes to be valid repairs, but which actually introduce unwanted behavior. In terms of pure numbers, *eventually* humanity is likely to produce more nanobots than there are biological cells. At that point, evolution is clearly a concern.
First off, removing autonomy defeats their purpose to a large extent. It is not really possible to use these things effectively if you have to keep them in a tank of exotic chemicals just to keep them from falling apart. Evolution probably won't come into the design of these things even if we wanted it to be there. Refer to my previous point about mutation in these things.
Modern medicines are basically just complex chemicals but can be injected into the bloodstream. It's not hard to create inert chemicals that could be used as the signaling device for nanobots in the human body.
Repeat after me until it sinks into your head: Nanobots are not out to kill me. Nanobots are not out to kill me. Nanobots are not out to kill me.
Neither are viruses, bacteria, or prions, they're just reproducing and mutating like nature intended. The side effect is that sometimes they kill us.