Facing the Dangers of Nanotech

bethr writes "Technology Review has a Q&A with Andrew Maynard, the science advisor for the Woodrow Wilson International Center's nanotechnology project, about the dangers of nanomaterials and why we have to act now." From the article: "Individual experiments have indicated that if you develop materials with a nanostructure, they do behave differently in the body and in the environment. We know from animal studies that very, very fine particles, particles with high surface area, lead to a greater inflammatory response than the same amount of larger particles. We also know that they can enter the lining of the lungs and get through to the blood and enter other organs. There is some evidence that nanoparticles can move into the brain along the olfactory nerve, so this is completely circumventing the blood-brain barrier."
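The surface-area point in the quote is easy to see with a quick back-of-the-envelope calculation. The sketch below (Python, with an assumed 1 mg of a silica-like material and illustrative particle sizes) just shows how the total surface area of a fixed mass grows as it is divided into smaller and smaller spheres:

    # Back-of-the-envelope: total surface area of a fixed mass of material
    # divided into spheres of a given radius. All numbers are illustrative.
    import math

    density = 2200.0   # kg/m^3, roughly amorphous silica (assumed)
    mass = 1e-6        # 1 mg of material, in kg

    def total_surface_area(radius_m):
        """Total surface area (m^2) if the mass is split into spheres of radius_m."""
        volume_per_particle = (4.0 / 3.0) * math.pi * radius_m ** 3
        mass_per_particle = density * volume_per_particle
        n_particles = mass / mass_per_particle
        return n_particles * 4.0 * math.pi * radius_m ** 2

    for radius in (1e-5, 1e-6, 1e-7, 1e-8):   # 10 um down to 10 nm
        print(f"radius {radius*1e9:>8.0f} nm -> {total_surface_area(radius):.4f} m^2")

Dividing the same milligram into 10 nm spheres gives roughly a thousand times the surface area of 10 µm spheres.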
  • But, but, but... (Score:1, Interesting)

    by LiquidCoooled ( 634315 ) on Friday November 17, 2006 @02:21PM (#16887570) Homepage Journal
    Haven't we had nanotechnology for ages?

    Didn't I just read something about ancient swords [slashdot.org] using nanotubes?
  • by neo ( 4625 ) on Friday November 17, 2006 @02:55PM (#16888138)
    I played a thought experiment with a very smart fellow. The goal of the experiment was to come up with a safe way to create self replicating nanites that could cure cancer. We had 1 nanite that would cure cancer, but it was, of course, slow. The goal was to create enough to heal an entire body.

    So the best way to make more nanites is to have the nanites make more of themselves. Seems pretty straightforward... only every time we go about doing it we run into this little problem.

    Mutations.

    So we build these guys to start replicating and to stop replicating when we want them to... but when you make a billion of something you end up with some odd mutations. Even if you are talking about a .01% mutation rate, that's still 100,000 self-replicating mistakes. If even one of those 100,000 mistakes is a mutation that just doesn't turn off self-replication, you now have a very bad problem (a back-of-the-envelope version of this arithmetic is sketched at the end of this comment).

    Released, this nanite could theoretically convert the earth (see "grey goo") into a giant ball of itself.

    Now I know this thread is going to be long, because so many of you very smart people will have so many smart ideas about how to make this safe. I'm glad you have these ideas and I'm glad you're voicing them. Some of them might even work.

    What scares the hell out of me is that you're not the people working on this.
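    Here's the back-of-the-envelope arithmetic mentioned above, as a quick Python sketch (both rates are illustrative assumptions, not measurements):

        # Toy arithmetic for the scenario above: how many defective copies do you
        # expect, and how likely is at least one copy that never stops replicating?
        # Both rates below are made-up assumptions for illustration.
        population = 1_000_000_000        # a billion nanites
        mutation_rate = 1e-4              # 0.01% of copies come out wrong (assumed)
        p_runaway_given_mutation = 1e-3   # fraction of mutants that lose the off switch (assumed)

        expected_mutants = population * mutation_rate
        p_runaway_per_copy = mutation_rate * p_runaway_given_mutation
        # Chance that at least one of the billion copies is a runaway replicator:
        p_at_least_one_runaway = 1 - (1 - p_runaway_per_copy) ** population

        print(f"expected defective copies: {expected_mutants:,.0f}")       # 100,000
        print(f"P(at least one runaway)  : {p_at_least_one_runaway:.6f}")  # ~1.0

    The point of the last number is that once the population is large enough, even a tiny per-copy chance of a "never stops" mutation makes at least one such copy a near certainty.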
  • by cyfer2000 ( 548592 ) on Friday November 17, 2006 @04:02PM (#16889208) Journal

    Great idea to treat brain cancer too.

    The idea is to modify certain magnetic nanoparticles so that they can attach to the cancer cells. Then, by applying an alternating magnetic field, we make the nanoparticles vibrate and generate heat. As a result, the cancer cells get killed and the number of affected healthy cells is very small (a crude heating estimate is sketched at the end of this comment).

    But, I think I need a tin foil hat.
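    The crude heating estimate, for what it's worth (all the numbers here are assumptions I'm plugging in, not figures from the article):

        # Crude magnetic-hyperthermia estimate: adiabatic temperature rise of the
        # particle-loaded region from an assumed specific absorption rate (SAR).
        # Ignores blood perfusion and conduction; values are illustrative only.
        sar = 100.0             # W absorbed per kg of the heated region (assumed)
        heat_capacity = 3500.0  # J/(kg*K), roughly soft tissue
        exposure_time = 600.0   # seconds in the alternating field (assumed)

        delta_t = sar * exposure_time / heat_capacity   # dT = SAR * t / c
        print(f"temperature rise ~ {delta_t:.1f} K")    # ~17 K for these numbers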

  • by Deoxyribose ( 997674 ) on Friday November 17, 2006 @04:19PM (#16889462)
    Neal Stephenson had the idea in his book "The Diamond Age." IIRC they were called cookie cutters and were used in prisons to discourage escape and as a method of execution. The book is one of my all-time favorites and a great read for anyone remotely interested in nanotech.
  • Re:Poor logic.. (Score:3, Interesting)

    by geekoid ( 135745 ) <dadinportland&yahoo,com> on Friday November 17, 2006 @04:30PM (#16889614) Homepage Journal
    "Whenever I hear the word activist, I reach for my revolver."
    The founding fathers were activists, as were many of the people who have brought about change.

    Just thought you might like to know that.
  • by ColdWetDog ( 752185 ) on Friday November 17, 2006 @04:55PM (#16889962) Homepage
    There was actually a voluntary suspension of recombinant DNA research for a short time back in the '70s. Everyone started doing it again when the truth became clear: recombination happens in nature all the time, and the mechanism was such that naturally occurring recombination was already doing all the things that scientists wanted to do.

    And that's exactly the point - slow down, cowboy, until you have some idea of what you're doing. The recombinant DNA restrictions worked exactly as designed - people slowed down a bit, studied potential downsides, and worked on mitigation strategies (P-level containment - now widely used in our War on Terrorism(R)(TM)(Patent Pending by Johnson's Wax)).

    Hopefully real nanotechnology will turn out to be more than marketing and venture capital hype, but it behooves us to look at potential pitfalls as well as potential progress. Besides, you should be able to get some pretty good anti terrorism funding by doing that kind of research these days.

  • by xappax ( 876447 ) on Friday November 17, 2006 @05:00PM (#16890042)
    Your point that nanotech is a high-stakes business is well taken. Just as with biotech, we should not give in too easily to the temptation and excitement of new possibilities before we have evaluated the dangers and genuinely checked our assumptions.

    However, in the spirit of brainstorming: it seems that if you build enough redundant and functionally diverse systems into the nanomachine to check itself out, and have it destroy itself if it doesn't check out correctly, mutations would become statistically negligible. A single bot being assembled in which all 15 self-validation/auto-destruct mechanisms are broken is incredibly unlikely, even considering the number of mutations, and all that's needed is for one mechanism to function correctly to eliminate the problem (rough arithmetic at the end of this comment).

    Still, though - at this point we're talking about programming, and everyone knows that with programming comes bugs, one of the most common being the infinite loop, coincidentally :)
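    The rough arithmetic on those redundant checks, assuming each check fails independently (the failure rate is made up):

        # Chance that a single assembled bot comes out with *all* of its independent
        # self-check/auto-destruct mechanisms broken, and how many such bots you'd
        # expect in a large production run. The failure rate is an assumption.
        n_mechanisms = 15
        p_mechanism_broken = 1e-4   # chance any one check is assembled wrong (assumed)
        population = 10 ** 12       # a trillion bots built

        p_all_broken = p_mechanism_broken ** n_mechanisms
        expected_uncheckable_bots = population * p_all_broken

        print(f"P(all {n_mechanisms} checks broken in one bot): {p_all_broken:.1e}")      # 1.0e-60
        print(f"expected such bots out of 1e12 built: {expected_uncheckable_bots:.1e}")   # 1.0e-48

    The obvious catch is the independence assumption: a single systematic assembly error, or a bug shared by all fifteen mechanisms, breaks the whole calculation.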
  • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Friday November 17, 2006 @05:05PM (#16890098)
    So we build these guys to start replicating and to stop replicating when we want them to... but when you make a billion of something you end up with some odd mutations. Even if you are talking about a .01% mutation rate, that's still 100,000 self-replicating mistakes. If even one of those 100,000 mistakes is a mutation that just doesn't turn off self-replication, you now have a very bad problem.

    First of all, self-replication should only be attempted after many years of successful nanotechnology, if at all. It's much safer to have two or more types of nanobots that can each produce only the next type in a cycle, but not themselves. This lowers the probability of runaway replication, because any point in the chain can be disabled (a toy version of this is sketched at the end of this comment). Having choke points or environmental controls on reproduction is also a good idea.

    Probably the single biggest safety measure for individual nanobots is lots of redundancy and cross-checking. Every nanobot should be a collection of independent modules, all of which must cooperate in order to complete any task. Additionally, each module should be able to trigger a shutdown of the entire nanobot if inconsistencies arise. Self-repair should be avoided at all costs, because it is much safer for working nanobots to disassemble the broken ones and build new ones than to allow random changes to evolve within a self-repairing and self-replicating system. Cryptography will probably also play a large part, because traditional error checking will not be adequate to detect every error in trillions of nanobots, each executing trillions of instructions a second. Additionally, encrypting communication between modules, and even instructions and data in memory, will serve as protection against intelligent hacking attempts at modifying the internal state of the nanobots.

    As part of the redundancy, it makes a lot of sense not to have truly autonomous nanobots, but instead require the environment to supply them with critical components, energy, or control without which they cannot function. It's much harder to make grey goo if every nanobot requires a complex chemical to operate that doesn't occur in nature and cannot be produced by the nanobot, especially if that chemical is what provides its energy to operate.

    Evolution should never be allowed in the design of complete nanobots. Components can be evolved to be maximally efficient, but the overall structure and controls must be rigorously verified to ensure safe operation.

    Just as an aside, the grey goo scenario has already happened at least once on Earth. It's just more of a greenish goo, with some collections of larger un-goo-like structures.
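    Here is the toy version of the two-model replication chain mentioned above. Everything in it is a made-up illustration of the idea (the builders, the feedstock, the generation counts), not a design:

        # Toy model of a two-model replication chain: model A can only build model B,
        # model B can only build model A, and every build consumes one unit of an
        # externally supplied feedstock (the choke point). Numbers are illustrative.
        def run(generations, feedstock, a=1, b=0, a_enabled=True, b_enabled=True):
            """Each generation, every A builds one B and every B builds one A,
            but only while feedstock remains and that model is still enabled."""
            for _ in range(generations):
                built_b = min(a, feedstock) if a_enabled else 0
                feedstock -= built_b
                built_a = min(b, feedstock) if b_enabled else 0
                feedstock -= built_a
                a += built_a
                b += built_b
            return a, b, feedstock

        # Growth stops dead once the external feedstock is used up:
        print(run(generations=30, feedstock=1000))
        # Disabling one link in the chain turns exponential growth into a slow trickle:
        print(run(generations=30, feedstock=10**9, b_enabled=False))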
  • by handy_vandal ( 606174 ) on Friday November 17, 2006 @06:10PM (#16890912) Homepage Journal
    a bomb that goes off and small (but not nano) pieces of jagged metal (let's call them 'shrapnel') get shot through your body at very high speed. pretty revolutionary, eh?

    Back in the eighties, a friend of mine quit a job (programmer) with a defense contractor, when he found out:

    (A) The firm was making cluster bombs ...

    (B) from dark-red plastic, because ...

    (C) plastic isn't revealed by x-rays, and red is hard for surgeons to see during surgery.

    The point was not to kill large numbers of people, but to injure large numbers of people in such a manner as to require lots of expensive medical personnel, thus winning the war by attrition.

    Immoral? That's a judgement call.

    Cost-effective? The defense contractor thought so.

    -kgj
  • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Saturday November 18, 2006 @02:12AM (#16894136)
    Without self-replication, nanobots will get absolutely nowhere. Using current tech, it takes ~ 40 years to build a functional nanobot (it needs to be done atom by atom). The only practical way of changing this is to get some microscopic workers in to help speed the work along in an exponential fashion, thus nanobots making more of themselves.

    Read what I wrote. Making individual nanobots capable of replicating themselves is a mistake. Allowing nanobot model A to build nanobot model B, and model B to build model A, is much different. You also lower the chance of a runaway scenario if you make the control channel for each nanobot model separate. Keeping nanobots A and B mostly separate from each other is even more secure.

    Are you sure that you are not just being overly paranoid? Nanobots are not some disgruntled slaves just looking for an opportunity to rebel. Also, note that these things do not have much in the way of mass (think just a few million atoms at most), let alone processing power. You want these things to run AES on themselves??? So what if one nanite out of a hundred gets a bug, it probably won't last long anyway. Also note that nanobots are delicate systems and it takes a lot of effort to get even theoretical ones which work. Having one which could still work after a mutation would probably earn the engineer who designed it the equivalent of a Nobel prize.

    Most likely, to be of much use, nanobots will need at least as much processing power as current desktop PCs, probably more. Even if they are totally headless and controlled via wireless, it makes sense to encrypt the communications channel and make the nanobot shut itself down in case of a fault (a toy sketch of authenticated commands is at the end of this comment). Don't forget that not only are random mutations a concern, but also intelligent hackers trying to make the nanobots do things they weren't supposed to do, perhaps using other nanobots. The reason self-repair is dangerous is that it involves autonomous self-modification, which introduces more possibilities for undetected errors in operation. For instance, the worst case is when sensors fail, causing the nanobot to believe something is broken when it's not. This leads to what the nanobot believes to be valid repairs which actually introduce unwanted behavior. In terms of pure numbers, *eventually* humanity is likely to produce more nanobots than there are biological cells. At that point, evolution is clearly a concern.

    First off, removing autonomy defeats their purpose to a large extent. It is not really possible to use these things effectively if you have to keep them in a tank of exotic chemicals just to keep them from falling apart. Evolution probably won't come into the design of these things even if we wanted it to; refer to my previous point about mutation in these things.

    Modern medicines are basically just complex chemicals but can be injected into the bloodstream. It's not hard to create inert chemicals that could be used as the signaling device for nanobots in the human body.

    Repeat after me until it sinks into your head: Nanobots are not out to kill me. Nanobots are not out to kill me. Nanobots are not out to kill me.

    Neither are viruses, bacteria, or prions, they're just reproducing and mutating like nature intended. The side effect is that sometimes they kill us.
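    To make the "encrypt the control channel, fail closed on faults" idea concrete, here is a grossly oversimplified sketch using Python's standard hmac module. It's an illustration of the concept only; key handling, transport, and what a real nanobot could actually compute are all hand-waved assumptions:

        # Sketch: authenticate every command to a bot with an HMAC, and treat any
        # command that fails verification as a fault that shuts the bot down.
        import hmac
        import hashlib
        import os

        SHARED_KEY = os.urandom(32)   # provisioned into the bot at manufacture (assumed)

        def sign_command(command: bytes) -> bytes:
            return hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

        class Bot:
            def __init__(self, key: bytes):
                self._key = key
                self.alive = True

            def receive(self, command: bytes, tag: bytes) -> None:
                expected = hmac.new(self._key, command, hashlib.sha256).digest()
                if not hmac.compare_digest(expected, tag):
                    self.alive = False      # fail closed: any bad command is fatal
                    return
                print(f"executing {command!r}")

        bot = Bot(SHARED_KEY)
        cmd = b"replicate:off"
        bot.receive(cmd, sign_command(cmd))        # verifies, executes
        bot.receive(b"replicate:on", b"garbage")   # fails verification, bot shuts down
        print("bot alive:", bot.alive)

    The design choice being illustrated is simply that an unverifiable command is treated as a fault, and any fault shuts the bot down rather than letting it guess.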
