Science

Self-Assembling Nanocomputers 147

A Semi-Anonymous Coward writes: "According to this article a researcher at Harvard University has developed techniques for self-assembly of nanoscale wires that operate without resistance due to a property called ballistic conductivity. He hopes the research will provide an 'end run' around conventional top-down circuit designs, allowing much smaller, faster and more energy-efficient computers."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Since there's no resistivity, that means that calculations will be almost instantaneous, right? And it will have very low power consumption, no waste heat, and be incredibly small?

    So this sort of thing could easily mean that we could have tiny computers that run for a long time on a single battery and are ninety billion times better than anything we currently have, right?

    I just came.

    • "Since there's no resistivity, that means that calculations will be almost instantaneous, right?"

      Wrong - that would only be the case if electricity flowed infinitely fast. Think of zero resistivity as electricity flowing infinitely well (but still no faster than the speed of light). Actually, signals propagate a little slower than light in silicon, but the difference is insignificant.

      "And it will have very low power consumption, no waste heat, and be incredibly small?"

      Hopefully :-)

    • by tonyc.com ( 520592 ) on Monday November 12, 2001 @04:45AM (#2552866) Homepage
      Resistance, being futile, is not responsible for the light-speed limit for electron flow. That's Einstein's fault. However, if the circuit is considerably smaller than current designs, then all the electrical pathways get drastically shortened and processing gets faster anyway...
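
The point about shorter pathways is easy to quantify: at a fixed signal speed, propagation delay scales linearly with wire length. A quick sketch (the one-half-of-c signal speed below is a rough assumption, not a figure from the article):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_delay_s(length_m, fraction_of_c=0.5):
    """Time for a signal travelling at a fraction of c to cross a wire."""
    return length_m / (fraction_of_c * C)

# Shrinking pathways from centimetres to microns cuts delay by ~10^4,
# even though the signal speed itself never changes.
print(f"{propagation_delay_s(0.01) * 1e12:.0f} ps")  # across a 1 cm die
print(f"{propagation_delay_s(1e-6) * 1e15:.1f} fs")  # across a 1 micron path
```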

      Excuse me, I just had an image of a 55-gallon drum of these things sitting by my computer, quietly self-replicating into a Beowulf cluster of a billion-odd submicroscopic quantum computers. It could solve every computational problem currently on the books in the blink of an aibo, render all cryptography (except OTP) useless, and probably faithfully emulate the intelligence of several myriad Ph.D.'s long enough to invent a higher consciousness for itself, becoming an unimaginably transcendent cerebral being to which humans would seem as advanced as bacteria.

      And think of the Quake framerates!
    • Since there's no resistivity, that means that calculations will be almost instantaneous, right? And it will have very low power consumption, no waste heat, and be incredibly small?

      I am afraid that most of the power in modern integrated circuits is capacitive, not resistive. Though ballistic conductivity would reduce the dynamic heat dissipated by signals and eliminate the static heat in the wires, the overall difference would not be that great, as most of the power is used to switch transistors from one state to another.

      So this sort of thing could easily mean that we could have tiny computers that run for a long time on a single battery and are ninety billion times better than anything we currently have, right?
      I am afraid not, though it may take a while longer to fry an egg on a processor implementing this technology :-)

      Having said that, any reduction in static (read: useless) heat generation would increase processor speed, as you would be able to increase the voltage, and hence MOSFET switching speed, with the same overall heat generation as a processor not using this technology.

      PS: Moderators, what are you on? The parent post may be inaccurate, but it is NOT a troll.
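
The switching-power point can be made concrete with the standard first-order CMOS dynamic power formula, P = alpha * C * V^2 * f. A quick sketch (the numbers below are illustrative, not from the article):

```python
# First-order CMOS dynamic (switching) power: P = alpha * C * V^2 * f
#   alpha: activity factor, C: switched capacitance (farads),
#   V: supply voltage (volts), f: clock frequency (hertz).
def dynamic_power_w(alpha, capacitance_f, voltage_v, freq_hz):
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

# Illustrative circa-2001 numbers: 10 nF effective switched capacitance,
# 1.8 V supply, 1 GHz clock, 10% activity factor.
print(f"{dynamic_power_w(0.1, 10e-9, 1.8, 1e9):.2f} W")
```

None of this power is resistive wire loss, which is why zero-resistance wires alone would not change it much.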
    • As others point out, even ballistic conduction doesn't get you past the speed-of-light limits. You are also going to have to cool the computers down to close to absolute zero if you want ballistic conduction over very long length scales (otherwise atomic vibrations will disrupt the flow of the electrons).

      The limits on power consumption have relatively little to do with electrical resistance and a great deal to do with erasing bits. As Landauer [aeiveos.com] and Bennett [aeiveos.com] have shown, you can compute for essentially free but you have to pay a price of generating entropy (heat) when you erase bits. To achieve the really significant increases we have to move from non-reversible architectures (all current commercial computers) to reversible architectures that minimize the number of bits erased. Michael Frank [aeiveos.com] is one of the leading people working in this area.

      As Drexler [aeiveos.com] discusses in Nanosystems, using reversible rod-logic nanocomputers, one should be able to get from our current 10^9 ops/sec chips to 10^21 ops/sec in 1 cm^3 before one hits the heat removal limits. So the anticipated throughput increase is ~10^12 ops/sec, which is a trillion, vs. your estimate(?) of 90 billion. But it isn't going to run on a single battery: it's consuming (and radiating) 100,000 W. Interestingly enough, since such a nanocomputer has ~10^3-10^5 times the processing capacity of the human brain in 10^-3 times the volume, such a computer is probably worth a million or more human brains (if we can figure out how to program it...).
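
Landauer's bound is easy to evaluate: erasing one bit must dissipate at least kT * ln 2 of heat. A back-of-envelope sketch (illustrative arithmetic only; real devices dissipate far more per operation than this floor, which is where figures like 100,000 W come from):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_energy_j(temp_k):
    """Minimum heat dissipated per irreversible bit erasure: kT ln 2."""
    return K_B * temp_k * math.log(2)

e_bit = landauer_energy_j(300.0)  # ~2.87e-21 J per bit at room temperature
# Even at this absolute floor, 10^21 irreversible erasures per second
# dissipate a few watts; every real device pays far more per operation,
# which is why reversible architectures matter at these throughputs.
print(f"{e_bit:.3e} J/bit, {e_bit * 1e21:.2f} W at 1e21 erasures/s")
```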

    • Not quite, but close.

      The circuits will have zero resistance only when they are in a stable state. Your static memory will not consume power provided you don't try and read it. Unfortunately, most interesting bits of computing will involve changing the electronic states, so there will still be power consumption, and trouble getting rid of heat.

      Carbon will probably be the new silicon. It has a big band gap (about 5.5 eV in diamond), and you can make it a resistor, a semiconductor, a conductor, or a superconductor by rearranging the bonds, without doping. If we can crack the self-assembly problems, then you may get a mole of bits in a few tens of grammes of material. That may not be instantaneous calculation for no energy, but it is pretty good to be going on with.

      Making a whole computer is also possible, but this may take a little longer.

  • by Exmet Paff Daxx ( 535601 ) on Monday November 12, 2001 @01:45AM (#2552666) Homepage Journal
    But since this is a Harvard researcher [harvard.edu] being written up in the Harvard press, my hype-o-meter is on the alert. Then I read this:

    Lieber has "philosophical differences" with the industry's "top-down" approach to nanotechnology--taking big things and making them smaller. "The way to truly revolutionize the future," he says, "is to take a completely different approach: build things from the bottom up."

    Pardon me, but have these philosophical differences yielded even a working flip-flop yet? The world is littered with "proofs of concept" that are too difficult to implement. I'll admit that this technology is extremely promising, but at this highly experimental stage of development it's hardly time to go bashing the accomplishments of the semiconductor industry. Unless, of course, you're trying to drum up press for yourself.

    That said, sounds pretty cool. I'll be even more interested when they can form some basic logic circuits with it.
    • Already, his lab has produced a transistor just 10 atoms across.

      Do you read the articles, or do you just bitch?

      Daniel
    • ...when I read articles like this I tend to get really excited about the cool tech and the possibilities they offer. However, it will take a long time for Startling New Advances to make it into our daily lives. People have speculated on the suppression of technology by the government or industry -- but the truth is, it takes a long time for technology to be adopted by a manufacturer. No conspiracies necessary, just a simple fact of economics. How many garage semiconductor factories do you know of? It takes an incredible amount of resources to fund a foundry like AMD or Intel... our combined buying power influences companies like them, but only on a 5-quarter plan. Their vision is narrowed down to what works NOW, what they can afford NOW to make more money later... What they demand are deliverables, which is exactly what the Harvard article spoke about. They have created a transistor 10 atoms across. Great, they can now get funding and see what else can be done. Until a process is developed which can be reproduced with the same yield as first-generation flat panel displays -- that is, until they figure out how to make things cheaply and reliably that are fundamentally useful, with a minimal number of failures -- major manufacturers won't be going near this or any other breakthrough technology. You see, that's what funding is for! To find out if it can yield something useful.

      I am interested in how they fare with packing together multiple transistors, like one for red, another for blue, another for green... oh yeah, the resolution would be phenomenal. Might it also be possible for this to lead to display devices we pop onto our eyes like contacts?

      So yeah, it gets discouraging when you think about everything that is possible with what humans know, and compare that with what you can actually buy. Just try to think outside the 5 quarter plan.

      Remember :: It isn't illegal to dream. Dreams can become visions which guide our actions today. Together we forge the future.


      • Where's FMD? It's been completely finished for 3 years now, and no manufacturer is touching it, because they all want to support DVD and technologies approved by the movie and record companies. Technology that is too powerful to control is suppressed for as long as possible until some small company begins selling a product based on it; THEN big companies jump into the picture, because they have no choice.
    • where researchers have already created tiny logic circuits and memory--

      He reckons he has already created logic circuits.

      And there wasn't that much bashing of the semiconductor industry. Semiconductors are what's available now; what's wrong with someone saying they want to try something completely different because they think it will go further?

      And doesn't just thinking that it might be vaguely possible make you swoon?
  • I think I'm going to need a new job, sell my house, sell my stereo... Once anybody in the commercial world gets a hold of this, you know no-one will be able to afford it.
  • After a while they will just self-assemble into a quake-IV-playing machine, but without having to worry about any sort of lame CRT-based frame-displaying device. Then you will never be able to make them do any sort of useful work.

    All that technological progress... just for the ultimate game of quake. Hmm... sounds like a day at work...

    (well, if I had work, that is. I think it would get in the way of playing quake, though...)
    certron
  • by Rosco P. Coltrane ( 209368 ) on Monday November 12, 2001 @01:52AM (#2552683)
    "Another unusual property of Lieber's nanowires is ballistic conductivity"

    With a statement like that, I bet half of the Army's decision-makers are already lining up to fund these guys.

  • Since there's no resistivity, that means that calculations will be almost instantaneous, right? And it will have very low power consumption, no waste heat, and be incredibly small?

    So this sort of thing could easily mean that we could have tiny computers that run for a long time on a single battery and are ninety billion times better than anything we currently have, right?


    Sounds like magic to me. If it's too good to be true, it probably is.
    • "Any sufficiently advanced technology is indistinguishable from magic." -- Arthur C. Clarke
  • [sarcasm]
    If it just assembled itself into a beowulf cluster of multiple instances of itself ;-)
    [/sarcasm]

    Yes, that WAS lame...
  • Has someone finally designed a working Von Neumann machine?
  • Great, negligible resistance means nearly no heat, which means godawful small transistor sizes and separations. Cool! Nanotechnology is showing its potential.

    Thanks,

    Travis
    forkspoon@hotmail.com
  • by nobodyman ( 90587 ) on Monday November 12, 2001 @02:23AM (#2552745) Homepage
    I know it's jumping ahead a bit to talk about computers assembling computers (this really only talks about the assembly of wires... but it's the direction they want to go). But haven't we covered the major properties by which we define life?
    1. Metabolism
    2. Growth
    3. Reproduction
    4. Evolution

    With reproduction added to the mix, it can be argued that 3 of 4 of these benchmarks are covered. Who's to say that the fourth, evolution, wouldn't follow naturally?

    ps: Once these nano-machines develop opposable thumbs, I think we could be in trouble.

    • 3. Reproduction

      Last one to http://wirese.cx is a rotten egg!
    • Well, right, that's a decent guess.

      The problem is, you need reproduction with variation. Reproduction on its own doesn't qualify something as being life.

      Without variance in reproduction, something can never evolve. You have to remember that.
      • Programming the thing to make a random modification every 100,000,000 wires produced is not a difficult step to take....
      • Seems to me that if these are released into the environment you'd inevitably get reproduction with variation. Environmental stressors of all kinds could result in a 'faulty' reproductive act, which is essentially what any random biological mutation is - and that's the driving force of evolution.

        Max
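
The "replicate, occasionally mutate" loop the posts above describe can be sketched in a few lines (a toy illustration of reproduction with variation, nothing to do with actual nanowire chemistry):

```python
import random

def replicate(genome, mutation_rate=0.01):
    """Copy a genome, flipping each bit with a small probability."""
    return [bit ^ 1 if random.random() < mutation_rate else bit
            for bit in genome]

# With mutation_rate=0 every copy is identical and nothing can evolve;
# any nonzero rate gives selection something to act on.
parent = [0] * 32
clone = replicate(parent, mutation_rate=0.0)    # exact copy
variant = replicate(parent, mutation_rate=0.1)  # occasionally differs
print(clone == parent, sum(variant))
```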
    • With reproduction added to the mix, it can be argued that 3 of 4 of these benchmarks are covered. Who's to say that the fourth, evolution, wouldn't follow naturally?

      Using that logic, FreeBSD should be developing itself by now, since it's been able to replicate itself from source for years (make world). :-)

    • Sharks haven't evolved for ages ;) Are they not living? Visit Florida ;)
    • So, you say sooner or later we will be the ones who give birth to the Borg? ;)
    • Reproduction is not required for life. Life can be defined completely on the basis of metabolism. Both mutation and selection are required for evolution. The problem is that you have to have relatively intelligent life to figure out how to engineer metabolic components so they do not wear with time (you can avoid wear entirely at the molecular level). If you can replace or repair damaged components, there need not be any requirement for reproduction, and any evolution desired can be entirely self-directed. The fundamental problem with molecular nanocomputers is radiation damage. Decay of radioactive elements and cosmic rays provide enough energy to break molecular bonds. As a result, you need a fair amount of redundancy and majority logic to have molecular computers with reasonable lifetimes. Drexler [aeiveos.com] has covered this extensively on pgs. 154-160 of Nanosystems [foresight.org].
    • They have made evolutionary electronic devices. I seem to remember it was a New Scientist [newscientist.com] article, but I don't have a copy to hand. An FPGA was trained to differentiate between two frequencies, which was accomplished with far fewer gates than traditional designs, and the final array contained a block of gates that wasn't connected to the main block but which, if removed, stopped the device from functioning.
    • Apologies, I could not leave this one alone as it applies to most computerized things.

      Metabolism (chemical process to maintain life)
      Water Cooling?

      Growth
      Bigger case? Networking, Dual Procs?

      Reproduction
      2 computers now, 3rd 'real soon now'

      evolution
      went from Win 3.1, 95, 98SE to OS X and Linux; need I go on?

      Joking aside, I suppose if these things do fulfill their aim of making "better computers", you'll look and see a tiny G4 tower with an Alpha/Power4 chip inside.
      Ok, I lied about the joking being set aside.

      I suppose if the above happens, people will still wonder how to eject the cd, or wonder where the any key is.

      If these things become too powerful, don't worry I'm sure Win-nan-dows XP^N will ship shortly thereafter.
  • ummm...that's all.
  • ... always make me think something like, "Go, Homo sapiens, go."
  • Organic nanotech (Score:4, Insightful)

    by Overcoat ( 522810 ) on Monday November 12, 2001 @03:04AM (#2552801)
    The Israelis came up with a DNA-based nanowire a couple of years back. There's some talk on nanotech mailing lists about using ribosomes (the things inside cells that assemble proteins from instructions encoded in RNA) as organic nano-assemblers. Theoretically (once someone figured out how to code RNA to produce the right molecules), the ribosomes could be used as self-assemblers to churn out miles of organic nanowire. You could even code ribosomes to assemble other ribosomes, thus exponentially increasing output. The only costly part would be the (gold) electrodes.
    • You need to study a bit more molecular biology. Ribosomes can turn out the protein components of ribosomes, but you will need RNA polymerase to turn out the RNA subcomponents of the ribosomes.

      While ribosomes are indeed nano-assemblers, they are limited to assembling proteins with the 20 natural amino acids. Scientists are working on extending the genetic code, but it's going to be a rough road. The problem with DNA- or protein-based wires is getting them to self-assemble into functional systems. Nadrian Seeman, one of the few scientists who has actually built stuff out of DNA, has said that when you throw a few million long molecules equipped with self-assembly properties into a test tube, the problem is keeping them from assembling into a tangled ball of string.

  • by HalfFlat ( 121672 ) on Monday November 12, 2001 @03:15AM (#2552810)
    There's certainly a lot to be said for the 'bottom-up' approach to nanotechnology. Cost, for starters! One issue, though, is how one addresses these very tiny devices.

    The problem with a whole bunch of identical tiny circuits is of course that they're all identical - there's no way to differentiate between them. There will have to be some way of distinguishing and interacting with these units.

    A couple of ideas spring to mind though. One is to encode the position of one of these units in the unit itself as it is being assembled, by having it interact with some sort of precisely engineered field. What would work (if anything) depends very much on the chemistry, but it could be anything from a gradient in an electrostatic field to alignment with a very fine grid of polarized light. There are options, but it all sounds Hard. Schemes like this could attack the problem of differentiation, but there's still interaction and addressing.

    One way to solve the addressing problem is to bypass it almost entirely. If these structures are sufficiently small, and can be engineered to act as a giant grid of finite-state automata with evolution rules based on neighbouring states, one can simulate a computational device with a version of Conway's Life on speed. Input and output can be done at the edges of the constructed array, which is probably going to be simpler than trying to address the middle of the structure. The problem lies in initialising the state of the array - clearing it is probably easy enough, depending on how state is stored, but priming it with a state that admits the desired computational task seems to be almost as hard as addressing the cells in the first place.
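
The "giant grid of finite-state automata" idea is exactly what Conway's Life implements, and Life is known to be computation-universal, so a reliable physical grid of such cells would in principle suffice. A minimal sketch of one update step (a toy model, not tied to any particular nanodevice):

```python
from collections import Counter

def life_step(live):
    """One update of Conway's Life; `live` is a set of (row, col) cells."""
    # Count how many live neighbours each candidate cell has.
    neighbours = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2: two steps bring it back.
blinker = {(1, 0), (1, 1), (1, 2)}
print(life_step(life_step(blinker)) == blinker)  # -> True
```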

    Another approach might be to give each cell some random state as it is constructed (and there should be plenty of sources of randomness at the molecular level to draw on). Imagine that this state corresponds to an "activation key": when an appropriately modulated high-frequency EM signal hits the cell, it pushes it over into an active state. Before this, it's effectively off (perhaps an off cell would simply propagate signals from its neighbours and do no computation). Give each cell some way of indicating that it has been activated (e.g., it emits some light upon activation), and then fire random keys at the cells. This solves the addressing problem, and the interaction problem (one could use the same key for changing the cell's state) - but then one has no easy way of telling how the newly identified cell connects to the other addressable cells.

    Do any slashdotters have any ideas? Or can point to literature where these problems are (ahem) addressed?
    • Interestingly enough, people at both Bell Labs and IBM are working on these issues. Here is a URL [nanodot.org] for the discussion of the Bell Labs progress on Nanodot [nanodot.org]
    • The problem with a whole bunch of identical tiny circuits is of course that they're all identical - there's no way to differentiate between them. There will have to be some way of distinguishing and interacting with these units.

      Is this not the same as neural networks, where tiny identical circuits with the ability to store their current state are joined by fixed connections (which could be nanowires)? This could be a highly efficient way of building neural networks, by having each node physically manifested as a circuit.

      The problem lies in initialising the state of the array

      That is nearly an identical problem to hardware neural networks; to solve it, a method of training is required (though getting a neural net to function exactly like a digital computer is not easy).

      Nice ideas from you anyway - they may work; I will have to think about them.
    • Just one problem with your "polarized light" and "modulated HF signals": their wavelengths exceed even today's structure sizes. Guess why people are working on UV lithography: visible light has too long a wavelength (hundreds of nanometers). "Normal" HF has an even longer wavelength (e.g., 3 GHz = 10 cm; remember, visible light is several hundred THz). If you want to use EM radiation for small (1 nm) structures, you have two options: use near-field optics (keyword: SNOM, Scanning Near-field Optical Microscope) or use X-rays.
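
The wavelength arithmetic above is easy to check with lambda = c / f (illustrative frequencies):

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength for a given frequency."""
    return C / freq_hz

print(f"{wavelength_m(3e9):.3f} m")            # 3 GHz microwave: ~0.1 m
print(f"{wavelength_m(500e12) * 1e9:.0f} nm")  # ~500 THz visible: ~600 nm
# Both are enormous next to a 1 nm structure, hence near-field optics
# or X-rays for addressing at that scale.
```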

  • Well, aside from the obvious environmental and geo-political implications of self-replicating machines - there is another important aspect to such machines: copyright enforcement.

    Just as magical as it would be to make a stable batch of these machines that would reliably work (even in laboratory conditions), the thought of how these things could possibly be kept from being altered or copied ad infinitum seems equally implausible.

    What methods might work?

    Making the construction materials out of some "special" molecules? Not likely to keep people from making unauthorized copies for long, plus it makes engineering potentially more difficult.

    Adding extra logic to each one to ensure legality? Again, aside from the engineering aspects, it is hard to even brainstorm minimally plausible ideas.

    Harsh legal enforcement? The sheer convenience of these micromachines would ensure demand is high enough to bypass any law short of complete totalitarianism based on the product. This would be more than yesterday's computer, internet, or cell phone demand - once applications development hit mainstream programming, and then mainstream consciousness, the demand would be orders of magnitude higher than anything we've seen.

    The only reliable way I could think of to make these machines properly profitable would be to use societal paranoia and fear to convince everyone that these machines are dangerous, and only sell them to 'licensed technicians for clean-room-only use'. But this protection of profitability would only last so long before demand crept back up, or some major catastrophe renewed the fear factor.

    Everything about this sounds like it might make a good story though.

    :^)

    Ryan Fenton
  • Robots that can assemble themselves sound great and all...

    but can they disassemble themselves and put their parts into the correct bins before it's time for bed?
  • Quite frankly, electronic equipment that can re-arrange itself without any outside help scares me. [imdb.com]
  • Nanobots set up for destruction: stick a couple of million of those into the water supply, and there go the people.... Don't you guys think nanobots could be used as an efficient weapon?
  • still (Score:1, Interesting)

    by Anonymous Coward
    Optical-based circuits are practical, affordable, and constructable with current technology.

    Best of all, rudimentary optical circuits have been used for some computing components, such as the orange Macro PowerBook accelerator, HUGE-number decompilers, etc.

    On top of all this, they could use 90% of the light spectrum, thus allowing for circuits approaching the speed of light PER spectrum.

    No need for nanites, other than style points, then.
  • T-1000 here we come!
  • Every hardware-scale-advance news article will describe Moore's first law.
  • .. these can make 7of9 :)
  • This article somehow makes me think of the Borg in the Star Trek series. The Borg apparently use some sort of self-replicating nanobots/nanocircuitry to control their drone systems.

    I remember that in one Voyager episode, the Ferengi attempted to lure Voyager into some sort of wormhole in order to kill the crew and get hold of 7of9's Borg nanobots, since they were extremely valuable. ... okay. No more Trek. :)
  • Now that we can build each other, it will never end...
  • <sarcasm>
    Wouldn't having identical reproducing robots be a violation of the DMCA? Wouldn't one copyrighted robot be plenty?
    </sarcasm>
  • These things could be great for storage: if you run out, just tell them to grow some more and use them to store your data. The only problem would be security :)
  • Self assembly is how the body builds a lot of its internal structures. I did a bunch of work on this in my doctorate - basically you can get some reasonably complex structures (e.g. a virus shell) from a small set of repeating sub-units.

    One of the common structures found in all cells is the 'micro-tubule' - a long cylinder made of repeating tiles of a protein called, imaginatively, 'tubulin'. They look a bit like a coil of rope; technically, the most common form is a '4-start, 13-unit helix'.

    Now the place these protein structures are found *most* commonly is in neurons, which are crammed to the gills with these things. And there is a (way-out, whacky, widely discredited, completely batshit, but still very cool) theory that the way our brains actually work is not just at the synapse level, but at the sub-cell level using these microtubules. (This would add maybe another 5 orders of magnitude to the available computing power of the brain if it were true; these suckers are small and there's *heaps* of 'em!).

    The idea (and it keeps cropping up in papers 'cause it's just so appealing :-) ) is that computations can be done using a 'game of life'-like system of electrical charges on the outside of the microtubule, where each unit adopts an electric polarity and then 'flips' its neighbours depending on a simple set of rules. It's a very cute idea, completely lacking in anything so crass as experimental evidence.

    These days of course no one believes a word of it.

    <false modesty>For some dodgy work on nanoscale self-assembly, and for some half decent pictures of microtubules, check out my thesis at nanoscale simulation [pegacat.com] </false modesty>

  • Self-assembly is very cool. Unfortunately this isn't an example.

    He mixes the components together but then pours them onto a matrix. Then he mixes the next one and pours that onto the previous one. So it's still cool, but not "self-assembling".


    Self-assembling structures like proteins and DNA do exist, and are more useful. DNA is an example of a structure which includes positional info (i.e. addressing) which an earlier poster indicated would be important.

    Likely a cell is a good example of an ideal machine. It's very complex, but it includes power source, self-maintenance and assembly. These little parts he's building (they're not even "machines" yet) don't address these issues.
