IBM Creates 1st Single Molecule Computer Circuit

Llowfyr writes "Yahoo has reports that IBM researchers have created the first ever single-molecule computer circuit, which may someday lead to a new class of smaller and faster computers that consume less power than today's machines. The IBM team made a "voltage inverter" -- one of the three fundamental logic circuits that are the basis for all of today's computers -- from a carbon nanotube, a tube-shaped molecule of carbon atoms that is 100,000 times thinner than a human hair. IBM scientists will present the achievement today at the 222nd National Meeting of the American Chemical Society in Chicago, and it appears in the web edition of the ACS journal Nano Letters."
  • Geez. Didn't you guys learn anything in chemistry?

    A crystal is a single molecule. A transistor is a single molecular structure. It won't work any other way.

    --Blair
    • Crystals aren't molecules at all. Crystals are crystal lattices, which are all ionic compounds. Molecules are covalent. Which one of you was it that didn't learn anything in chemistry again?
      • Yuh-huh (Score:3, Informative)

        by blair1q ( 305137 )
        Your education is shallow and misled.

        Diamond is not an ionic compound. It is composed of carbon atoms in the optimal arrangement to form covalent bonds. Tell me diamond isn't a crystal.

        Bonds are rarely 100% covalent or 100% ionic. A crystal is a single molecule (but not all single molecules are crystals). These facts are old. Very old. Older than your textbooks. Shame on your school.

        I won't get into the semantic argument about solids, liquids, gases, and how any of them can be said to form or be formed by crystals, because that would only confuse you (and because in the more intricate cases I'm bound to forget the details and my book is in another state). Just trust that the definition of "crystal" that you are using here is very inadequate.

        Go search on a few things:

        Ionic Bond
        Covalent Bond
        Ionic Character
        _General Chemistry_, by Linus Pauling*

        --Blair
        "We teach chemistry like it's either a foreign language about a dead religion or a way to make the neighborhood kids think we're cool."

        P.S. I'd like to thank the Academy for down-modding my original post. It's always nice to see that the forces of intolerant ignorance continue to crawl the planet. It keeps an exterminator of such things in poker money.

        * - the Dover 1989 reprint of the 1954 edition [amazon.com] only costs like $14. How much did you pay for a semester of your college's misapprehensions?

        • A crystal is a single molecule (but not all single molecules are crystals). These facts are old. Very old.

          If this (and the comment before it) suggests that crystals cannot be formed by ionic bonds, then you are totally incorrect. Most crystals are formed by ionic bonds. Diamond and graphite are exceptional in this respect. The result is not "one molecule". If it is placed back into a suitable solvent it will dissolve (e.g. NaCl).

        • I'm not saying the parent is wrong... only that it's posts like this that make me wish there was a -1, asshole moderation.
        • First off, you're correct for the most part, but just to clarify this a bit for the other readers.
          The whole separation of bonds into ionic and covalent bonds is a little like dividing daylight into night and day. There is a difference, but it's _not_ a division. As with night and day, there's dusk, noon, evening and dawn and every shade in between.

          The carbon compounds like the nanotubes and other amorphous carbon structures fall somewhere between ionic crystals and covalent molecules. For example, substances like titanium carbide contain a whole bunch of indistinguishable bonds ranging all the way from fairly ionic to purely covalent.

          The way covalent and ionic bonds are taught as exclusive alternatives, like two different types of bonds, gets torn apart after high school when the bond gets looked at the way it should be: roughly, as the relaxation of the electronic wave function around nearby atoms into a stable structure at the given temperature.

          The division into covalent and ionic bonds may be a practical one (especially for people not at all interested in quantum/computational physics) for some compounds, but one shouldn't forget that there's a whole range of stuff between NaCl and Diamond.

          Modern physics is quite embarrassed that it has let the 1900s picture of lithium become its symbol, because for almost a century physicists haven't considered electrons to be orbiting balls.
    • Yeah, you're way ahead of the IBM scientists...
    • As classically understood, molecules are fundamental chemical units composed of atoms in precise amounts, types and arrangements. Molecules can't be subdivided without changing their chemical properties.

      Crystals are not molecules because their constituents need not appear in precise proportions (a water molecule, on the other hand, is ALWAYS two hydrogens and one oxygen), and because you can break them into chunks that have identical intensive chemical properties. Crystals have basic units, which are molecules or single atoms, that combine to form the crystal lattice (often with trace impurities which are important). Crystals are chemically bonded together, as are many things, but this does not make them molecules (according to the classical definition).

      Things like diamond, polymers, DNA, and nanotubes have come to challenge the bounds of what people label as molecules. Many people, news media, and some scientists have come to accept a broader conception of molecules as being any stable, complete (as in not attached to something), and strongly bonded (doesn't usually spontaneously dissociate) compound. I and others I know tend to consider this looser definition a foolish disregard for the important aspects of the previous definition.

      Knowing that what you are studying is the smallest unit with the properties you are interested in is a powerful piece of information. Similarly knowing that this basic unit requires a particular arrangement of certain atomic types grants you the keys to understanding it.

      As far as I'm concerned crystals are chemical compounds or chemical aggregates but not molecules. Same for polymers (unless the context makes it important to distinguish 40-unit from 41-unit and every other length of polymer, etc.). DNA is a molecule because every single arrangement is important to how it functions and no piece has the full chemical functionality of the whole. Nanotubes, on the face of it, seem to be polymers and thus not molecules (though I don't have enough depth in the matter to say for sure.)

      So we have the first logic process made out of a polymer, but it's not a specific molecule that does the job. I'm glad chemical bonds hold their tubes together and I'm glad they make our standard transistors possible, but chemical bonding != molecule.
  • but when can I actually buy a computer with this technology? 10 years from now?

    I like to see research of this type, but there needs to be more research with short-term effect.


    • I would want to buy one of the first nano-computers as well but I think we both would be dissapointed initially. The problem is today's machines are already over powered for what most I would be more interested in something that takes advantage of the smallness and lots of extra CPU power. As it is, today's desktops are way overpowered for most applications. My computer compiles all my code in the blink of an eye, and if you lowered the CPU speed by a few hundred megahertz, I would probably not even know the difference. What I am waiting for are nano-computers integrated into nail polish, wallpaper, and clothes, with verbal interfaces like Star Trek TNG. Wouldn't it be sweet to have your clothes download the next style automatically instead of buying new clothes, or wouldn't it be cool to say "computer, play CNN news" and have your whole wallpaper turn into a television screen playing the news?

      With embedded nano-based technology this will be a reality. I have severe ADHD, and if I can have a computer do real research with a verbal interface and advanced AI to interpret what I ask and retrieve the data, I could write a research paper in a third of the time. No more library visits! It's all retrieved for me. I love LCARS on Star Trek's Enterprise-D, where you can receive any information you want just by asking.

      My guess is the first generation of nano-desktops will be mediocre because they will run the same software as today, or Microsoft will take years to write a version of Windows for it, so it stays locked up in R&D labs for years. Kind of like IA-64 syndrome. It already runs Linux, but Intel won't release it because Microsoft is not done writing Windows for it. I guess the business world does not see reality existing outside of Windows. Sigh.

      Anyway, the extra apps like AI, verbal speech recognition, advanced clustering, pixel generation, and advanced networking would come years after the technology is out. Perhaps the GNU community can address these needs, as corporations will try to proprietize the market and exploit it for high prices.

      • "I would want to buy one of the first nano-computers as well but I think we both would be dissapointed initially. The problem is today's machines are already over powered for what most I would be more interested in something that takes advantage of the smallness and lots of extra CPU power."

        I was cutting and pasting two paragraphs and I screwed up.

        I meant to say " The problem is, today's machines are already over powered for what most people actually use them for. I would be more interested in something that takes advantage of the smallness and lots of extra CPU power."

        Boy do I feel like an idiot.

      • download the next style automatically instead of buying new clothes



        You're kidding me. You're letting some commercial tell you what to buy instead of getting what appeals to you? As soon as the commercial tells you that your clothes aren't good anymore (but last year they were a necessity), the clothes suddenly go from good to bad? What changed from last year?

    • Well, I suppose you could have been a little more short-sighted in your comment, but I am not sure how.

      Think about research in general. Intel had computers that would function at or above the 1GHz threshold nearly 10 years ago. Plastics were invented before WWII by government contractors but didn't see mainstream use until the late 40s and early 50s. And even more importantly, computers were invented by research institutions way before you could have purchased one for your personal use.

      So I wouldn't really be complaining too much. Good things come to geeks that wait.

      If everything was "invented" with only the quick time-to-market approach in mind, then we would have lots of crappy inventions with no long-term possibilities.

      Oops, my cell is ringing. (Hmmm... total mobile communication, that was available to military units in Vietnam but not publicly available until much later.)
  • Ask Slashdot: IBM Creates 1st Single Molecule Computer Circuit

    Erm, what was the question again?
    • Dude, I think it's slashdot jeopardy. We are supposed to give our answers in the form of a question.
      What groundbreaking achievement will IBM announce soon? :)

  • How does this affect the recent discussions about Moore's law? There were doubts it would hold for much longer. Are these nanotubes in the calculations?


    Well... this surely looks like another great step towards high performance computing!

    • Re:i wonder (Score:2, Interesting)

      by blair1q ( 305137 )
      Moore's law [intel.com] will hold for quite a long time, inasmuch as it's already been crocked by adapting it to apply to microprocessor computing power, when it was originally developed to describe memory-chip bit capacity.

      Once it starts to break down for silicon-transistor circuits, the "capacity" metrics will be transferred to whatever follows.

      The interesting thing about Moore's law is that it may be unprovably vague.

      Einstein posed a theorem he said he never could prove:
      If you travel from point A to point B at an average speed of v miles per hour (where B and A are more than v miles apart), there will always be an interval exactly v miles long that you will transit in exactly one hour.
      How this relates to Moore's law: if you replace distance with transistor count, then along the way you will find intervals where you have doubled the transistor count in exactly 18-24 months.

      This feature allows hypesters every once in a while to prove to themselves that "it still works" to whatever precision they desire.

      But they're not entirely dishonest, since this only works because Moore's law has a long-term stable average.

      --Blair
      • by Guignol ( 159087 )
        Well, you're traveling along your path without "jumping" blablabla,
        so you have P(t), your position on the path at time t, so that P(0)=A and P(T)=B (it took T hours to get from A to B; T>1 because B-A > v miles)
        So let's call F(t) the function that gives you at any time (in hours) the distance you are going to travel the next hour according to the trajectory P.
        Clearly, F is continuous on [0, T-1] because P is continuous on [0,T] and F(t) is P(t+1)-P(t)
        F could be always equal to v (v=(B-A)/T).
        It would mean your speed is constant during the travel and is v, so that at any time, you are going to travel v miles the next hour.
        Clearly, F can't be always less than v, because then your average speed would obviously be less than v.
        Also, F can't be always greater than v, because then, your average speed would obviously be greater than v.
        So, either F(t)=v everywhere, or we have 2 instants t0 and t1 (t0 < t1) with
        F(t0) > v and F(t1) < v (or the other way around).
        Since F is continuous on [t0, t1] there is a value t, t0 < t < t1, such that F(t)=v.
        So who is that Joe Einstein you're talking about?
        Anyway, I understand your analogy, but I don't get your whole point (seems that you wanted to say several things at once, or I'm just too tired)
        • by Guignol ( 159087 )
          Yes...
          it is:
          we have two instants t0 and t1 (t0 < t1) and t0 and t1 within [0,T-1]
          and with
          F(t0) > v and F(t1) < v
          or
          F(t0) < v and F(t1) > v
          so, since F is continuous on [t0,t1] there is a value t, t0 < t < t1, such that F(t)=v.
          So, F(t)=v has at least one solution within [0,T-1] and the question is thus answered.
          blablabla....
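The intermediate-value argument above is easy to check numerically. A minimal sketch, with a made-up trajectory P(t) for illustration (300 miles in 3 hours, so v = 100 mph, driven at a deliberately uneven pace) and simple bisection on F(t) = P(t+1) - P(t):

```python
import math

T, total = 3.0, 300.0
v = total / T                          # average speed: 100 mph

def P(t):                              # made-up uneven trajectory, P(0)=0, P(T)=300
    return total * (t / T + 0.1 * math.sin(2 * math.pi * t / T))

def F(t):                              # miles covered in the hour starting at t
    return P(t + 1) - P(t)

# F is continuous on [0, T-1] and cannot sit entirely above or entirely
# below v (else the trip average would differ from v), so F(t) = v somewhere.
# Scan for a sign-change bracket, then bisect.
ts = [i * (T - 1) / 100 for i in range(101)]
lo = next(t for t, u in zip(ts, ts[1:]) if (F(t) - v) * (F(u) - v) <= 0)
hi = lo + (T - 1) / 100
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if (F(lo) - v) * (F(mid) - v) > 0 else (lo, mid)
print(round(lo, 4), round(F(lo), 4))   # start of an hour-long stretch covering exactly v miles
```

For this particular P the hour starting at t = 0.25 covers exactly 100 miles; a different trajectory would give a different interval, but continuity guarantees one exists.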
    • How does this affect the recent discussions about Moore's law ?

      It doesn't affect them much. It's good to have a proof of concept for a nanotube NOT gate, but it leaves open questions of manufacturing and connectivity which would be crucial to creating a real technology from this kind of circuit element. I don't think anyone doubted that nanoscale gates were possible, but what is more questionable is how they can be economically assembled and effectively interconnected.

      Tim
  • by Anonymous Coward on Sunday August 26, 2001 @06:14PM (#2219356)
    ...to even see a beowulf cluster of these ;)
  • More importantly, the output signal from IBM's new nanotube circuit is stronger than the input

    Ok, lots of smart people on /. someone explain this please. Because the article sure doesn't!

    • This is possible when there's an active power supply involved in the circuit, in addition to the signal itself. The signal enters the circuit and the presence of the power supply allows for the amplification of the resulting signal.
    • Re:Gain? (Score:4, Informative)

      by dr. loser ( 238229 ) on Sunday August 26, 2001 @06:42PM (#2219408)
      Gain is a common figure of merit for transistor-based amplifier circuits. The gain of a voltage amplifier is defined as the ratio of the size of the output signal to the size of the input signal. An amplifier that takes a 0.5V-amplitude sine wave as its input and produces a 5V-amplitude sine wave as its output has a gain of 10. You don't get something for nothing, of course - the amplifier has to be connected to an external power source.

      A transistor is a three-terminal device. In a typical computer chip, these three terminals are called the source, the drain, and the gate. For a given voltage between the source and drain, the current that flows into the drain is strongly dependent on the voltage applied to the gate. That's what allows transistors to be used as switches: you can make a transistor that won't let current flow from source to drain unless the gate voltage is turned up past some value.

      Achieving actual gain in a single-molecule device is important. Without gain greater than one, it's not possible to efficiently chain large numbers of transistors together to manipulate signals. A strong input would get degraded with each stage of transistor manipulation, eventually falling to a level too small to drive subsequent transistors.

      There are *many* problems with the idea of using individual molecules to replace Si devices. Achieving a gain > 1 is a necessary but by no means sufficient step for eventual molecule-based computers. As a physicist, I think it's important to recognize real achievements in this field, but not to buy into the hype unquestioningly.
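The signal-restoration point above can be illustrated with a toy model. This is a hypothetical clipped-linear inverter, not IBM's device: each stage inverts about the logic midpoint with some gain and clips to the supply rails.

```python
# Toy model of why per-stage gain > 1 matters when chaining logic stages.

def inverter(vin, gain, vdd=5.0):
    vmid = vdd / 2
    vout = vmid - gain * (vin - vmid)   # linear inversion about the midpoint
    return max(0.0, min(vdd, vout))     # clip to the supply rails

def chain(vin, gain, stages):
    v = vin
    for _ in range(stages):
        v = inverter(v, gain)
    return v

# Start from a degraded logic "high" of 3.0 V (the ideal high is 5.0 V).
print(chain(3.0, gain=0.8, stages=20))  # sub-unity gain: decays toward 2.5 V
print(chain(3.0, gain=4.0, stages=20))  # gain > 1: snaps back to the 5.0 V rail
```

With gain below one, every stage shrinks the deviation from the midpoint until the logic level is unrecoverable; with gain above one, the clipping restores a clean rail voltage at every stage.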
  • huh... (Score:3, Funny)

    by piecewise ( 169377 ) on Sunday August 26, 2001 @06:20PM (#2219367) Journal
    Huh? I don't get it.

    Dammit. Back in my day, we had real transistors, and silicon. We made chips out of SAND, dammit! None of this molecule pish posh. I ain't never gonna use some computer made from plants. You new-age scientists sure are ungrateful...
  • Why is this exactly under "ask slashdot?"
  • by KewLinux ( 217218 )
    ...but does it run Linux?
  • This article is also on news.cnet.com [cnet.com].


    (In case you may want to check)


  • Since IBM has successfully made a NOT gate out of a single molecule, they have made about 1/3 of the progress towards realizing a complete computer system made out of molecules. In fact, if they could make NAND gates out of these nanotubes, then they would have everything they need to build a computing system, since a NAND gate is functionally complete. The question is, does this mean that in the near future the government will be able to implant invisible microchips in people for identification and tracking purposes? Is this a bad thing looming in the future?
    • The government, changing every 4 years, doesn't care about you. But some private corporations and pseudo-organizations (RIAA, MPAA, BSA, etc.) would be very interested in tracking people in the worst way possible. You should worry about these, not about a useless "government" thingy.
  • 100,000 times thinner than a human hair

    There is hope for us blonds yet.
  • The next step (Score:1, Informative)

    by Anonymous Coward
    What IBM needs to do now is make an AND gate. The output of an AND gate, fed into a NOT gate (NAND), can form the building block of any digital logic element you care to name (gates, registers, etc.). Then they need to figure out how to join them together, and get signals in and out. Then figure out whether a nanotube processor would actually be useful! :> Anyone know the theoretical switching time of these types of devices?
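The NAND-is-universal claim above can be sketched in a few lines. These are the standard textbook constructions, nothing specific to nanotubes:

```python
# NAND as a universal gate: NOT, AND, and OR built from NAND alone.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):        # NOT x  =  x NAND x
    return nand(a, a)

def and_(a, b):     # AND    =  NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):      # OR x y =  (NOT x) NAND (NOT y)   (De Morgan)
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", not_(a), and_(a, b), or_(a, b))
```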
  • I love to see the advancement of human knowledge, especially when it bodes well for making faster and smaller computers. It will probably be ten years before we see direct consumer benefit, but, hey, this research all has to be done sooner or later.
  • When, why, and because of whom did the human hair become the standard unit of distance? This IBM circuit is 10 microhairs -- great. Exactly how big is a hair again? I've got hairs of varying sizes on my scalp, my eyelids, and my lip. Which one of them is the SI standard? Is there a man at the national institute of weights and measures who is the caretaker of the reference hair?

    Just tell me how big the damn thing is in regular units: meters, angstroms, astronomical units, whatever.

    • Yep, he's the same guy who instigated the use of "football field" as a standard measurement.
    • It's 50-100 microns in diameter. Most people have no idea what a micron is, and the article is a popular press item. I suspect that is also why they mentioned the AND, OR, and NOT gates instead of simply the functionally complete NAND gate.

      Still, kudos to them for posting references to papers published by the research group.
    • Because not everyone is a techno bigot. A hair is thin. Something that is 100,000 times thinner than a hair is REALLY thin. Very easy for us human types to understand.
      • A hair is not REALLY thin.

        A typical scalp hair is about 4/1000 of an inch. A standard machine shop tolerance is 1/1000. And there is a big difference between 0.999 and 1.000 when you're trying to fit an exact 1.000 bushing into a hole. One will give you a press fit, and the other a slide fit. Four thousandths of an inch is often not even an acceptable tolerance.

        But since Joe Average has never seen a vernier caliper in his life, he's got no clue how thick a piece of paper or a human hair is. All he knows is that it's thin, and that if something's 100,000 times thinner, it's VERY thin.

        Then again, I have trouble imagining anything smaller than 1/10,000th of an inch. There are just some things that are very hard to visualize.

        bart
    • When, why, and because of whom did the human hair become the standard unit of distance? -- Jeffery Baker

      Hair diameter is a tried-and-true, reputable engineering metric. Every engineer has talked about something being a 'CH' or 'RCH' or 'BCH' too big or too small for a given application. It's therefore very natural that multiples of hair size would be used to describe other very small distances.

      Perhaps this is only true for engineers of a particular generation and older. But it's a usage with plenty of tradition. :)

      --Trevor
    • A hair is usually about 40 microns in diameter (that would be 40 millionths of a meter). I doubt the validity of the claim that the inverter is 1/100,000th of this width. 1/100,000 of 40 microns is 4 angstroms (4E-10 meters). An average atom's radius is about 1 angstrom. Are they claiming that the inverter is 4 atoms wide?


      I doubt that in the extreme. 40 atoms I might believe - perhaps some journalist made a typo / miscalculation / misquote and added an extra zero to 10,000.


      As the saying goes, "Don't believe anything you hear, and only half of what you read."

      • Just as a follow-up to my theory, I looked at the actual paper (available here: http://pubs.acs.org/journals/nalefd/asap/pdf/nl015606f.pdf ), and though I don't have the patience to actually read it, one of the diagrams shows the nanotube on the order of 50 nm, about a hundred times bigger than Yahoo's "1/100,000 of a human hair". Damn sensationalists.
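The arithmetic in this subthread is easy to sanity-check. The numbers below are assumptions for illustration (a 50-micron hair, mid-range of the estimates given above, and a ~1.4 nm single-wall nanotube diameter):

```python
# Sanity-checking the "100,000 times thinner than a human hair" figure.

hair_m = 50e-6                  # assumed hair diameter: 50 microns, in meters
claimed = hair_m / 100_000      # 5e-10 m = 5 angstroms -- roughly atomic scale
nanotube_m = 1.4e-9             # assumed nanotube diameter: ~1.4 nm

print(claimed)                  # the literal claim puts the tube at atom size
print(hair_m / nanotube_m)      # actual ratio: tens of thousands, not 100,000
```

So the literal reading of the claim would indeed make the tube only a few atoms across; with a realistic nanotube diameter the ratio comes out around 35,000, in line with the suspicion that the 100,000 figure is inflated.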
  • ...which may someday lead to a new class of smaller and faster computers that consume less power than today's machines.

    Jeez... I hope whoever decided to pursue this "smaller, faster, more efficient" idea got a raise. What a novel idea...
  • by EvlPenguin ( 168738 ) on Sunday August 26, 2001 @07:06PM (#2219452) Homepage
    I found the full paper here [acs.org] (that's http://pubs.acs.org/hotartcl/nalefd/nl015606f_rev.html for you paranoid types).

    I was just thinking - they say their NOR gate is the size of approx. 1/100,000th the width of a human hair. Well, today's 1.4 GHz chips contain ~22 million transistors. That would make it 220 human hairs wide. That's a lot of power in a small space. I can't wait till the day I can crack RC5 on my cell phone.
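The back-of-envelope above checks out for transistors strung out in a single line; packed in a square, the die would be far smaller still. A quick sketch (the 22-million transistor count and the 1/100,000-hair device size are the parent's figures):

```python
import math

transistors = 22_000_000
width_in_hairs = transistors / 100_000   # devices laid end to end
print(width_in_hairs)                    # 220.0 hair-widths in a line

# Packed in a square instead, the side is sqrt(22e6) devices:
side = math.sqrt(transistors) / 100_000
print(side)                              # under 1/20 of a hair-width per side
```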
  • The problem is: (Score:2, Interesting)

    by Tazzy531 ( 456079 )
    ...which may someday lead to a new class of smaller and faster computers that consume less power than today's machines.

    The problem is that with all this power, we still have lazy programmers that aren't writing cleaner, more efficient code, basically negating all the advances that have been made in processing technology. I mean, computers today are a million times faster than they were years ago, but do we see any major increase in speed?
    • Hehe - Moore's other law of computing:

      "The actual speed of operation is a constant."

      E.g. - for each breakthrough in transistor density and suchlike, there is some naff mathematically-nice nonsense like Java, which runs like a lame donkey after a heavy meal and is practically useless unless you can boost the power of your computer by an order of magnitude :)
      • Actually, it is perfectly possible to make really fast applications using Java (I do)... You just have to not be a dumb programmer.
        • Funny cars and top-fuel dragsters are fast. They get about one tenth of a mile to a gallon of fuel. Java is the same with CPU usage. Actually it's more like putting your Yugo on a bullet train. It'll go 300 MPH and it'll use no gas. If the JVM is written well and your machine is fast, you get a fast execution. It really doesn't make much difference what you're capable of.

          Java epitomizes lazy programmers. If you want your code fast and efficient, write your own memory management and storage structures. Write your own threading. I'm guessing you'll be compiling and linking it, not running in a virtual machine.

          And yes, I'm a Java developer. But only after almost 10 years of C/C++.
    • My first reaction to your troll is "Blow me!" But when I get beyond your asinine superior attitude I can make an intelligent comment.

      Code today is written to the specifications given and time allotted. It's done with the tools and information provided. We all know our code could be better. We know there are techniques out there by which we would benefit. The quality of the product is directly related to the financial gain and risk.

      It's just like any other business. We could be riding in ultra-safe cars, with ultra-efficient engines. We could be living in bigger homes that cost less to heat or cool. Hell, they could even make a scissors that's safe to run with. But it's not cost effective.

      And then there are those who think Java is the best computing language. You think that garbage collection is free? You think that Hashtable or Vector is an efficient way to store your information? You think the JVM isn't wasting cycles you might put to better use? You talk about lazy programmers.

  • Just wait till we have add-ons that are 1/100,000 the width of a human hair!

    Now where did I put those molecular-tweezers???
  • by los furtive ( 232491 ) <ChrisLamothe@NOSPam.gmail.com> on Sunday August 26, 2001 @09:19PM (#2219688) Homepage

    Check out the pictures and graphics [ibm.com] that IBM has made available.

    And let us not dwell on the fact that I submitted a better version of this article early in the morning, with more links than the one they decided to go with (sulking ends now).

  • Someone pointed out that IBM just needs to create a similar AND gate, as anything can be made out of AND gates and inverters. However, no one has mentioned that the same thing can be done with OR gates, as the NOR gate is a universal gate as well. In my opinion, it's more of a pain to work with, but hey, whatever works for IBM is fine with me.

    "Who is more foolish? The fool, or the fool that follows him?"
    Obi-Wan Kenobi
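The NOR construction mentioned above is the exact dual of the NAND one; a minimal sketch of the standard identities:

```python
# NOR as a universal gate: NOT, OR, and AND built from NOR alone.

def nor(a, b):
    return 0 if (a or b) else 1

def not_(a):        # NOT x  =  x NOR x
    return nor(a, a)

def or_(a, b):      # OR     =  NOT(NOR)
    return not_(nor(a, b))

def and_(a, b):     # AND x y = (NOT x) NOR (NOT y)   (De Morgan)
    return nor(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", not_(a), or_(a, b), and_(a, b))
```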
  • Three fourths of NASDAQ listed corporations announced single molecule revenues for the third quarter. Analysts insist that such predictions are hardly in good faith, and that the majority will be lucky to turn revenues amounting even to a single quark. Wall Street bravely marches on, into this blossoming nano-economy.
  • Argh!

    The choreographed pace at which they're releasing this 'new' tech is such a stupid joke.

    Read a few headlines down, (or up), where they're talking about successes in neuron/computer engineering techniques.

    Oh, goody.

    You do realize the League of Evil will require people to plug their brains in directly at some point? And the morons who suck up the Cyberpunk daydream where this is actually something desirable, (what? There are idiots like that present on Slashdot? Oh my!), are being used to buffer and in fact sell this horror to the world?

    Yep. Sell it to the tech-heads, and you shape the world. The tech-heads have almost all the social muscle these days and not even they seem to fully realize it.

    Why do you think it's so miserable to be alive if you live life as you have all been told? That is, working 8:00 to 6:00 jobs. Sucking up social programming which serves to render impotent relationships, one of the most powerful forces of stability and good energy; now perverted into over-sexed, short term, disappointing & miserable transactions. Thanks to James Cameron, the perfect boyfriend must now die of hypothermia in the North Atlantic, for crying out loud!

    We've been programmed to eat unhealthy food with too many chemicals. Jeezus! Bread with everything. (There's almost no worse food combo out there!) Leading to poor health and further misery.

    Enter the tech-heads.

    Why do you think there have been so many episodes of Star Trek made with Holodeck fantasies? Do you think the Forces of Evil would allow such a virtuous show as Star Trek to exist if it wasn't the carrier for some toxin?

    Grr.

    Is nobody tuned into the same station as me? Am I the only one who can see this shit? Is nobody else scared out of their freeking minds? (Well, actually I'm not really all that scared; I'd describe my reaction as being something more akin to a fascination on an anthropological level. Watching exactly how the end of the world arrives is possibly the most amazing thing I'll ever see.)

    Still, I can't believe that people are going to actually line up to be the first to plug their brains into the Matrix. Man! Now that is a sell job!

    I mean, isn't face recognition in Borders Books already creepy enough? No! People want Microsoft and Echelon and **AMERICA** in their heads at night when they sleep! Digitize awareness! With everybody plugged into 'Friends' and 'Ally-McBeal,' nobody will even notice, much less rebel when the sky falls.

    Part of me almost hopes that somebody does drop a vial by accident and wipes out 5.9 billion people on this globe. I'd almost rather take my chances at being one of the lucky survivors than continue watching this bullshit parade and the naivete of all the silly viewers.


    -Fantastic Lad. The Craziest Fuck In ANY Room!


    P.S. Most artists and media producers don't even realize where their ideas come from. Population control doesn't happen on a surface level anymore. Hasn't for a long, long time.

  • Not to detract from their accomplishment; I am sure they really did do it if they said they did, and this is really exciting stuff.
    But IBM is not unaccustomed to doing this sort of press release simply for the publicity of it.
    I seem to remember a press release (which they had to buy ad space to get published, I guess) back in the early 90's. 92 or 93 maybe. They claimed to have created the world's first 1024-bit CPU.
    I wouldn't suggest they are building this stuff for PR. I am just saying that is the purpose of the press release (just like most articles of this sort). Oh yeah, someone's cool project at IBM needs to keep getting funding, of course.
    Just don't assume it will be usable for anything practical in OUR lifetime.
    It took 20 years to get from 8-bit to 64-bit. And most of us use 32-bit just like we did 10 years ago. (This refers to commodity hardware, not the big iron.)
    Ah, screw it. Never mind. It's cool stuff no matter what the press geeks do with it.
    • Even better is that while IBM might be the first, it will be companies like Sun that make it work for everyone. Look at 64-bit CPUs. All of Sun's servers are 64-bit, and IBM still pushes out 32-bit machines as "low end" RISC machines.
  • I see new devices with more easily controlled parasitic capacitance and inductance because of the dimensions of carbon nanotubes. This will be good for high frequency, high power applications as well as logic circuitry. Carbon nanotubes "want" to be certain sizes depending on the number of carbon atoms in a ring of the tube and the presence of dopants like boron or potassium. These things might make good diode laser drivers. Focused arrays of laser diodes could be an interesting way to nano-manipulate colloidal materials or proteins. Follow the links from here [sunysb.edu] on Optical Tweezers.
  • So now I need to call a quantum physicist to get tech support... great....

  • "voltage inverter?" (Score:3, Informative)

    by Lally Singh ( 3427 ) on Monday August 27, 2001 @09:29AM (#2220950) Journal
    Call it what it is, a NOT gate.
  • How do "holes" move? When a hole moves, is it not actually an electron moving from one place to another -- leaving a new hole in the place where it left, and filling the hole in the place where it ends up?

    If so, I don't see the difference between electrons carrying current and holes carrying current.
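    At this level the two pictures really are the same bookkeeping, just with opposite sign conventions. A toy sketch (a made-up 1-D row of sites, purely illustrative): each step, the electron just to the right of the vacancy hops left into it, so individual electrons each move left one site while the single vacancy (the "hole") drifts steadily right.

```python
# 1 = electron, 0 = the single vacancy (the "hole").
def step(sites):
    h = sites.index(0)                   # locate the hole
    if h + 1 < len(sites):               # electron to its right hops left
        sites[h], sites[h + 1] = sites[h + 1], sites[h]
    return sites

row = [1, 1, 0, 1, 1, 1]                 # hole starts at index 2
for _ in range(3):
    row = step(row)
print(row)                               # [1, 1, 1, 1, 1, 0]: hole drifted right
```

Tracking the one moving vacancy is simply more convenient than tracking every electron, and because the vacancy behaves like a positive charge, the "hole" picture also gets the sign of the current right.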
