IBM

Billions of Transistors on a Single Chip

cgi-bin writes, "IBM has reportedly developed technology to create 'tens of billions' of transistors on a single chip. Intel's Pentiums only have 27 million or so. The technique uses electron beams instead of traditional optical lithography."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    It seems to be talking about electron beam epitaxy (spelling?), which is really old news, but how do they overcome the speed problem (which was always the real and only killer)?

    There's no such thing as electron beam epitaxy. You might be thinking of molecular beam epitaxy, which is a method of growing single crystal thin films, but has nothing to do with this.

    Electron beam lithography, which is what this article is about, has been under development for some time, and is even used today in some research applications. You are correct that the main problem with it is that you must scan an electron beam over the wafer, a process which is extremely slow, requiring 6-8 hours to pattern a single wafer. Perhaps the IBM people have developed a way to speed up the process. If so, this could be big news, but it's hard to tell from the article.
  • by Anonymous Coward
    What have they done to solve the whole "quantum" issue of electrons jumping from one wire to the next?

    Sure, tiny stuff is good stuff, but so what if it won't work...

    On the up side, now 1.4x10^23 angels will fit on the head of a pin!
  • by Anonymous Coward
    I love it! As a connoisseur of good all-American hokum, I find the press release a work of art.

    For those who haven't read the piece, IBM's technique allows them to put slightly more transistors per square inch, but its main benefit is that it allows them to manufacture much larger chips. So, it's technically true to say that they can get "tens of billions of transistors on a single chip", but most of that benefit just comes from what is effectively joining multiple chips together.

    It's a neat piece of technology, but it hardly justifies the hysterical boosterism of this. Phineas Barnum would have been proud.
  • According to The Register [theregister.co.uk], this is IBM's 0.08 micron process. I just read an article in an IT newspaper saying that all CPUs currently in development are targeting the 0.10 micron process (such as Elbrus 2K).

    sektori.com [sektori.com]

  • "it's the coolness that counts, not the practicality!"
    Congratulations, jd. You've just written a perfect sig line for yourself.
  • E = electromotive force (measured in volts) in Ohm's Law (although I've never figured out why current in amperes is represented by I).
    In E=MC^2 the E stands for energy, measured in somethingortheothers, but not necessarily electrical energy.
  • The smaller the chip, the lower the power, the lower the heat, the lower the resistance, and the faster the chip. Shorter run lengths are a big factor as well. For example, a computer with an end-to-end switching speed of 1 nanosecond would require that the max wire run length from any point to any point not exceed ~1 ft.
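    A quick back-of-envelope check on that ~1 ft figure (using the vacuum speed of light; real signals in interconnect move at roughly 0.5-0.7c, so the practical limit is even shorter):

```python
# How far can a signal possibly travel in a 1 ns switching window?
c = 3e8            # speed of light, m/s
window = 1e-9      # 1 nanosecond, s

max_run_m = c * window            # absolute distance limit
max_run_ft = max_run_m / 0.3048   # meters to feet

print(f"{max_run_m:.2f} m ~= {max_run_ft:.2f} ft")  # about 0.30 m, i.e. ~1 ft
```

    So the ~1 ft number really is just the speed of light talking.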
  • C'mon man, it would be operable as soon as you could boot Linux; all the rest would be background. The disk would move from a critical bottleneck component to a secondary, backup-type device.

    Course, you'd have to be sure you flushed everything before the power goes out.. :/
  • Well, one thing I would like to point out is that human brains were not designed for memory, they were designed for association, pattern matching, and thinking in general. Computers are great at remembering things, however, they are not designed for thinking (currently).

    The two together are a wonderful synergy.

    As for the storage of the mind, it is all the abstract and multilayered relationships stored in the mind, somehow mapped to synapse interactions, that are the truly meaningful information in the brain. And this information we do not understand fully (or even partially, according to some). I can assure you, however, that to store all this information would require many orders of magnitude more storage than a single gigabyte (DVD-R).

    My signature is concerned only with the physical code for the body itself (hardware), which is DNA, roughly 620-630 megabytes of information. Not with consciousness or mental state (software), which is incredibly complex, ever permuting and extending itself while running atop the hardware (brain) built according to this genetic program.
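    Sanity check on that hundreds-of-megabytes figure, assuming the commonly quoted ~3 billion base pairs for the human genome (each base is one of four letters, i.e. 2 bits):

```python
# Information content of the raw genome, before any compression of repeats.
base_pairs = 3.0e9
bits = base_pairs * 2          # A/C/G/T = 2 bits per base
megabytes = bits / 8 / 2**20   # binary megabytes

print(f"~{megabytes:.0f} MB")  # roughly 715 MB, same ballpark as 620-630 MB
```

    Close enough to the sig's number that the "fits on a CD" claim holds up.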
  • Unfortunately genes and base pairs are quite different. Genes occur within the billion-or-so-long base pair chains, but recognizing them takes much effort and ingenuity. It is this process that makes genes sort of patentable (or absolutely patentable, depending on who you talk to).

    So, encoding your DNA is no big deal; there are companies that will sequence your genome for you. Making any sense of the information, however, is many times worse than reading cat /dev/hda.
  • Moore's Law (doubling every 18 months; 10x every five years) is a relentless exercise of technology. You need inventions like these to keep on track. Kind of like Social Security bankruptcy: electron lithography, copper interconnect, etc. keep it going "for the next ten years" while pessimists think it will end at that time.
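    For scale, here's how long plain Moore's Law doubling would take to get from today's ~27 million transistors to the "tens of billions" in the article:

```python
import math

# Doublings needed to grow from 27 million to ~27 billion transistors,
# at one doubling every 18 months.
start = 27e6
target = 2.7e10                       # "tens of billions"
doublings = math.log2(target / start)
years = doublings * 1.5

print(f"{doublings:.1f} doublings ~= {years:.0f} years")  # ~10 doublings, ~15 years
```

    A factor of 1000 is about ten doublings, i.e. 15 years of staying on track.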
  • an apathetic stance toward this. yeah, so you might not see it as all that important - but look at it this way: if they can fit the tens of billions on a single chip, think how small they could make a chip that only needed 27 million?


    We're apathetic because, as someone said earlier, this is ancient news. Just being able to etch a leeeeeetle tiny impression doesn't solve any of the real problems of making usable high-speed computing circuits at the atomic level.

    Yeah, maybe it's cool that they actually made a box and did it, but I don't think anyone was questioning the ability to do so. In fact, I wouldn't be surprised if they did it for reasons other than pure science (big DUH here). I'm sure there's some other company coming out with competing technology, and they just wanted to show em up.

    And things like this are never bad for the friday stock run.

    --
    blue, corporate conspiracy theorist.

  • Anyone who's downed enormous amounts of liquor simply for the obvious, out-cold result knows that your consciousness is highly overrated!
    -Jer
  • I'm not aware of any reasoned estimates of how many bits it would require to represent a static snapshot of a consciousness. Some upper limits: "It couldn't possibly represent more than ... bits, because ...", but you need to fill in the blanks yourself. I've heard estimates based on the number of synapses/cc, etc. What I haven't heard is a good estimate that the maximum is even within an order of magnitude of the actual.

    One person gave an estimate based on the idea that we probably weren't remembering at more than x bits/second (I forget what x was, but it wasn't huge), that there are only so many seconds per year, and that most people don't live more than 75 years. But I think that he was vastly overrating the average number of bits/second that were remembered. I would put it at less than 1 (averaged over several decades). Still, consciousness is a bit more than memory. One also needs stack equivalents and heap equivalents. How many bits do you keep active in short-term memory? Probably less than a megabyte. Considerably less. Most imagery depends on either a fill-in-the-gaps representation or on refresh-from-external-sources. If I close my eyes, the image of the room in my memory loses tremendous amounts of detail.

    My feeling is: We might not fit entirely on a CDR, but we would probably fit on a DVD.
  • Except this is IBM, the company that drops millions on R&D, and they do *tons* of it. There's infinitely more realism in this article than in the "noise in the fabric of space-time" one.
  • is how fast this process is. No one disputes that electron lithography is better than photolithographic techniques (no stepper needed! arbitrarily fine beams!), but it's always been bog slow compared to photolithography.

    Electron lithography is essentially a raster process; a beam sweeps across the wafer "cutting" away silicon. In contrast, photolithography is like taking a picture; the whole wafer gets exposed at once. Until now, at least, using a photoresist has been orders of magnitude faster than beam-etching techniques.

    So I wonder, how have they done this? Multiple beams per wafer? Arrays of emitters? Super-fast HV electron optics? What?

    Kind Regards,
  • Thanks for the explanation. I've found a picture of SCALPEL [bell-labs.com] (the competing Bell Labs effort) here [vacuum-solutions.com]. As an electron spectroscopist in a previous life, I found your reply to be the most informative article in this topic so far.

    That said, a couple of questions:

    • What's their electron source? Surely not a heated filament, as you imply. Thermal noise from even a low-amperage filament overwhelms space-charge effects, e.g., about 1.5 eV for heating vs. about 0.5 eV for space charge in a microamp beam. Are they using field-emission [britannica.com] sources?
    • Secondly, is the limitation on "brightness" a space-charge/energy effect or simply low-flux sources?

    Kind Regards,
  • MC^2 != IR.
    E=MC^2 is the equation stating the total amount of energy "frozen" into a given amount of matter.
    1 gram of matter is "worth" 0.001 kg x (300,000,000 m/s)^2, or 9x10^13 joules of energy - any processor that dissipated that much heat energy would probably resemble a large thermonuclear device more than a computer. The E in E=MC^2 just isn't equivalent to the E in E=IR.
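    Checking the arithmetic on that figure:

```python
# Rest-mass energy of one gram of matter, E = m * c^2.
m = 0.001      # kg
c = 3e8        # m/s
E = m * c**2   # joules

print(f"{E:.1e} J")  # 9.0e+13 J -- roughly 20 kilotons of TNT, not CPU heat
```

    So yes: the E in E=MC^2 is bomb-scale energy, nothing like resistive heating in a chip.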
  • Actually, IBM already uses EBL in the production of some specialized mainframe chips. I would provide a link to the article that goes over this, but it is on IBM's intranet. :(
  • He is referring to the difference between the present optical method (like flashing an entire slide image on a screen) versus scanning the electron beam across the area (like painting a TV picture with a beam, but with many more dots). Flashing an image with all the chips across the entire silicon wafer is much faster than having a delicate electron beam scanning back and forth through each of the millions of chip features... for all the chips on the wafer.
  • The human mind is a quantum computer, just watch Jeopardy, you'll figure it out.

    --
  • I saw an e-beam litho lab being put together at DEC in '84-'85. It was so sensitive that the vibrations of a truck driving by would throw it off. So they sat it on a 20 ton block of concrete which was on rubber bearings, like earth-quake bearings.


    I guess it takes a while to get this stuff near the production phase!

  • >I'd be surprised if the etching gun sold for less than $50 grand.

    I'd be surprised if it didn't 'sell' for an order of magnitude or two above that...
  • >measured in somethingortheothers

    Joules, I believe...

    Sure isn't equivalent to Volts, though 8^)
  • I'm not sure if the interconnect lengths have that large of an effect compared to the gate capacitance on the MOSFETs. By decreasing the area of the gate on the transistor, they can up the clock speed, since the capacitance goes down.


    Also, by lowering Vdd, you actually slow the chip down. While it helps with the power produced in the chip, the gate overdrive shrinks, which decreases the current flow and hence lengthens the charging time on the gates. (correct me if I'm wrong, but I think this is how it works)


    I think the main problem is balancing the transistor sizing and the heat density in the chip. As you said - smaller transistor, higher clock speed. But as the density goes up, so does the heat developed which necessitates lowering Vdd.
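    As a rough sketch of that trade-off, the standard first-order dynamic power formula P ~ a * C * Vdd^2 * f with illustrative numbers (the capacitance, activity factor, voltages, and frequency here are assumptions, not figures from the article):

```python
# Dynamic switching power: activity factor * load capacitance * Vdd^2 * frequency.
def dynamic_power(c_load, vdd, freq, activity=0.1):
    """Watts dissipated switching c_load farads at freq Hz."""
    return activity * c_load * vdd**2 * freq

p_high = dynamic_power(c_load=10e-9, vdd=3.3, freq=500e6)
p_low  = dynamic_power(c_load=10e-9, vdd=1.8, freq=500e6)

print(f"{p_high:.2f} W vs {p_low:.2f} W")  # dropping Vdd 3.3 -> 1.8 V cuts power ~70%
```

    The quadratic Vdd term is why designers keep lowering the supply voltage even though it hurts switching speed.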

  • The G4 is made on a .22 process. The Leff is .15. Motorola used this as a marketing ploy, I think.
    --
  • CuMines with on die cache have over 22 million if memory serves me correctly. Hence the yield problems....
  • (Hope I didn't just post this when I hit the wrong button!)

    EBL has long been thought of as the next step (many are surprised that photo has lasted this long), but there are still many great challenges left if the industry wants to continue using MOSFETs. Chief among these I would say are the gate oxide and leakage currents (both gate and channel leakage). As the lateral dimensions shrink, traditional scaling reduces the oxide thickness as well. Right now the oxide is only about 30 Angstroms thick (6 monolayers in crystalline Si; happily SiO2 is essentially amorphous), so we have to reduce the voltage we apply across this oxide. This leads to the second problem, turning off the transistor. Given that the turn-on voltage is relatively low, there is a reduction in the ratio of on vs. off currents. This is bad for obvious reasons (ever wonder why your Athlon or PIII is so warm?). We need to find alternatives to continued scaling of the oxide (several papers have suggested that 10-15 Angstroms is a hard limit to thinning the oxide), or better yet get a new gate ox material with a higher K (dielectric constant). We also need to move to SOI (thank you IBM for helping push this) in order to try and control the off current.

    It is nice to be able to draw such small features, but to make usable devices requires a lot more than just the lithography.
  • Does anyone out there know anything more concrete? This article is awfully vague, and besides, would you really trust a journalist who doesn't know the difference between an "order of magnitude" and a "quarter of magnitude". It seems to be talking about electron beam epitaxy (spelling?), which is really old news, but how do they overcome the speed problem (which was always the real and only killer)?

    Any info welcome.
  • I remember about two years ago, IBM releasing a press release talking about how they'd been granted a patent on an electron-beam technique that would allow them to create .08 micron features (the same size claimed in this article),
    but that we probably wouldn't need such a beast until 2005. I'm thinking 1) this is the same technique, but with a better prototype, and 2) they miscalculated the timeline.

    -
  • I was thinking of SCALPEL, apparently.

    -
  • I don't really know what I'm talking about here, but a transistor using 6 atoms would certainly have features that are smaller than 6 atoms... unless they can make a transistor out of a random smattering of electrons aimed at roughly 6 atoms.
  • I did make a point to say my PHYSICAL self. I do believe that my physical self is a container for my consciousness, however.
    ---
  • Synapses, the connections between neurons, are at least partially related to your consciousness, if not the entire basis for it. While there is evidence that some of the synaptic connections occur based on genetics alone, a vast quantity can only be formed by experience.

    So in order to store your "self" you'd need not only your DNA matrix, but also a complete map of your brain's neurons and synaptic connections.

    Also, due to the somewhat random formation of organs, any irregularities (one limb longer than the other, different vision in one eye etc) would not be copied by either method and would necessitate a complete map of every cell in your body. While this may not be directly associated with consciousness, if you have any abnormalities of any glands that produce behavior altering proteins then that would not necessarily be carried over.

    The idea that genetics could incorporate the entire contents of our consciousness is implausible for a number of reasons:
    1) Our genetic makeup does not change (for the most part) over time. There are isolated circumstances of change to DNA, but not through all cells and not following a pattern (i.e. mutation due to radiation). A corollary to this is that additional experience would presumably necessitate additional DNA. No DNA is "gained" through experience, which implies that there is no DNA record of consciousness.

    2) Genetic code is too small. The amount of information stored in the average adult brain is vast. The amount of information in our DNA (even if you include mitochondrial DNA) is not nearly enough to account for this information.
  • I'm not an expert on the new tech, but I'm imagining the electron gun they're using is probably prohibitively expensive for use with a TV. I'd be surprised if the etching gun sold for less than $50 grand.

    Something with that amount of control isn't going to sell cheap.
  • I was lowballing it :)

    The idea being that even the most jaded (and rich) of TV viewers probably wouldn't spend more than $50 grand on a TV.

    I too think we're probably talking millions, not thousands. It'd be sorta like using an aircraft carrier to do water skiing too (way overkill).
  • Some quotes:

    "the technology would not only allow the manufacture of much smaller components (potentially down to the atomic level)"

    "the wavelengths of electrons is five quarters of magnitude shorter, so it is basically an open-ended resolution media that for all practical purposes is limitless"

    "The demonstration system was used to create components at 0.08 microns, or 80 nanometers. But Pfeiffer said the system could have been designed to produce even smaller components."

    "We can extend the resolution downward," he said. "We don't really see a limit at 50 or 35 nanometers, which is many years away."

    Okay, I don't know what you were smoking, but try to use less of it before posting next time...

    Chris
  • the comments so far are rather negative - an apathetic stance toward this. yeah, so you might not see it as all that important - but look at it this way: if they can fit the tens of billions on a single chip, think how small they could make a chip that only needed 27 million? Smaller, faster computers mean more powerful laptops, PDAs, etc. Pardon me for seeing this as a *good* thing...
    --
    DeCSS source code! [metastudios.com]
  • Forget the beowulf in the cigar box, this little baby'll fit in a matchbox. Now what do I do for a screen?
  • Maybe I'm reading too much into this statement. But E-Beam is done in a lab 100 feet from my office on a daily basis. It's been done to 20nm at Cornell on standard equipment. The problem with E-beam is that you have to draw your components. Photolith, OTOH, is a parallel process whereby the entire wafer is exposed at once. And that's why it is fast. The E-beam system here would take the greater part of a month to pattern an entire 3 inch wafer.
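    A rough throughput estimate for serial direct-write e-beam backs up that month-scale figure (the spot size and pixel rate here are illustrative assumptions, not the specs of any particular lab tool):

```python
import math

# Time to raster-write a full 3-inch wafer one beam spot at a time.
wafer_d = 0.076   # 3-inch wafer diameter, m
spot = 50e-9      # assumed beam spot size, m
rate = 1e6        # assumed pixels written per second

wafer_area = math.pi * (wafer_d / 2)**2
pixels = wafer_area / spot**2
days = pixels / rate / 86400

print(f"~{days:.0f} days")  # on the order of weeks, consistent with "a month"
```

    That ~10^12-pixel count is why serial writing is hopeless for production, and why a projection (parallel exposure) scheme would be the real breakthrough.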

    My best guess is that IBM has demonstrated 2 things. A reliable collimated electron source (it may already exist) and some sort of electron "optics" (I know that makes no sense): some reliable way of manipulating wide streams of electrons. The trick would be to make a 6 inch wide electron beam that has really good homogeneous characteristics.

  • Research is being done. But silicon has such nice electrical properties and is so easy to fab compared to most other materials that all other contenders end up filling only niche markets. (III-V materials come to mind.)

    Optics are a pain in the neck to fabricate right now in any form. Someday, maybe, but not yet.

  • To simplify my last post. Photolithography is used because it is fast. E-beam exists today, but is very slow. IBM has got a fast E-beam system. This is the breakthrough, NOT smaller transistors, which have been made already, SLOWLY.
  • Actually, we've still got a good margin before electrons start jumping between wires... The smallest lab transistor to date is ~30nm and I don't think the electrons tunnel between wires yet at these distances... I think the bottleneck will be with the CMOS technology, not the electrons switching wires...

    What's next... Quantum Computing???
  • What is consciousness? How do you know our consciousness isn't 'embedded' in our genetic make-up? The human mind is so complex that science still can't understand it. Maybe consciousness is just a combination of nerves and synapses interwoven into our memories. Trippy...
  • now they've just gotta use this technology in the production of the Crusoe processors.. I wonder just how small they'd be able to make them? hmm.. maybe my next watch will be powered by a mini-Crusoe.. :)

    ...or powered by an overclocked mini-celery.. but then I'd end up with a nice burn on my wrist..
  • Smaller devices means smaller device resistance, which means less energy dissipated across the device, which means less heat given off. Basically, if you have the same power supplied to the chip, and it has the same surface area as another chip with half as many larger transistors, the heat given off shouldn't be significantly different. I'm about 90% sure of this anyway. :)
  • I think I read somewhere that identical twins do NOT have identical DNA. They start out the same, because the zygote splits or something, but there are many minor mutations that occur after it splits, so that their DNA is not the same (even if crime labs can't positively tell them apart yet.)

    Identical twins don't have identical fingerprints, which to my knowledge they would if fingerprints were purely a function of DNA.

    Correct me if I'm wrong, folks.
  • must be programmed into every nerd's head. that was the first thing I thought of when I saw the item...
  • At bootup your system would load your entire drive image into L1 memory,

    And I thought NT was a slow boot process...
  • Interconnect capacitance, especially in deep sub-micron processes, has a significant effect on the performance and power. At 1um, you could ignore the interconnect delays. At 0.25um and below, the transistor delay is insignificant compared to interconnect delay.
  • It's not the size of your transistor that counts... it's how you use it...
  • IBM may well be making leaps and bounds with regard to advancing microprocessor technology, but what they need to be doing is developing the technology necessary to produce these chips for market.

    If just one manufacturer would put chips of this nature into production within 12 months, computer technology would gain another five years' worth of development in as little as 6 months. Intel would instantly go bust, IBM would make a fortune, and AMD would make a lesser-known, probably superior but ultimately cheaper clone :)

    Roll on the 21st century!
  • I did read the whole article and it states that tooling won't be available until 2002:

    The alpha tool probably won't be completed until the end of 2002, Pfeiffer said.

    What I am basically saying is that IBM will develop this technology, fail to realise its potential, wait for ages and then cry about it when someone else does it instead.
  • I suppose it's all dependent on one's concept of a short period of time: different things to different people......
  • but isn't the limit of transistors far above what IBM will be capable of doing? i.e. the transistors don't work once they are made up of less than 6 atoms.


    You should never, never doubt what nobody is sure about.
  • I'm no chip designer, but why are we still using transistors?

    Any research being done on the possible use of lasers in chips?
  • by Anonymous Coward
    What's interesting is how some of IBM's current processors (such as the PowerPC 604e, 750 "G3", and once Motorola lets em, the 7400 "G4") really don't have that big of a transistor count compared to what Intel and AMD have. Especially when you compare performance. Imagine what they could do...
  • First off, this could genuinely allow people to produce high-end computers on a chip, which would be no bad news for the wearables and palm-top markets.

    Second, and perhaps more importantly, the same techniques used for the etching -could- be used to produce ultra-high definition TV. (After all, all you're using is a somewhat larger electron gun.)

    Now, I don't know about you, but I like the sound of computer monitors (or domestic TVs) capable of definitions of up to 500 billion lines. So what if nobody would be capable of telling the difference - it's the coolness that counts, not the practicality! :)

  • Yeah, but you could do some incredible jumps!
  • The advantage here is that realistic amounts of memory--128MB or more--can be put on chip with the processor. In effect, all memory is cache. This would be fantastic both in terms of speed and low cost.

    If you can make processors with this technique, I'm sure you can also make memory with it. So you would still have off chip memory that is far bigger than cache. So instead of 512k cache and 128M main memory, wouldn't you have 128M cache and a few gigs main memory...
    --

  • I bet what the researchers probably said was "significantly improve processing speed." This is an important point to make, because anyone in the semiconductor industry knows that electron lithography is SLOW. Well, the P in PREVAIL stands for Projection, and this comment [slashdot.org] describes how the e-beam isn't scanned, but rather an electron wavefront is projected, analogous to an optical projection system. So the speed of wafer processing would indeed be increased over that of a focused electron beam. And this would be practically mandatory if you wanted to place 10^10 transistors per die.

    Of course, with feature sizes this small, processor speeds would be improved, too. So both interpretations are valid. But I believe they indeed said what they meant.

  • When used in Ohm's Law, as in E=IR, then E represents Electrical Potential, measured in Volts.

    When used in Fields and Waves (EE313 iirc) then E, as in "the loop integral of E dl equals 0," is often written in script or boldface, and represents Electric Field strength, in Volts/meter.

    And E as in E=MC^2 has already been covered in a prior reply. I think most applications are in Megajoules, though.

    Anyway, none of those really apply to the original comment, which was concerned about heat. The applicable law here is named after, erm... Watt, I believe: P=IE. When it comes to resistive heating, applying Ohm's law lets us express that as P = I^2 * R. Deriving the units for P is left as an exercise for the student.
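    For anyone who wants the exercise worked, a quick sketch (the 20 A / 1.8 V figures are made up for illustration, not any real chip's ratings):

```python
# Units check for P = I*E and P = I^2 * R on a toy chip.
I = 20.0     # amperes (coulombs/second)
E = 1.8      # volts (joules/coulomb)
P = I * E    # (C/s) * (J/C) = J/s = watts

R = E / I                          # Ohm's law gives the effective resistance
assert abs(I**2 * R - P) < 1e-9    # P = I^2 * R agrees, as it must

print(f"{P:.0f} W")  # 36 W of resistive heating
```

    Watts fall straight out as joules per second, which is exactly the heat the original comment worried about.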

  • Given the time it takes to build, configure, and initialize a fab plant, this is actually a rather short period of time. It takes anywhere from 4-7 years to get a new fab plant fully operational, with 5 years the norm. Even just replacing the fabrication hardware takes many months to years to get installed and set up.
  • How do you know our consciousness isn't 'embedded' in our genetic make-up

    Well, it is certainly influenced by it, but if that were the case, identical twins would have an identical consciousness, which we know not to be true.

    However, identical twins are often very similar in behavior, even when separated at birth and raised in different environments. So genetic predisposition definitely has a part in the development of your consciousness, personality, and intelligence.
  • Don't forget that a refined implementation of this technique would yield lithographic etching at the near-atomic level.

    MMmmmmm nano nano
  • You're thinking too small!

    With this type of size reduction we could have chips at 3 gigahertz with 50 GIGABYTES of L1 cache. At bootup your system would load your entire drive image into L1 memory, and there would never be page faults or disk hits. Except to save, but that would be a background process and wouldn't affect computing.

    Now THAT would be a fast system.
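    For fun, here's what that bootup load would cost, assuming a disk that can stream ~20 MB/s (an optimistic assumption for today's drives):

```python
# Time to stream a 50 GB drive image into that hypothetical L1 cache.
cache_bytes = 50 * 2**30      # 50 GiB
disk_rate = 20 * 2**20        # assumed sustained disk throughput, bytes/s
minutes = cache_bytes / disk_rate / 60

print(f"~{minutes:.0f} min")  # ~43 minutes of boot-time disk streaming
```

    So the disk would still be the bottleneck once, at boot, before everything went memory-speed.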
  • Sir, what you need is a cubic meter crystal of computational ecstasy.

    "All tasks can be designed / computed / evaluated / indexed / summarized / optimized / etc by a solid cubic meter crystal super conducting FPGA like nano scale self assembled ceramic block running at 90 gigahertz and chilled to 20 degrees kelvin running evolutionary systems of simulations / knowledge agents / neural networks / genetic algorithms / bayesian probability maps / self organizing pattern matching systems. All types of evolutionary systems layered and nested to unlimited levels of abstraction and complexity. Hardware capable of performing a centillion teraflops and holding a centillion terabytes of memory in core, operating at crystal speed. "

    yes, that is what im waiting for...
  • Well, not really.

    You could squeeze the instructions for building a physical body like yours onto a CDR, but your body is uniquely yours, and to save its configuration (meaning the output from the genetic program at this point) would still require orders of magnitude more information. Sorry, I misread your comment; I was under the impression you were saying you and your mind..
  • Even better, just sell giant wafers of programmable hardware so you can organize your wafer any way you want. Use it as a giant floating-point number cruncher for a while, then when you're ready to use it for some hardcore gaming, reburn the image.
  • True. I wish I could find out exactly how fast this process is going to be. If it is going to take three days to etch my billion-transistor wafer, then I'd rather suck a donkey.
  • Well, you could squeeze your entire physical blueprint onto a CDR. However, the thing that most of us value most, namely our consciousness, would require many orders of magnitude more space.
  • If you had read the article you would know that this technology would be quite inexpensive (for a fab budget) to implement with practical applications immediately.

    This isn't far-out theoretical research; they have built the system, and scaling it to fab production sounds quite straightforward and practical.
  • That was UNTIL they figured out a different way to do it.
  • You forget the fact that this type of production is to be much faster than current optical methods. That would be where the real big payoff would be. Higher chip density is a plus, but coupled with fast fab would be great news for the industry.
  • Funny how Carl Sagan is remembered for that phrase -- when it was actually part of an act that Johnny Carson (?) came up with to make fun of Sagan. I gather hearing the phrase -- especially hearing it falsely attributed to him -- got really tedious to Sagan, though he eventually learned to laugh it off, and even used it to title his last(?) book.
  • Electron beam lithography is nothing new, nor is IBM the only one developing it. In fact, there have been much smaller transistors made (such as the 18 nanometer transistor [berkeley.edu] made here at UC-Berkeley using e-beam lithography).

    The drawback to direct-write electron beam lithography is that you have to directly trace the circuit you are trying to print in most cases, while in optical lithography you can expose an entire die (or multiple die) at once. There have been improvements made over the years, using techniques such as parallel writing, but it's still slow. Even using a more conventional masked resist and scanning the beam across the wafer using vector or raster methods, there are problems with electron scattering and such.

    This article is pretty short on technical content, so it may be that IBM has developed a way to make e-beam lithography fast enough to be used in a production environment for chips (it is already used for making photomasks). That would definitely be a significant development. We'll have to wait and see, I guess.

    Also, keep in mind that just because they have a lithography tool that can write 80 nanometer lines does not mean that the rest of the processing equipment (etching, planarization, etc.) could support it. There would need to be advances in those tools as well.

    My other question is, what do we do with tens of billions of transistors? If we jump three orders of magnitude in the number of transistors on a chip, is it really going to do us any good, at least with current circuit design techniques? I think testing a circuit like that would be a nightmare.
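    To put the throughput worry in numbers, here is a back-of-envelope estimate of serial direct-write time. Only the 80 nm feature size comes from the article; the wafer size and beam pixel rate are illustrative assumptions:

```python
import math

# Rough serial direct-write time for one wafer. All inputs except the
# 80 nm feature size (from the article) are illustrative assumptions.
wafer_diameter_mm = 200      # assumed 200 mm wafer
pixel_nm = 80                # feature size quoted in the article
pixel_rate_hz = 100e6        # assumed 100 MHz beam blanking/deflection rate

wafer_area_nm2 = math.pi * (wafer_diameter_mm * 1e6 / 2) ** 2
pixels = wafer_area_nm2 / pixel_nm ** 2          # spots to expose serially
write_time_hours = pixels / pixel_rate_hz / 3600

print(f"{pixels:.2e} pixels -> roughly {write_time_hours:.0f} hours per wafer")
```

    Even with a generous beam rate the serial write time lands in the hours-per-wafer range, which is why a projection (expose-at-once) scheme, rather than merely faster scanning, would be the real breakthrough.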

    -Jason
  • Quoting the article: "The demonstration system was used to create components at 0.08 microns, or 80 nanometers."

    Please alert me if I am wrong, but IIRC the smaller the transistor, the lower the power requirement (less heat), and the faster the chip (less distance from junction to junction). So if all they did was to make the same chips we have now on the smaller die size, there would be a reduction in power requirements and a speed increase, right?

    Not that I'm much of an expert in these things, but when IBM says they've got the tech and Nikon has built and demonstrated a proof-of-concept machine, this sounds like tech that's less futuristic than, say, quantum gates, etc. I mean, Nikon isn't in this to produce a one-off demo machine -- what they're really after is the ability to put their machines into the fab plants. So the actual production of chips is probably still a couple of years off, but the technology would vault IBM ahead of just about every other chip maker on the planet -- ahead of Intel, Motorola, AMD, TI, and anybody else I may have forgotten.

    My biggest remaining questions not answered by the article are:

    1. With the smaller wires, is there a higher crosstalk problem at higher switching speeds?
    2. Once they've got the tech completed, will it be an IBM-only tech, or will it be something they license so that the rest of the world can benefit? (Personally, I'd like a faster Transmeta chip, a faster StrongARM, etc.)
    Let's hope this tech pays off and we all see the benefit soon.
  • The father of nanotechnology spoke in 1959 of etching with reversed electron microscope [zyvex.com]. He also refers to us in the year 2000 looking back...
  • And since the beam is not restricted by having to create a photographic mask, each chip on a wafer can be different. Memory areas could be customized ROMs with the latest OS and serial numbers burned in the hardware.

    And better yet, the entire wafer becomes a customized circuitry area. Design processor and memory modules and have them splattered across the entire wafer with assorted bus circuits, with each wafer different based upon customer demands. (Yup, you'd better design those modules to work despite some of them not working due to production failures...)

  • Just a nit-pick.

    The increased speed doesn't really come from shorter junctions. The difference in the amount of time an electron takes to cross 0.25 microns as opposed to 0.08 microns is close to negligible.

    The smaller components enable faster speed by decreasing the RC constant. RC stands for resistance times capacitance. Think of filling a bucket with a hose. A bigger hose lets you fill the bucket faster. It's also faster to fill a glass than a tanker truck with the same hose. You can think of circuits in the same way. It takes a certain amount of time to charge and discharge a circuit. Decrease the amount of resistance or increase the voltage so that more current flows and the circuit charges faster. Reduce the size of the circuit and you don't have as much to fill.

    Overclockers often need to increase the voltage when trying to increase speed. What's happening is that above a certain clock rate, capacitors that control gates don't have a chance to completely charge/discharge before the clock switches. If you force more electrons to flow down the same pipe by 'increasing the pressure', the caps will be able to completely charge/discharge and the circuit will work. Of course, if you try to push the force of a fire hydrant through a drinking straw...
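    As a rough sketch of the RC argument above (all component values are made-up illustrative numbers, not real process parameters):

```python
import math

def rc_delay(r_ohms, c_farads, v_dd, v_threshold):
    """Time for an RC node charging toward v_dd to reach v_threshold."""
    return -r_ohms * c_farads * math.log(1 - v_threshold / v_dd)

R = 1e3            # assumed driver resistance, 1 kOhm
C_old = 10e-15     # assumed gate capacitance at the old feature size, 10 fF
C_new = C_old / 3  # gate capacitance shrinks roughly linearly under full scaling

t_old = rc_delay(R, C_old, v_dd=2.5, v_threshold=1.25)
t_new = rc_delay(R, C_new, v_dd=2.5, v_threshold=1.25)
print(f"old: {t_old*1e12:.2f} ps, new: {t_new*1e12:.2f} ps")
```

    Cutting C by 3x cuts the charge time by the same factor -- the "smaller glass, same hose" effect.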
  • Funny how I remember Carl using this phrase repeatedly in the PBS series "Cosmos", but I never remember hearing it from Johnny Carson (of course, I never really watched Johnny Carson).
  • I came to this point one night. Why aren't we using light? Light is faster! Eventually we will be limited to the speed of electricity.

    Then I realized that you can't escape electricity. You would have to convert the light beams with optical switching sooner or later in the computers, and that would be an insane performance decrease.

    You cannot escape the fact that memory will always run on electricity. It just isn't practical to come up with a type of memory that is based on storing light.

    Also you run into problems with refraction, and the fact that if you want to make sure that each light beam doesn't interfere with the other beams during a processor cycle, you would have to make the processor very large so you could shield the individual light beams with an opaque material. Also, lasers get very hot. It's just not practical.
  • The advantage here is that realistic amounts of memory--128MB or more--can be put on chip with the processor. In effect, all memory is cache. This would be fantastic both in terms of speed and low cost.
  • The wavelength of visible light is already larger than the feature size of current generation electronics, so if we were to move over to opto-electrical computers they would actually be a lot larger (and thus slower because of the time taken for signals to travel anywhere). However using small lasers to communicate between chips could have some potential.
    Also, as the other reply states, any useful optical device actually involves electron transitions anyway to provide some kind of nonlinearity (you can't build anything useful such as a NAND gate out of linear components).

    So has anybody found a URL which explains what advances IBM have actually made which makes this better than the E-Beam lithography that's been around for ages?

    Edmund Green.
    Nanoscale Physics Research Laboratory, The University of Birmingham, U.K.
  • > It just isn't practical to come up with a type of memory that is based on storing light.

    No, it is practical to come up with the idea. It's just not practical building and using it effectively.

    Researchers at the University of Colorado, Boulder designed an optical computer several years ago. There are several systems in existence where computation is partially or fully done optically, but this was the first (and the only, if memory serves me right) system to do everything optically -- i.e. it had a memory system based on storing values optically.

    The system essentially stored pulses in a loop of fiber optic cable several miles long. I think the principle is analogous to very early electronic memory systems where bits were stored in the form of waves in tanks of mercury.

    Extensive info on this was published in some IEEE publications back then. I don't have the time to look for the URL now, but it will be helpful if someone can find the reference to it.
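    The capacity of such a fiber delay-line memory follows directly from loop transit time times bit rate. The loop length and pulse rate below are assumptions for illustration, not figures from the paper:

```python
# Bits "in flight" in a fiber delay-line memory = transit time * bit rate.
c_vacuum = 3.0e8      # speed of light in vacuum, m/s
n_fiber = 1.5         # approximate refractive index of silica fiber
loop_m = 5000         # assumed "several miles" loop, ~5 km
bit_rate_bps = 1e9    # assumed 1 Gb/s pulse train

transit_s = loop_m / (c_vacuum / n_fiber)   # one trip around the loop
bits_stored = transit_s * bit_rate_bps      # pulses circulating at once

print(f"transit {transit_s*1e6:.0f} us, about {bits_stored:.0f} bits in flight")
```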

    --

    BluetoothCentral.com [bluetoothcentral.com]
    A site for everything Bluetooth. Coming soon.
  • Here is the link to the optical computer that I mentioned. The paper has a description of the optical memory system and logic structures.

    Stored Program Optical Computer(SPOC) [colorado.edu]
    --

    BluetoothCentral.com [bluetoothcentral.com]
    A site for everything Bluetooth. Coming soon.
  • First, I wonder if this paragraph is a misquote:

    Called PREVAIL, ... the technology would ... significantly improve the speed at which silicon chips can be processed, researchers said.

    I bet what the researchers actually said was "significantly improve processor speed." This is an important point to make because anyone in the semiconductor industry knows that electron lithography is SLOW, like orders of magnitude slower than optical lithography. That is why nobody has ever used it to make commercial chips even though the technology has been around for more than a decade. I would be interested to see some more technical back-up articles that talk about masking and throughput.

  • "As another side note to this, Lucent Tech. has an EBL system just about at proof of concept called SCALPEL. Hope this clears up a few of the wrong ideas and helps people understand what this is all about."

    Might I point out that here on slashdot back in October or November was an announcement that Lucent had reached a resolution of .05 microns using SCALPEL. Then let's not forget the UC Berkeley student who made an even smaller transistor two weeks later. Something like 0.018 microns or so. This IBM announcement is not really anything new. Here is the press release. [berkeley.edu]

  • With this technique you could build a do-anything chip based on Transmeta's technology which has dozens of cpu cores and an extremely powerful graphics processor. And the chip would still be tiny, ridiculously fast (many gigahertz), cheap, and still under 1 Watt. I need one!
  • Cool! I could just squeeze my entire (physical) self onto a CDR!
    ---
  • I would also like to see any references to descriptions of what kinds of optical elements you need to use in what setups to create real optical logic circuits (not convert-to-electrical-and-back type of logic).
  • Actually, this is not the only technology that promises to bring lithography to even lower scales... There's also Lucent's Scalpel [asml.com], Intel's Extreme Ultraviolet [intel.com] (EUV) and an X-ray technology from IBM. They're all saying their stuff is better than all the other technologies... let's see which one comes first.
  • It's all great and wonderful if they can construct devices that small, but the question is whether they will even work or not. Granted, the theoretical minimum device channel size is far below what is currently being produced (I think it's somewhere in the range of 0.02-0.05 microns, but don't quote me on that).

    Another problem to look at is the degradation of the device that can take place when things get that small. I'm sure people wouldn't be so prone to overclocking their processors if there was a chance they might completely destroy the processor in doing so.

    Another key, as was already stated, is that they need to bring the cost of the process down before it will ever see a production line. If the process requires a long time then it's likely to either create a bottleneck in the production line, reducing the overall output, or simply drive the price of the final product up a bit by forcing the company to purchase large numbers of the tool that performs the process.

    Granted, the savings that would result from a smaller die size and potentially a correspondingly small package size could make up for the price difference due to the new tools. I'm not sure of the exact number, but a large part (>50% I believe) of the cost of the chip is in the packaging (Which is why you'll find bins of scrapped wafers at any production plant.. why package something that isn't going to work)

    Another problem I could see in bringing the process to market is in contamination of the chips during production. As it stands now, lots of chips are scrapped because of skin cells, dust, etc landing on them during their trip down the line. With the smaller device size the smallest foreign particle size that could be tolerated would have to be smaller... so either clean rooms would have to get cleaner and their employees more religious in following the rules, or they would have to find some way to isolate the wafers from the technicians.

  • Well, as you stated yourself, by reducing the gate area you reduce the gate capacitance. Thus you can still achieve the same charging time, albeit with a smaller voltage.

    And as for power consumption, yes, if you do full scaling where every part of the device is scaled down by some factor X then you get a reduction in power consumption. However, with the wonders of backwards compatibility and meeting external specs and such, oftentimes the devices are not scaled down using full scaling. In this case the voltage is kept the same and the device size reduced, which actually leads to higher power consumption.

    Of course it reaches a point where the power consumption is just obscene, at which point they reduce the operating voltage. And this really isn't a problem if you're going to put out a new chipset for the processor, just dictate what the voltages have to be. However, if you're trying to build in compatibility for an older chipset that doesn't support the lower voltages your chip requires, you're SOL.
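    The constant-voltage vs. full-scaling trade-off sketched above can be seen in the dynamic-power relation P ~ activity * N * C * V^2 * f. Every number below is an illustrative assumption:

```python
def dynamic_power(c, v, f, n, activity=0.1):
    """Switching power: activity factor * device count * C * V^2 * f."""
    return activity * n * c * v**2 * f

S = 3  # assumed linear shrink factor (roughly 0.25 um -> 0.08 um)

base = dynamic_power(c=1e-15, v=3.3, f=200e6, n=10e6)
# Constant-voltage scaling: C/S per device, f*S, S^2 more devices, same V.
const_v = dynamic_power(c=1e-15/S, v=3.3, f=200e6*S, n=10e6*S**2)
# Full scaling: also divide the supply voltage by S.
full = dynamic_power(c=1e-15/S, v=3.3/S, f=200e6*S, n=10e6*S**2)

print(f"base {base:.1f} W, constant-V {const_v:.1f} W, full {full:.1f} W")
```

    Constant-voltage scaling multiplies chip power by S^2 while full scaling keeps it flat, which matches the point above about why the operating voltage eventually has to come down.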

  • by tak amalak ( 55584 ) on Friday March 03, 2000 @07:36AM (#1228206)
    Here's an advantage: system on a chip. How does a chip with a processor (maybe 2 or 4), large full-speed cache, memory controller, PCI bridge, maybe 256MB (1GB? who knows?) of RAM, ethernet controller, etc., all on one chip sound? Sounds pretty good to me. Get memory latency down to like 1-1-1 and blazing I/O performance. You would make the motherboard extremely small and save a massive amount of space. Since the memory controller and PCI bridge are on the processor itself, the pin-out can be kept under 400 pins. With enough memory on chip, who would need memory slots? This is the future IMHO.
    --
  • by Ungrounded Lightning ( 62228 ) on Friday March 03, 2000 @10:11AM (#1228207) Journal
    This sounds to me like it's electron beam lithography, but not SCANNING electron beam lithography.

    Electric fields can be used as lenses to focus electron beams, forming images of a stencil, just as physical lenses can be used to focus photon beams.

    On one hand there's a complication because electrons mutually repel and also affect the field that forms the lens, so higher beam currents tend to distort things somewhat.

    On the other hand, the lenses are formed by an electric field's natural curvature. So small-scale optical imperfections just don't occur in a good vacuum, while gross imperfections are easily dealt with by maintaining decent tolerances in the construction and excitation of the electrodes.

    Of course they COULD have made a breakthrough in scanning electron beam technology, and be talking about writing every chip one at a time. But that doesn't square with either the claim of "billions of transistors" or that of "speeding up the processing".

    Yes, they could get DENSITIES of billions of transistors. But writing them one at a time takes a while. And keeping the beam aligned across a large chip is a problem. (Though the latter can be solved to some extent by first laying out a set of location markers and using them in later steps to figure out where the beam actually is.)
  • by Pfhreakaz0id ( 82141 ) on Friday March 03, 2000 @05:37AM (#1228208)
    I can hear Carl now from his grave.... "And this chip is populated with billions and billions of transistors on a single chip, all interacting and dancing to a tune called ... COMPUTING!"
    ---
  • by Xenu ( 21845 ) on Friday March 03, 2000 @06:07AM (#1228209)
    John Markoff has a somewhat more detailed article on PREVAIL here [nytimes.com].
  • by ucblockhead ( 63650 ) on Friday March 03, 2000 @06:22AM (#1228210) Homepage Journal
    The good news:

    The new chip has over ten billion transistors.

    The bad news:

    The new chip is over 1700 square feet.

    Plans for a portable based on the new chip are being put on hold...
  • by Blenderbrain ( 148460 ) on Friday March 03, 2000 @06:37AM (#1228211)
    Okay, for all of you out there that would like to know a little more about this "new" technique, I will fill you in on some of the details. First of all Electron Beam Lithography (EBL) has many advantages over conventional photon lithography.

    1. In order to get the resolutions required in the future, photon lithography would have to go to X-rays with a high enough brightness (i.e. you need a synchrotron X-ray source on site). For EBL, you just need a large source filament.

    2. Masks for X-rays would need to be the same size as the actual features because there is still no good method for forming images with X-rays, so a shadowing technique is used. Not the case for EBL. Electrons are focused with magnetic lenses, which have been used and refined for years. This actually allows you to build the mask in separate parts and have the electron beam deflection put it all together for you as if it were one piece.

    3. Stepper motors don't need to be quite as accurate on positioning. This is because you can put in a simple feedback unit that examines where you are projecting on the surface of the wafer, and deflection coils can position the beam exactly. This means that you can do lithography while the wafer is still moving! You couldn't do this in your wildest dreams with X-rays.

    4. Electrons have a very small wavelength at the acceleration voltages used (on the order of picometers). However, the real limitation for EBL is not the wavelength but lens aberrations (pick up a good optics book) as well as space-charge effects (these happen because you are using charged particles, which repel each other, giving a blurring effect). Even with all of this, some predict that you could "easily" get lithography resolutions down to the 10 nm scale. No, we can't do atom manipulation with this technique.

    5. No, this technique does not use a focused-beam technique (similar to scanning transmission electron microscopy); it uses a plane-wave electron beam so that you can expose large areas at once (similar to standard transmission electron microscopy), allowing for higher throughput.

    Probably the major disadvantage for EBL right now is that we need more sensitive resists. The brightness of the EBL is still low compared to UV photon lithography, but I know of several groups that have come a long way with this one.

    As another side note to this, Lucent Tech. has an EBL system just about at proof of concept called SCALPEL. Hope this clears up a few of the wrong ideas and helps people understand what this is all about.
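    Point 4's picometer wavelength claim can be checked with the non-relativistic de Broglie relation lambda = h / sqrt(2*m*e*V). The 50 kV beam voltage below is an assumed typical figure, not one from the article:

```python
import math

H_PLANCK = 6.626e-34    # Planck constant, J*s
M_ELECTRON = 9.109e-31  # electron rest mass, kg
Q_ELECTRON = 1.602e-19  # elementary charge, C

def de_broglie_pm(volts):
    """Non-relativistic electron wavelength in picometers at the given
    accelerating voltage (relativistic correction ignored for simplicity)."""
    wavelength_m = H_PLANCK / math.sqrt(2 * M_ELECTRON * Q_ELECTRON * volts)
    return wavelength_m * 1e12

print(f"50 kV beam: {de_broglie_pm(50e3):.2f} pm")
```

    At tens of kilovolts the wavelength is indeed a few picometers, so as the comment says, aberrations and space charge, not diffraction, set the practical resolution limit.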
