Smallest Transistor in the World

Ant wrote to tell us of a story on BBC's Web site about the world's smallest transistor. The Vertical Transistor uses the thickness of a precisely-controlled layer of material, rather than light, to set the gate size, which makes for smaller circuits. With many scientists of the opinion that current transistor technology will hit a brick wall of physics soon, the vertical transistor offers a new way to get greater processing power.
  • Isn't the real problem quantum tunneling, though?

    Yes it is. As others have mentioned, scaling the transistor to smaller dimensions involves decreasing the thickness of the insulating layer (otherwise capacitance, and therefore channel control, would go down). At some point, the leakage current due to quantum-mechanical tunneling becomes intolerable. Exactly what thickness that happens at is subject to debate, but a typical suggested figure is slightly below 2 nm.

    What one can do to remedy the situation is to switch the insulating layer material to something with a higher dielectric constant. Then one doesn't need to decrease the layer thickness. Today silicon dioxide is dominant partly due to its ability to withstand the high processing temperatures in subsequent steps after applying the insulating material. The article mentions this as one of the main advantages of this method: being able to apply the gate insulator and the gate material last.

    The article says that this is after the high-temperature steps have been completed, but that is not entirely true. Applying the gate material (today highly doped polycrystalline silicon) is itself a high-temperature step, and it obviously has to come after the gate insulator.

    A solution would be to switch the gate insulator and the gate material at the same time. There is a lot of research on this (for example, aluminium oxide and aluminium, tantalum oxide + metal, titanium oxide + metal) and hopefully a solution will be available when the SIA roadmap says it is needed for production.
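    To put a rough number on how sharply tunneling leakage grows as the oxide thins, here is a toy exponential model in Python. The decay length is purely illustrative, not a measured constant for SiO2; only the qualitative behaviour matters.

```python
import math

def relative_tunneling_leakage(thickness_nm, decay_nm=0.12):
    """Toy model: direct-tunneling leakage falls off exponentially with
    oxide thickness. decay_nm is an illustrative decay length, not a
    measured material constant."""
    return math.exp(-thickness_nm / decay_nm)

# Thinning the oxide from 3 nm toward 1.5 nm blows up the leakage
# by many orders of magnitude under this toy model:
baseline = relative_tunneling_leakage(3.0)
for t_nm in (2.5, 2.0, 1.5):
    ratio = relative_tunneling_leakage(t_nm) / baseline
    print(f"{t_nm:.1f} nm oxide -> leakage x{ratio:.3g} vs 3.0 nm")
```

    Under any decay length of this order, each fraction of a nanometre shaved off the oxide multiplies the leakage enormously, which is why a figure around 2 nm acts as a practical floor.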

  • Take a look at the National Technology Roadmap for semiconductors:
    http://www.itrs.net/NTRS/RdmpMem.nsf/Lookup/RdmpPDF/$file/grdchal4.pdf [itrs.net]

    The diagram on page 11.

    The total delay *increases* after a certain point because the delays in the wires dominate.
  • by helge ( 67258 ) on Saturday November 20, 1999 @08:18AM (#1516753) Journal
    There are different kinds of smallness for transistors. One is the size of the active part; for MOS transistors it's the channel under the gate, which is lateral (horizontal). For bipolar transistors, it's the base region between collector and emitter, where the current flows vertically. Thus bipolar transistors are (usually) called vertical.

    Smallness in the active part allows higher speed (usually referred to with the parameters ft or fmax) but it also means that the transistor breaks down at a smaller voltage. That's the reason why modern digital chips are powered by ever decreasing voltage.

    But the active part of the transistor is small compared to the rest of it. It must have contacts to lead current to and from the transistor. And even then, the complete transistors occupy only a small area of the chip; most of it is used for wiring.

    So just because some vertical dimension of a transistor has shrunk to 50 nm doesn't mean that you can fit very many of them on a chip. That depends more on how thin you can make the wires, and how many layers you can stack. It is no accident that modern chips can have as many as five layers of wiring, something that was very difficult to do ten years ago.

    The scientists at Bell Labs have shrunk the active region of what appears to be a MOS transistor. It will be fast, but the number of transistors on a chip will not increase as a result.

    I found an article about the transistor at
    http://www.bell-labs.com/news/1999/november/15/1.html
  • by Anonymous Coward
    Transistors, Unix, and now smaller Transistors!

    I Kiss You!!
  • helge:
    >>That's the reason why modern digital chips are
    >>powered by ever decreasing voltage.

    bperkins:
    >I doubt many CMOS transistors are anywhere near their breakdown regime. Reduced power
    >consumption and heat dissipation is the primary reason for reducing voltage.

    MOS transistors are often operated near their breakdown regime, because doing otherwise would be a waste of speed and power. The reason can be found in the formula for the drain current of the transistor in the saturation region:

    The drain current is proportional to kp*(W/L)*(Vgs-Vt)^2
    where L is the length of the channel (in the direction of the current), W is the width of the channel, Vgs is the gate-to-source voltage and Vt is the threshold voltage of the device.

    The drain current is used to charge capacitors: the capacitive load of both wires and the gates of other transistors. Therefore, speed is proportional to this current. The largest Vgs that can be obtained is the supply voltage, so a larger supply voltage means higher speed. It also means higher power dissipation, because those capacitors are charged to a higher voltage. But the speed increases with the square of the supply voltage while the power only increases linearly, so you still gain by using a higher supply voltage.

    The drain current can also be increased by lowering L. Speed will increase in inverse proportion. But at the same time, the breakdown voltage between drain and source will be lower. That doesn't mean the device is damaged, but it's still not something you want. This breakdown voltage is usually rated a bit above the supply voltage, say 4.5 volts for a 3.3 volt supply.

    The drain current, and thus speed, can also be increased by making the gate oxide thinner. The parameter kp in the above formula is inversely proportional to the thickness of the gate oxide. The circuit designer cannot do anything about this, but the process designer can. Again, the breakdown voltage will be lower, and the acceptable voltage over the gate oxide is not much larger than the supply voltage. A high-quality gate oxide would help, because it would tolerate a higher electric field. Current through the gate will usually damage the device, so it's important to protect gates from, e.g., ESD pulses.

    As for chip density: the gates of ordinary MOS transistors are made with the same photolithographic process as the metal wires, so the evolution in feature size applies to both. For memory chips, you are probably right that most of the area is populated by transistors. But for analog chips, wiring dominates. Also, a transistor has three terminals, all of which must be connected. With a feature size of 0.5 um, the contacts will in reality be more than 1 um wide, even though the gate is only 0.5 um long. The method used by Bell Labs in their report does not apply to the contacts, only to the gate.
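    The square-law formula above is easy to play with numerically. A minimal sketch in Python; all device numbers are invented for illustration and come from no real process:

```python
def drain_current(kp, W, L, Vgs, Vt):
    """Square-law MOSFET saturation current:
    Id = kp * (W/L) * (Vgs - Vt)**2, valid for Vgs > Vt."""
    if Vgs <= Vt:
        return 0.0  # device is off below threshold
    return kp * (W / L) * (Vgs - Vt) ** 2

# Illustrative numbers only:
kp = 50e-6          # A/V^2, process transconductance parameter
W, L = 1e-6, 0.5e-6 # channel width and length in metres
Vt = 0.7            # threshold voltage in volts

# A higher supply (hence higher Vgs) gives more drive current,
# and halving L doubles it, as the comment argues:
Id_33 = drain_current(kp, W, L, 3.3, Vt)
Id_25 = drain_current(kp, W, L, 2.5, Vt)
Id_shortL = drain_current(kp, W, L / 2, 3.3, Vt)
```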
  • And by then who would those five kings be? Bill Gates, Steve Jobs, Paul Allen, Scott McNealy and Robert F. Young perhaps?
  • Whenever processors are said to hit a wall just around the corner there is a new technology fixing the problem. History just keeps repeating itself. History just keeps repeating itself. History just...

  • If you were to count the number of bacteria inside you that were helping keep you alive at the moment, you'd be there for a while. No man is an island; we all rely to huge extents on other organisms; they just happen to have evolved over billennia instead of being made in a lab.
    If your objection is to the amount of man-made foreign matter in the body, then however many nanoprobes you have inside you they probably won't have the combined mass of a pacemaker.
    I can't state for certain they'll never be affected by a virus, but these probes will be built up atom by atom. Everyone knows this, but few appreciate exactly what that means. They won't have a power failure unless they're stuck in a vacuum. They won't break down, because they'll be built using chemical bonds that are vewy, vewy stable and as unbreakable as they can be. They'll also be highly resistant to bonding with foreign molecules.
  • Moore's law is `slowing down'. It says that computing power will double in 12 to 18 months, and we're coming closer and closer to 18 months... My guess is, unless something really clever happens soon, we will be at 18 1/2 months... But then, try to run some old hand-optimized 8088 assembly on anything greater than a 386, and watch it fly... Games is what's driving stuff forward... And MS Office is coming after :-)

    /* Steinar */
  • by orangesquid ( 79734 ) <orangesquid@nOspaM.yahoo.com> on Saturday November 20, 1999 @02:57AM (#1516763) Homepage Journal
    I built a liquid transistor once... the recipe was the following:
    3 jars
    lemon juice
    water
    salt
    copper wire
    aluminum foil
    I used the copper wire tips and aluminum foil pieces as electrodes. I put saltwater in the center jar and lemon juice in the left and right jars. The emitter was a copper electrode in the right jar. There was a wire running from an aluminum electrode in the right jar to a copper electrode in the middle jar. The base was a copper electrode in the middle jar. There was another copper electrode in the middle jar attached to an aluminum electrode in the left jar. The collector was a copper electrode in the left jar.
    I figured using different electrodes in such substances should create diode-like behavior - especially because copper+aluminum+electrolyte=very sucky battery.
    After connecting a voltage supply across the emitter and base (positive on emitter I think), I connected another voltage supply's positive (I think) to emitter and put an ohmmeter between the collector and that supply's negative (methinks). I noted the resistance. I then removed the first voltage supply, and noted the resistance again. Not much different.
    I swapped the polarity of the voltage supplies and repeated the experiment. 70k ohms when the supply was connected, 120k when it wasn't.
    Woohoo! I had a cheap, ineffective giant liquid transistor. Completely impractical ;-)
    The only problem with such liquid transistors (besides them being not very efficient): the liquids tend to pick up fun little things like fungi. I had the three jars (still full) in a box in my basement a little while ago... One day, as I was cleaning up, I looked in the box... ewww...
    Yet again, another nearly completely useless device pioneered by the infamous Matt Williams ;-)
    If anyone repeats the experiment with even a small bit of success (try substituting other metals - it might make it more effective) please e-mail me at orangesquid@hotmail.com - I'd love to hear about it.

    --theorangesquid
    I actually made one that was smaller than that. Unfortunately, I put it in a pocket that things fall out of easily. I think it's either in my couch or somewhere in my car seat... *crunch* oh damn, never mind.


    If you think you know what the hell is really going on you're probably full of shit.
    But you forget Gates' law: every time processor speed doubles, program speed halves.

  • by Baldrson ( 78598 ) on Saturday November 20, 1999 @05:31AM (#1516767) Homepage Journal
    Not being a semiconductor expert, I have to wonder:
    Will these transistors have heat problems?

    Typically, electrically insulating materials are also heat insulating. The vertical geometry of these transistors will result in longer heat flow paths. And to make matters worse, since the heat flow paths roughly correspond to the electrical flow paths, might there be a need for greater voltage for a given clock-rate? That would mean more power going into the transistor, and therefore more heat, as well as more difficulty in getting the heat out.

    Maybe it's time for the chip designers to start looking at Seymour Cray's liquid convective cooling based on Fluorinert [trueamerica.com].

    Cray had a heat dissipation problem driven by a different kind of path length that affects all systems as they get smaller. In his systems, geometric optimization was a first priority, because the distance between semiconductors was starting to dominate the clock rates. So he had to shrink all three dimensions. Even without increasing the number of transistors, or the power or heat-conduction problems per transistor, he still had a cube law working against heat dissipation. So he started forcing an inert liquid developed by 3M, called Fluorinert, between the circuit boards to suck the heat out. It turns out Fluorinert has an exceptional product of heat conductivity times electrical insulation.

    An interesting side-light: Fluorinert was developed to be a blood substitute. Perhaps the semiconductor systems are acquiring a circulatory system.

  • Just when you think Moore's Law is about to reach the wall, something happens. You'd think that by now we'd know better.
    This is also a rather cool discovery. It's overcome the problem with light etching and electron leaking in one go; that's impressive.
    The best has yet to come, because with current technologies and knowledge we can reach 1.6 GHz. But after that, the chip would melt anyway due to the high temperature; 0.1 micron is the limit. Then we could multiply the number of processors, but that's just a short-term solution.

    I've heard that HAL Computer Systems [hal.com] (working for Fujitsu) is working on a prediction method to find data before it is calculated, to increase speed. A little like Alan Turing's oracle machine (the machine O).

    There is also the MAJC processor [sun.com] from Sun, which could be used to build a neural network (like a machine B) to give "smarter" data processing.

    Finally, there are also Quantum Computers [qubit.org], which could change everything, but that's almost science fiction.

    All this information has been mainly taken from an article in LOGIN:

  • My interpretation is that although they can make small transistors, it is not scalable to large-scale integration. They make the comment that the technique is similar to painting with a big brush and then peeling up the paint to get a thin line.
    This sounds like a flawed way of doing bulk transistors.
    What did I miss? How are they dealing with that?
    I guess what we will end up with is 50 nm transistors interfacing to molecular memory arrays self-assembled in between nano-columns of organic LEDs in our flexible full-color displays. Need more memory or a processing upgrade? Just add another display panel to your wall. Every square inch an extra 500 terabytes and teraflops. With the distributed RF networking that will be in within the next couple of years, it will be unthinkable to need more.
    Moore's law will finally become redundant, because the machines will become more powerful than we can use. Will humanity make good masters or good slaves?

    cya, Andrew...

  • Ummm, the MAJC processor by Sun, Merced, and the not-here-yet chip from Transmeta all use predictive(sp?) logic to speed up execution, if that's what you're referring to. Been there, done that....
    --
  • As far as I know, current P6 and K7 cores do this as well... You need branch prediction for pipelining (sp?) to work. Otherwise the pipeline would be flushed whenever there was a conditional jump in the code.

    -

  • An interesting side-light: Fluorinert was developed to be a blood substitute. Perhaps the semiconductor systems are acquiring a circulatory system.


    Or this is the first step in the evolution of The Borg. :)

    -Restil
  • ..to do faster and better components. I don't really think we will ever hit a brick wall as everyone says. There might be a small problem, but, problems are there to be solved... Right?
  • by Anonymous Coward
    Yet another journalist not researching his story thoroughly. The article seems to describe a field-effect transistor, then the last paragraph says "The transistor was invented by three scientists at the Bell Laboratories in 1947."

    Uh, the BIPOLAR transistor was invented at Bell Labs in 1947. Field-effect transistors were invented about 20 years earlier.
  • Smaller transistor sizes don't necessarily make the chip run faster. The problem is, as the transistors get smaller, the wires get smaller as well. Smaller wires have higher resistance and more capacitive coupling with surrounding wires, and so at a certain point the delays in the wires start to dominate and the delays in the chip actually increase as the chip is miniaturized.

    This is a big problem in the industry today. Copper wires help because they have lower resistance than Al interconnects. People are also researching optical interconnects.
  • Does this require significant modifications to existing fab lines, or worse, entirely new ones?

    My understanding of fabs is that they're generally completely replaced every few years, anyway, as process size shrinks and other parts of the manufacturing process improve.

    Besides, it's not like you're going to see Athlons using this technology before Christmas. It's going to be many years (if ever) before this is actually feasible for mass production, and I'm sure the manufacturers will have plenty of time to build new fabs.

    /peter

  • ..so when does Lucent get a logo beside the articles, rather than "News" or "Hardware"?
  • by Anonymous Coward
    That the first field-effect transistors used exactly this 'vertical gate' design (by aluminizing a piece of glass to make the gate, as I recall)
  • This isn't quite true. For the most part, smaller transistors are always faster.

    Resistance goes up for smaller lines:
    R ~ L/(W*H)
    but capacitance goes down:
    C ~ L*W/D
    since the wires are shorter (L) and thinner (W), although they are closer together (D).

    The real problem is that in order to get an increase in performance, the industry has relied on making bigger chips *AND* smaller transistors.

    If you make your lines longer and thinner, R goes up and C goes up, and you're in trouble.
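    The scaling relations above can be sanity-checked numerically. A quick sketch in Python, where the material constants and geometry are arbitrary units; only the ratios matter:

```python
def wire_rc(L, W, H, D, rho=1.0, eps=1.0):
    """Proportional wire resistance and capacitance from the scaling
    relations R ~ L/(W*H) and C ~ L*W/D. rho and eps are lumped
    material constants in arbitrary units."""
    R = rho * L / (W * H)
    C = eps * L * W / D
    return R, C, R * C  # the RC product sets the wire delay

# Shrink every dimension by 2x: R doubles, C halves, RC is unchanged.
R1, C1, RC1 = wire_rc(L=100.0, W=1.0, H=1.0, D=1.0)
R2, C2, RC2 = wire_rc(L=50.0, W=0.5, H=0.5, D=0.5)

# Keep the wire the same length but make it thinner: RC climbs.
R3, C3, RC3 = wire_rc(L=100.0, W=0.5, H=0.5, D=0.5)
```

    The numbers bear out the comment: scale L, W, H, and D together and a wire's delay stays put, but a long wire that only gets thinner gets slower.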




  • ...about any of these new discoveries until I see some information on production costs.

    The smallest, fastest, lightest, smartest, and overall best (insert thing here) doesn't do jack for the industry and the end consumer if we can't mass-produce it at low, low costs. Sure, these scientists in their labs can do wonders, but can that process be scaled up and automated?

    J.J.
  • I've seen phototransistor arrays (think of them as an alternative technology to CCDs) made of vertical bipolar transistors being used in the late 1980s.

    What's interesting to me is their use as general purpose transistors in dense arrays.

    I seem to recall that at least many years ago (call it eight), one of the problems with implementing structures like that on existing fabrication lines was that those lines didn't control the thickness of the gate oxide (or was it something else?) sufficiently for the (analog) tasks my coworkers were using it for. That's probably less critical in digital applications, but I wonder: does this require significant modifications to existing fab lines, or worse, entirely new ones?

  • He's right, you know. At the current rate even nanotech will run out of options. Shall we move to using quarks and quantum physics to keep up with Moore's Law? I can just see it now - you're playing a cool-ass game of Quake, and all of a sudden *bzzzt*. Oh crap, my quantum computer just "tunnelled" to another random location in the universe! Well... at least Intel would like a computer that did that - brings new meaning to the words "forced obsolescence", doesn't it? =)
    --
  • I'm most likely wrong, in which case correct me, but isn't capacitance proportional to the area of the two "plates" and inversely proportional to their distance apart? Doesn't that mean that if the scale is halved, the surface areas are quartered and the distance is halved, so the capacitance goes up by a factor of two from the separation and down by a factor of four from the area, making the real capacitance go down linearly with respect to the scale?

    The resistance would (if I'm not forgetting :-) then go up by a factor of four, since it's inversely proportional to the cross-sectional area.

    Isn't the real problem quantum tunneling, though?
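    The arithmetic above checks out against the parallel-plate formula C = eps*A/d. A tiny sketch in Python, in arbitrary units:

```python
def parallel_plate_capacitance(area, gap, eps=1.0):
    """Parallel-plate model: C = eps * area / gap (arbitrary units)."""
    return eps * area / gap

C_full = parallel_plate_capacitance(area=1.0, gap=1.0)
# Halve every linear dimension: plate area quarters, gap halves.
C_half = parallel_plate_capacitance(area=0.25, gap=0.5)
# C_half / C_full == 0.5: capacitance shrinks linearly with scale.
```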

  • >That's the reason why modern digital chips are powered by ever decreasing voltage.

    I doubt many CMOS transistors are anywhere near their breakdown regime. Reduced power consumption and heat dissipation is the primary reason for reducing voltage.

    > And then the complete transistors only occupy a small area of the chips, the most of it is used for wiring.

    Wiring is done in the oxide layer above the substrate. AFAIK, most of the substrate is populated with transistors. Otherwise, you increase your line length unnecessarily, which is a disaster for gate delays.

    >It will be fast, but the number of transistors on a chip will not increase as a result.

    I would disagree with this. Since the size of the active region can be smaller than your photolithography line width, the transistor can be made quite a bit smaller, since you can use your smallest photolith line width on a relatively larger feature.
  • So, after reading the article, I discovered that this vertical transistor is a FET (field-effect transistor). At my uni, by "transistor" we implicitly meant bipolar transistors (which don't use an electric field). And I remember this technology being discussed in my microelectronics classes. It's not a new idea; they just perfected the planar technology to reduce the size of the layers.




  • Yeah, I'll believe we're breaking the atomic barrier to increase processor speeds when I see Gillette come out with a new shaving cream named "Quantum Foam". ;-)

  • .. to build gates. I mean, they're very small, right?
  • This is pure SWAG (scientific wild-assed guessing), but decreasing capacitance would be a good thing. As capacitance decreases, the time to charge that capacitor decreases also. For memory circuits this would decrease the time to write and read bits. I believe in CPUs there are problems with stray capacitance which, if reduced, seems like it would allow the chip to be operated at a higher frequency.
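    The charge-time intuition can be made concrete with the standard RC charging law, t = R*C*ln(1/(1-x)), for charging a node to a fraction x of the supply. A small sketch in Python; the component values are made up:

```python
import math

def charge_time(R, C, fraction=0.9):
    """Time for an RC node to charge to `fraction` of the supply:
    t = R * C * ln(1 / (1 - fraction)). Units: ohms * farads = seconds."""
    return R * C * math.log(1.0 / (1.0 - fraction))

t_big   = charge_time(R=1e3, C=10e-15)  # driving a 10 fF load
t_small = charge_time(R=1e3, C=5e-15)   # same driver, capacitance halved
# Halving C halves the charge time, so the node can be
# switched roughly twice as fast.
```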
  • Smaller transistor sizes don't necessarily make the chip run faster. The problem is, as the transistors get smaller, the wires get smaller as well. Smaller wires have higher resistance and more capacitive coupling with surrounding wires, and so at a certain point the delays in the wires start to dominate and the delays in the chip actually increase as the chip is miniaturized.

    It doesn't quite work that way.

    These new transistors have much smaller gate thickness, which means less capacitance, which means higher switching speeds, but also a lower breakdown voltage, which means less power. Making the transistors smaller also allows them to be crowded together, which reduces wire length (not necessarily wire thickness, as you think). So the net effect is positive: everything gets smaller and faster and uses less power.

    Also these new transistors can be layered one over the other, which is practically impossible with today's horizontal transistors. This also reduces average interconnect lengths. And contrary to what some posters have been surmising, the reduced power requirements balance the greater heat dissipation, so no new cooling techniques are needed.

  • A more informative link is here [bell-labs.com].
  • Moore's law for individual processors may someday reach a limit imposed by harsh physical reality, but scalability will not reach such a limit until a really long time after that. We'll just go back to having computers as big as rooms, and only the five richest kings of Europe will be able to own them. Impressive

  • by rde ( 17364 )
    At the current rate even nanotech will run out of options.
    If this new transistor can be scaled (and the chip boys 'n' girls have about ten years before it's vital), then it should push Moore's law's validity into the 2020s or 2030s. By then there's bound to be some manner of nanotech that'll improve matters; that or optical computing, or quantum computing, or...

    If we truly do achieve a nano solution, then no-one'll really care about Moore's law. If you can use assemblers to build the chips, a much greater precision at existing scales will be achievable, so it'll be less vital to get smaller and fuzzier.
    Of course, once true nanotech is online, we'll all be less worried about the speed of Quake XI and more about adjusting to living for a couple of thousand years. I hope.
  • These new transistors have much smaller gate thickness, which means less capacitance, which means higher switching speeds, but also a lower breakdown voltage, which means less power. Making the transistors smaller also allows them to be crowded together, which reduces wire length (not necessarily wire thickness, as you think). So the net effect is positive: everything gets smaller and faster and uses less power.

    Wire thickness has to be reduced to match the transistor sizes, otherwise you don't get any gains in density. Crowding the wires together also increases the capacitance between wires.

    Shorter wires don't cause many problems, but longer wires have large resistances and delays (and longer wires will still exist, since the purpose is to squeeze more transistors into the same area).
  • Of course, once true nanotech is online, we'll all be less worried about the speed of Quake XI and more about adjusting to living for a couple of thousand years. I hope.



    But then, it won't be _us_ living, it'll be the nanotech. By that time, for us to be able to live that long, it'll be because, if all you nanotech yappers are right, we'll have so darned many nano-thingies inside of us, healing us, taking care of us - the list of claimed accomplishments goes on. Therefore, it is not the humans who have triumphed in prolonging our lives; it is the humans who have triumphed in prolonging the operation of robots, decreasing their size, and prolonging the existence of our flesh. That's not living, folks, not in my mind. (I realize we all don't think like I do, so... moderate this down if you wish.) But I feel that, by having all of those nano-bots inside of us, we're just making ourselves weaker, more dependent, and more vulnerable. What if we get a virus in our nano-bots? What then? Eh? And that's not just it... what if they have a power failure, a BSOD, a kernel panic, or dump a core?


    just my little piece of grey matter
