IBM Develops Transistor Capable of 210GHz

Homer Simpson writes: "IBM will announce on Monday that it has developed the world's fastest silicon transistor. They claim to have refined their silicon-germanium chip-manufacturing technology to produce transistors that are far thinner than existing ones. This will allow information to travel faster while using a lot less power. The new transistor can operate at 210GHz (yikes!) using a measly milliamp of electrical current (80% faster than today's technology while using half the power)." Reader Geheimnis points out an announcement on IBM's site about this as well.
This discussion has been archived. No new comments can be posted.

IBM develops world's fastest transistor

Comments Filter:
  • by Anonymous Coward
    Sorry, but a milliamp is NOT measly. Imagine enough transistors to make a processor (10s of millions), and suddenly you've got in excess of 10,000 amps being drawn. Hoho, you think you've got power grid problems in CA now!
  • by Anonymous Coward
    So a 20 million transistor P4 would draw 20,000 Amps? Or 66 kilowatts at 3.3V!? How much power did the Univac I use? Me thinks this mayhaps is not your general purpose transistor. aj
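A quick sanity check of the numbers in these replies, as a sketch (the 20-million-transistor, 1 mA/device, and 3.3 V figures are the commenter's assumptions):

```python
# Back-of-envelope check of the comment's figures (assumed values:
# 20 million transistors at 1 mA each, 3.3 V supply).
transistors = 20_000_000
current_per_device = 1e-3  # amps
supply_voltage = 3.3       # volts

total_current = transistors * current_per_device  # amps
total_power = total_current * supply_voltage      # watts

print(f"{total_current:,.0f} A, {total_power / 1000:.0f} kW")  # 20,000 A, 66 kW
```

Which is indeed the 20,000 A / 66 kW the commenter arrives at, and why this is not a general-purpose logic transistor.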
  • by Anonymous Coward
    The C|Net article is incorrect - hey, they aren't analog designers or semiconductor weenies, so I don't blame them.

    The 210GHz figure is the transistor's FT value, which is the frequency at which gain goes to unity or 0dB. The more meaningful number is the Fmax or 3dB frequency, which is the frequency at which gain drops to half power, i.e. -3dB below the maximum gain. The original IBM release lists 100 GHz as the switching speed - ergo Fmax.

    Fmax, and not FT, is the maximum usable operating frequency for digital or analog design purposes. You must have some gain to do useful work!! For a single or dominant pole device, FT = FMax * GainMax.
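Plugging the article's numbers into the single/dominant-pole relation above gives a rough sketch (the relation itself is the commenter's approximation, not a measured figure):

```python
# Single/dominant-pole approximation from the comment: FT = Fmax * GainMax.
f_t = 210e9    # unity-gain (0 dB) frequency from the IBM announcement, Hz
f_max = 100e9  # switching speed from IBM's release, Hz

gain_max = f_t / f_max
print(f"implied maximum gain: {gain_max:.1f}")  # implied maximum gain: 2.1
```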

    SiGe technology may actually deliver what the GaAs-heads have promised GaAs would do for the last 30 years!

  • by Anonymous Coward
    The 200GHz refers to the transition frequency (fT) for a single device, which is the fastest you can switch it without starting to degrade the signal.

    When you integrate a bunch of devices into a useful circuit, you don't come near to fT due to interconnect parasitics, process variation, and the margin that is allowed in order to make the process manufacturable.

    More importantly, you'll need a hell of a refrigerator to cool a CPU made with these - 1mA/device might not be bad if you're building an 80GHz mux/demux, you aren't going to go and build a Power4 RISC with these tomorrow.

  • by Anonymous Coward
    With this I can make my fridge turn off and on real quick!
  • by Eccles ( 932 ) on Monday June 25, 2001 @08:15AM (#129706) Journal
    Imagine a Beowolf[Sic] Cluster of THESE!!! Umm, wouldn't that be called a CPU?
  • From the link:

    "This would permit feature-length movies to be stored as high-resolution digital video on a single compact disk." :)
  • That should keep the chip designers busy for a while playing catch up... :)
  • Imaging a beowulf cluster of these! :-)

  • but don't ya think they might be holding back to do the fancy 300, 333, 375, 466 MHz etc. trick we have seen processors do in the past. What do you think?

    I think that if they try to put out a processor today at 300, 333, 375, or 466 MHz, they'd probably get laughed at a lot. Unless it's some sooper-low power thing for a Palm Pilot or something.

    Seriously, GHz is a lot harder to do than MHz, and there's not an infinite amount of room left for smaller feature sizes and faster clock speeds. Sooner or later it's going to come down to a move to on-chip or on-board parallelism. I don't see another 3 orders of magnitude in clock speed being pulled out over the next couple of decades the way it was done over the last two.


    - jon
  • apple really needs altivec (because if you are going to be rendering the entire screen in PDF then having a powerful SIMD vector processing unit becomes really really helpful..)

    Probably not as useful as running at about twice the clock speed, and having 8 CPUs on the module (two per IC, 4 ICs in the module). Plus the PDF render isn't that slow on a (no altivec) G3. I mean it's not stunningly fast or anything, but it isn't painfully slow...

    Does IBM just kind of keep doing its thing with the POWER line

    FYI the POWER4 is a PowerPC, it implements all the PPC opcodes (like all the single precision FP that is optional in the POWER ISA). That doesn't mean the POWER4 does AltiVec though. IBM hasn't commented one way or the other on AltiVec.

    I expect if the POWER4 has altivec Apple would be insane to not use it in at least their high end "server" level Mac. Even without altivec, it would seem to be a good idea to use it anyway... even if the price is so high few are sold it would still let Apple have one machine that beats many or all Intel CPUs out there. Right now, they could use that.

  • IBM *has* commented on Altivec, and they don't like it

    Let me clarify, they haven't commented on AltiVec for the POWER4. One could assume that because they commented negatively about AltiVec in the past that they wouldn't put it in the new CPU, but that's still not the same as hearing them say it isn't in the new CPU.

    They haven't given out a lot of information on the POWER4, so it isn't surprising they haven't said anything about AltiVec on the POWER4.

    I think you are probably right, there is a pretty big chance there is no AltiVec in the POWER4, but I do think there is a non-trivial chance that it does have AltiVec. After all they have a huge transistor budget, and it would for sure make the market for the POWER4 bigger (the RS/6000, and AS/400 markets are quite small, even compared to the Macintosh market, maybe even compared to the high end Mac market)

    Actually a few more google searches, and I found IBM has licensed AltiVec [macworld.com], no news on whether they are going to put it in the POWER4 though.

    Multiple-core technology is fantastic and i can't imagine why motorola isn't using it yet.

    It requires a huge transistor budget (double for two cores), and provides less gain than SMT. It is way simpler to design (and I assume debug) than SMT. FYI, the POWER4 may be both multi-core and SMT; some MPR reports implied that it was, nothing explicitly stated it though, and nothing later denied it. It will be interesting to find out about both SMT and AltiVec. And real speed numbers.

  • To start selling PowerPC processors without AltiVec now would be pretty confusing (not that they've never confused anybody before -- witness their recent hardware naming conventions).

    Consumers won't really care if a machine is fast because it has AltiVec, or because it is clocked fast, or because it uses a temporal distortion field to speed up time inside the case.

    They just care that the thing is fast. So if a POWER4 Mac ran faster than a G4 Mac (including AltiVec-optimised code) they will be happy. If it runs some stuff faster, and other stuff slower, well, I would expect them to be quite unhappy, esp. given how costly the POWER4 is likely to be.

    It might also cause big problems for all those software vendors (possibly Apple included) porting software with the "if it's a G4 it's got AltiVec" rule in mind.

    I doubt it. For one, the POWER4 isn't a G4. IBM already makes G4s, so this would have to be a G5 or something. Second, the method Motorola recommends is looking at bit 6 of the MSR: if set, AltiVec is supported; if clear, there is no AltiVec. Apple doesn't seem to promote a method (or it takes more than 5 minutes to find it); however, if your application uses bundles it can be set up to runtime link with its own altivec or non-altivec libs depending on whether the CPU supports AltiVec or not (there are other machine-dependent runtime link things it can do).

    I would be shocked if IBM set bit six of the MSR wrong.
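One way the bit test described above might be sketched (assumptions: the "bit 6" position comes from the comment, not from the architecture manuals, and PowerPC numbers bits from the most-significant end, so bit 6 of a 32-bit register is mask 1 << 25 in conventional LSB-first terms; also note user code generally can't read the MSR directly, so this only illustrates the arithmetic):

```python
def has_altivec(msr: int, reg_width: int = 32) -> bool:
    """Return True if the AltiVec-available bit of an MSR value is set.

    PowerPC numbers bits from the most-significant end, so the
    comment's "bit 6" is bit (reg_width - 1 - 6) counting from the LSB.
    The exact bit position is an assumption taken from the comment.
    """
    return bool(msr & (1 << (reg_width - 1 - 6)))

print(has_altivec(1 << 25))  # True
print(has_altivec(0))        # False
```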

  • I expect if the POWER4 has altivec Apple would be insane to not use it in at least their high end "server" level Mac. Even without altivec, it would seem to be a good idea to use it anyway... even if the price is so high few are sold it would still let Apple have one machine that beats many or all Intel CPUs out there. Right now, they could use that.
    But their mantra recently has been that "once more software is optimized for the G4" everything will be much better for everybody. Their definition of optimizing for the G4, in the consumer press at least, has been including AltiVec support. To start selling PowerPC processors without AltiVec now would be pretty confusing (not that they've never confused anybody before -- witness their recent hardware naming conventions). It might also cause big problems for all those software vendors (possibly Apple included) porting software with the "if it's a G4 it's got AltiVec" rule in mind.
    --
  • Let's see... 42 Million transistors in the Pentium 4, for example. 1 milliamp PER transistor... That works out to 42000 amps for a 210 GHz CPU... Piping hot! Did they mean Microamps (that would put you at 42 Amps - high, but possible, especially at a low voltage)?
  • 210 GHz is nice and all, but what does it overclock to? Hell, a bigger fan and a bit of thermal gel and I bet I could get it to 280 GHz!
  • I read it yesterday in the shower. No, I won't explain how I managed to keep the magazine dry (the solution contains a lot of quantum theory and wax paper and other complicated stuff).

    And vaseline. The solution also contains vaseline. But don't switch hands. Vaseline will dissolve the soybean based ink on the lovely pictures of the big machines.
  • A couple things that are important to note about SiGe when I was talking to some colleagues today...

    1. As far as I know, you can't make CMOS out of this stuff. This process makes BJTs (Bipolar Junction Transistors).

    2. This is primarily for fiber optics, as they say.
  • IBM *has* commented on Altivec, and they don't like it. Here, look. [google.com] The POWER4 does not have Altivec. Apple is not going to use anything that doesn't have Altivec, and IBM is not going to use anything that does. Any IBM-manufactured chips containing Altivec are going to be made solely because Apple ordered them. Which could happen. IBM has, in the past, manufactured G4s for Motorola and K7s for AMD when Motorola and AMD were unable to meet demand.


    I think.


    Multiple-core technology is fantastic and i can't imagine why motorola isn't using it yet.

  • Is the apple/PPC line going to be getting some of this new-IBM-technology goodness?

    I'm really still incredibly confused by what's going on in the weird little Apple-IBM-Motorola triumvirate that is the PPC platform, but nearest I can gather, Apple has been mostly having Motorola manufacture its chips exclusively for some reason, possibly (but probably not) that IBM doesn't like altivec [techweb.com] and Apple really needs altivec (because if you are going to be rendering the entire screen in PDF [apple.com] then having a powerful SIMD vector processing unit becomes really really helpful..). And according to some rather shady sources, Motorola has been having horrible problems with manufacturing [appleinsider.com] -- which, if these shady sources are to be believed, can explain [appleinsider.com] why the MHz levels of the chips Apple has been using have stayed constant for a really long time now, and why there aren't enough 733 MHz chips around to make dual 733 machines [apple.com] possible. So Apple and Motorola are just kind of wandering off to the side and getting lost while IBM sits alone in the corner and does really cool things with the POWER4 chips.

    But, umm, this is just my interpretation of things based on the scant material I have read. I wish I knew how accurate I was.

    Umm, but anyway, my question is this: what happens in the little PPC world from here? Does IBM just kind of keep doing its thing with the POWER line and toss Apple/Motorola some patent licenses from time to time while Apple/Motorola stay alone and try to get their shit together, or are IBM's new metal technologies going to convince Apple to start moving toward them? Or.. umm.. I don't even know what I'm saying anymore. OK, just, either way, will we be seeing improvements in Apple's PPC line anytime soon, and does this new IBM announcement mean anything to Apple customers? Or is this all irrelevant, because this is just one of those things where the technology is not ready to move outside the lab, and implementation of this technology in production chips is five years away at best or something?

    Oh dear.. Uhhh.. I'm pretty sure just about everything I've said in this incoherent post has been wrong, but I'm posting it anyway in hopes that someone who is actually informed could step in and explain what is happening. That would be really cool :)

    All I know is, I drool at IBM's chip technologies.. all of them, pretty much.

  • by crow ( 16139 ) on Monday June 25, 2001 @07:03AM (#129721) Homepage Journal
    Note that this is only an 80% improvement. That means current transistors are over 100GHz. So why aren't processors this fast? Simple: The clock rate of a processor is the time it takes to do one step in the pipeline, not the time for a single transistor to operate. By breaking the pipeline into more steps, they can get the number of transistors that have to operate in sequence per step down. Based on the numbers here, it would look like we're at about 100 for a PIII, and less for a PIV.
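The comment's logic-depth estimate can be sketched with its own round numbers (both values are ballpark assumptions, not measurements):

```python
# Rough estimate from the comment: how many transistor delays fit in
# one pipeline stage (assumed round numbers, not measured values).
transistor_switching_freq = 100e9  # Hz, ballpark for then-current transistors
processor_clock = 1e9              # Hz, roughly a PIII-class clock rate

delays_per_stage = transistor_switching_freq / processor_clock
print(delays_per_stage)  # 100.0
```

Deeper pipelines reduce that quotient per stage, which is how clock rates climb without faster transistors.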
  • the purpose for this transistor is primarily for embedded devices and cellular phones, not for desktops. So don't get your panties into a bunch over this announcement
  • No. He was infected by Intel. They even branded the "World's Most Common Warning Label" into his head, 'Intel Inside'. The doctors who removed the crayon probably removed the Intel Infection as well.

    --
  • You are talking about the package or you actually pried it open to look at the die?
  • One would think that Ge transistors are the past. Remember, they only work up to 90degC, while silicon can work up to 150degC.

    Could they ever produce a military SiGe transistor?

    If it consumes half the power and produces half the heat, that is not enough to make faster chips. If the ambient is 30degC, the heat flow between 150degC and 30degC (a 120degC difference) is twice as big as between 90degC and 30degC (a 60degC difference).

    However, if they interleave Ge and Si layers so that the doped Ge regions cannot diffuse and spread under high temperatures, they have a chance to make faster ICs in the same temperature range as with a monolithic Si substrate.

    I doubt they will make it any time soon. SiGe will not make faster processors for a long, long time.
  • The RISC (reduced instruction set chip) approach to processors takes the 'KISS' (Keep It Simple Stupid) approach. This means that instead of having to deal with huge amounts of complexity, as in a CISC (complex instruction set chip), time can be better spent improving the processor. Such improvements include the speed and technology of the chip.

    Now AltiVec, IMHO, is very much a temporary solution, since by the time processor clock speed hits 1 GHz the advantage will be minimal. Even if it is only a remote possibility that Apple will see the light and chuck out AltiVec now, they would be mad to keep it when the PPC goes 64-bit.

    Another thing worth mentioning is that graphics cards probably duplicate what AltiVec is trying to achieve anyhow.

    Don't get me wrong, I am a content Mac user, though there are certain realities that must be faced. Now if I could only replace my current G4 processor with something from IBM ;-)
  • What about us web developers? All your Recordset's are belong to us! mwaahaaahaaaaha(smack...)
  • That's just insane. What is holding back processors these days? Are they slowing them down so we buy more on the step-ups?

    What is the max of the P4 and Athlon 4s?

    Will MHz always be the speed meter? What about ops per second and such? We could rate a processor by engineering and breakthroughs rather than MHz.

    Who started the MHz war? Wasn't the AMD 386 DX40 the first machine that sparked it all off? After all, with Wintel a 386SX16 with 4 megs of RAM was the shiznat. Then we came up with DX2, DX4 and whatever else.

    Hundreds of gigahertz.. that's just nuts

  • So I get it! How about we reconfigure the solar matrix in parallel for endothermic propulsion!

    We'll do that!

    :-)

  • by Matt2000 ( 29624 ) on Monday June 25, 2001 @07:00AM (#129730) Homepage

    IBM is an awesome hardware/research company. Too often people get down on them for their poor marketing or whatever, but their research is second to none in my eyes.

    I still remember when GMR hard drive technology was announced by them and the press release said "Drives of up to 100 gigabytes would be possible with the technology" and I didn't believe it, just seemed like more vapour and claims.

    Then those drives just started showing up around 8 months later. I hope the same thing happens with this new fab. tech, even if it's only for improved power consumption to begin with.
  • High-speed circuits like modern CPUs do indeed use BJTs, but obviously not exclusively. BJTs have the disadvantage of a low input impedance, which is what causes higher power consumption. However, they offer far higher drive strengths, so they are nice for driving highly loaded nets like pads, buses or clocks.

    Also, extremely high-speed dynamic logic will often use BJTs in its driver stage.

    However, one should make sure not to confuse the max frequency of a transistor with the maximum clock rate supported by a technology/design. While related, they are certainly not the same (and the latter depends on a whole additional set of factors like capacitive loading, threshold voltages, logic depth etc.)
  • It also looks like bullsh*t to me.

    So, exactly, what part set off your BS detectors? (Not that you have any reason to believe this, but I do transistor-level CMOS design for reasonably fast circuits -- about 3 Gb/s I/O -- and it's all pretty standard stuff.)
  • So how many f-sub-tau GHz do current 1GHz clock speed processors have? It's difficult to see how much of an improvement this is if you're not an engineer.

    That depends a good bit on how much power you're willing to burn. The transistors in current-generation processors (~1.5 GHz clocks) run with Ft in the 10-20 GHz range. The harder you push the limits, the more stages you need to get the same result so the power goes up from architecture, and you also get some second-order effects from having both P- and N-channel devices on at the same time. IOW, process improvements alone won't help the power all that much except by reducing capacitance and supply levels.

    Keep in mind that the IBM devices are bipolar, not CMOS. They operate with a continuous current draw of 1 mA per device. A current-generation processor has tens of millions of devices, which would add up to thousands of amps. You had better have one HEROIC thermal design to deal with that little problem. That, or only use those suckers very judiciously.
  • Seriously, I know we are a bunch of nerds around here, but I think the only people who understand this are the engineers. Did anybody try running this through BabelFish :)

    Oooooohh!

    Somebody, PLEASE?

  • by overshoot ( 39700 ) on Monday June 25, 2001 @07:26AM (#129735)
    At the speeds these little sweeties work, there is no such thing as digital. (Actually, there's no such thing as digital, period, but at low speeds you can squint and pretend.) The 210 GHz number is what's called the Ft (that's f-sub-tau) or unity-gain crossover frequency. At 210, the device takes as much power in as out, so an amplifier chain loses everything above that.

    In practice, you need quite a bit more than unity gain. So you operate the thing down in the 50 GHz region as a front-end amplifier and demultiplexer for OC-768 fiber interfaces, which are currently ruled by indium-phosphide devices. IBM is the only outfit with a SiGe process that plays in this game. The advantage isn't in running the whole bag at outrageous frequencies, it's in running the front-end and back-end at the high rate and being able to put low-power, low-speed CMOS (low-speed=3.125 GHz or so) on the same chip.

    HTH.
  • How cute.

    You seem to ignore that many people that bash MS for bloated code,

    1) are too busy 2) aren't being paid / don't have the finances 3) don't have hundreds/thousands of the 'best' programmers from around the world working in parallel

    Now, if you ignore these facts, your argument might be somewhat convincing. I'm not sure if you've ever written any code before, but contrary to pop-culture belief, a single person can NOT write enough functional, non-bloated code in a year to produce 10 megs worth of binaries. Let alone the 800 megs or whatever the minimal Windows install is nowadays.

    I'm sure there would be a lot of people that would complain a lot less if MS products would do the same thing every time. Even crash in the same manner. But they don't. I'm constantly fixing MS products at work simply because the user's config changed, or this or that file is 'missing'... And it's not like these people have Admin access on their computers, either. (Heaven forbid!)

    -------
    Caimlas

  • Why is Homer Simpson submitting articles about IBM breakthroughs? He works for Intel.

    No, Homer Simpson was the CEO of Compu-Global-Hyper-Mega-Net but his company was bought out by Bill Gates which makes Homer an employee of Microsoft, not Intel.
  • 130.
  • You missed something. The unity-gain bandwidth is based upon a capacitive effect which acts as a low-pass filter, limiting the gain as the frequency through the device increases. When you hit the unity-gain bandwidth, your device will no longer amplify an input signal. Since most digital logic requires amplification at every logic element, this is what happens here. The problem is really not that you need the signal to travel between a lot of transistors (there are ways around this if you're clever) but just the simple fact that you usually need a gain of at least 4 or 5 to be able to use the circuit as a logic element.

    The point is that the faster a transistor is rated to go, the smaller these capacitances, and the higher the unity-gain frequency.
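The single-pole low-pass behavior described above can be sketched as follows (the DC gain of 100 and 2.1 GHz corner frequency are purely illustrative values chosen so the unity-gain point lands near 210 GHz):

```python
import math

def gain(f: float, gain0: float, f3db: float) -> float:
    # Single-pole low-pass model: gain is flat below f3db and rolls off
    # ~20 dB/decade above it; unity gain is reached near gain0 * f3db.
    return gain0 / math.sqrt(1 + (f / f3db) ** 2)

# Illustrative numbers: a DC gain of 100 and a 2.1 GHz corner put the
# unity-gain frequency near 210 GHz.
print(round(gain(210e9, 100, 2.1e9), 3))  # ≈ 1.0
```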

  • I just wanted to point out here that they are talking about Bipolar transistors, the kind used in amplifier design. Specifically, these types of chips are designed to target the upper end of the Microwave band, since that band is fairly difficult to transmit in (due to the high clock speeds required). By the sounds of it, they are targeting this at the same market being fed by Gallium-Arsenide circuits now, which are already in this speed bracket, but suffer a few very severe drawbacks (high cost, low production rates, low gain, etc).

    These have nothing to do with the MOSFET transistors used in conventional VLSI CMOS digital circuits, like processors. Although they could make something like the old TTL or the fast ECL with these transistors, I seriously doubt you will see much in this vein any time soon. Although both TTL and ECL are FAR faster than CMOS made at the same feature size, their high power dissipation makes heat-sinking a severe problem for VLSI. Bipolar ECL did find some limited use in the LN2-cooled supercomputer market (they have essentially no dynamic power dissipation, but a fairly high static power dissipation, so they get *QUITE* hot).

    On the other hand, this is quite likely to end up in all sorts of fun communication and signal-processing applications.

  • Take apart a couple of core routers and you'll find stuff like that.
  • by jmccay ( 70985 ) on Monday June 25, 2001 @07:10AM (#129742) Journal
    I was over at ABCNews.com [abcnews.com] and saw a similar article [go.com]. In this article, "Jeff Welser, manager of high performance semiconductor technology at IBM, says chips using such strained silicon transistors will be 35 percent faster than chips using similar-sized, non-strained transistors." "Strained transistors" is what they are calling the slimmer transistors.
    IBM says they can roll these out by 2003 because new assembly lines wouldn't be needed to actually put the transistors on the chips. Apparently new technology is needed for the underlying silicon-germanium that "stretches" the transistors by forcing them to conform to its shape.

    The article also talks about Intel having created a smaller transistor. It's 20 nanometers in size, and that's "500 times narrower than a strand of human hair, or about 30 percent smaller than the current fastest transistors being researched". They say you could fit up to 1 billion on a chip the size of a P4. According to the article, a P4 has 42 million transistors on it. This technology will take longer because Intel has only been able to make a few of these on a chip. They are estimating 2007.
  • Everyone here is talking about processors and video chips using these new transistors. Unfortunately, according to the article:

    The first chips to use the new technology will likely be networking chips that help guide data on and off of high-speed fiber-optic lines.

    The high speed chips are really needed in networking just to push data out onto the buses, higher bandwidth means fewer bus lines. I don't think you will see this technology until Pentium X or so. If you think about it, Intel wants to sell their 5 Gig chips before they sell their 210 Gig chips, it makes better business sense.

    Now, can someone build a PCB to get the signals to the optical transceivers?
  • You're quite off. I get around 10 years, so 2011.
  • And to think that I bought a 1.4 GHz computer THIS MORNING, and it's out of date in 3 hours.

  • by frankie ( 91710 ) on Monday June 25, 2001 @12:15PM (#129746) Journal

    Caveat Lector: I am not a chip designer; this is probably wrong. But if overshoot won't explain himself, someone else ought to try.

    At the speeds these little sweeties work, there is no such thing as digital.

    Transistors don't really send 1s and 0s, they allow current to pass through (or not). As you flip them on and off more quickly, things that used to look like square waves (digital) begin to show their sine wave (analog) roots.

    unity-gain crossover frequency. At 210, the device takes as much power in as out

    If I'm reading this correctly, F_tau seems to indicate how fast you can "overclock" an individual transistor and still get a usable signal out of it.

    In practice, you need quite a bit more than unity gain. So you operate the thing down in the 50 GHz region

    Since real-world chips contain lots of transistors in a row, you need to slow it down enough that you can get a usable signal all the way from one end to the other.

    front-end amplifier and demultiplexer for OC-768 fiber interfaces, which are currently ruled by indium-phosphide devices.

    OC-768 is a honking large optical backbone. It runs a whole lot of frequencies all at once (multiplex). In order to convert it back to something like plain-old-ethernet, you need to split the signal up again.

    Indium-Phosphide is just a different compound to make chips from, like Silicon-Germanium or Gallium-Arsenide. Apparently InP is the current industry standard for demultiplexers.

    IBM is the only outfit with a SiGe process that plays in this game.

    By now the rest should make more sense. Assuming I didn't totally screw up. Overshoot?

  • As far as I am aware transistors draw significantly more current whilst switching state. Whether it is "on" or "off" is pretty irrelevant.
  • Is the apple/PPC line going to be getting some of this new-IBM-technology goodness?

    Pro'lly not.

    IMHE (in my humble estimation), IBM could easily vault desktop-based PowerPC systems ahead of Intel and AMD within 6 months, if they so chose. They can rebuild it. They have the technology. They can make it stronger... faster... better. (cue 70's TV theme music) For some reason, they choose not to.

    Even though they've been at odds with MS and Intel from time to time, IBM still wants to support the Wintel hegemony on the low-end, and is trying to beat them down on the high-end. Why not erode their financial base on the low-end as well? I don't know.

    My guess is that IBM is still smarting from the infamous deal made with Bill Gates two decades ago. The one that made Bill the richest man in the history of the planet and left IBM with a struggling PC business. Because Apple would be the one to benefit the most from IBM's desktop PPC line, and perhaps they don't want to risk being on the losing side of the equation again.

  • I may be remembering this number incorrectly, but I believe in most circuits electricity travels about 2.6 x 10^8 m/s. A little bit less than the speed of light.
  • This transistor is different from, say, the transistors in a Pentium 3. These are bipolar transistors and, unless these guys are wrong, it DOES use 1mA.

  • Great! Now I can stream better quality porn!
  • It was a joke, smartie. Radio Shack was never a reference in late-breaking technology, so I thought it would make for a wise-ass remark (which actually turned out to be lame and plain wrong).
  • So how many f-sub-tau GHz do current 1GHz clock speed processors have? It's difficult to see how much of an improvement this is if you're not an engineer.
  • You forgot "All your Base..." and its now-annoying variations: All your Megahertz? All your transistors? All your framerates? Pre-bitchslaps to them as well.
  • by SilverSun ( 114725 ) on Monday June 25, 2001 @07:04AM (#129755) Homepage
    No,

    the 80% is of course correct. IBM had already designed heterojunction bipolar transistors capable of up to 90GHz by the end of 1999.

    Cheers, Peter

  • Every year, I hear half a dozen earth shaking announcements -- half of them from IBM -- of major flat screen breakthroughs that should be hitting the shelves in "a couple of years". Yet year after year we go on with LCDs that just get a little better and a little cheaper each year.

    A major (say 2 orders of magnitude) change in flat screen benefit/cost ratio would have a huge impact. With the evolutionary changes we have now, we'll get there in a decade or so, but where did all those "revolutionary" improvements go?

  • Well, to be honest, electricity does not travel anywhere near the speed of light. If you look at the actual speed of the individual electrons in the wire, it is actually quite slow. What travels fast is the electric signal. You can think of the situation as a pipe full of water: if the pipe is full and you start pumping water into one end, water comes out almost instantaneously at the other end.

    So, let's assume that the electric signal goes at 2.6 x 10^8 meters/s (as the other poster mentioned). The amount of time you have given is about 4.76 x 10^-12 s. Multiply those together, and we get a distance of 1.24 x 10^-3 meters, or 1.24 millimeters. Given that the feature size of these transistors is less than 0.1 microns (micrometers), which is 1 x 10^-7 meters, you can see that the maximum distance the signal can travel in that short time is over 10,000 (10^4) times the distance it has to travel across the transistor. Thus it's quite possible to have a 210GHz transistor.


    Save a life. Eat more cheese
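The distance arithmetic in the comment above, as a quick sketch (the signal speed and feature size are the values the commenter assumes):

```python
signal_speed = 2.6e8      # m/s, signal propagation speed from the comment
clock_period = 1 / 210e9  # seconds per cycle at 210 GHz, about 4.76e-12 s

distance = signal_speed * clock_period  # metres travelled in one cycle
feature_size = 1e-7                     # 0.1 micron, from the comment

print(f"{distance * 1000:.2f} mm per cycle, "
      f"{distance / feature_size:,.0f}x the feature size")
```

The roughly four-orders-of-magnitude margin is why a single 0.1-micron device can plausibly switch at 210 GHz even though a whole chip cannot.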
  • That will never happen.. Intel and AMD would team up and sue M$ out of existence..
  • It's also largely due to the fact that processors are starting to require program targeting to get maximum performance; the G4's no hell until you start programming AltiVec, for example. The Pentium 4 is no different; it's no hell until you start programming in SSE2... then watch the fur fly.
  • You forget, oh grasshopper, that you speak of electron drift. The electrons will drift in a wire much more slowly than the electric and magnetic fields that we are (incorrectly, as EEs do) calling current. Let us not wander into the discussion of electron drift versus the actual arrival of electric voltage potential, as indeed that is the way to craploads of math that I will have to spew at you, and therefore insanity. Cheers.
  • Also:
    May 1994: Multilevel Optical Disks [ibm.com]

    No doubt about it, IBM sure knows how to do R&D.
  • From the article:

    The new transistor is capable of operating at 210GHz using just 1 milliamp of electrical current, or about 80 percent faster than current technology while using half as much power.

    Before anyone (probably too late) starts dreaming of a 210 GHz Pentium-class microprocessor, take that 1mA and multiply by 20 million (or some plausible number of transistors for a microprocessor). Even at 1 volt, that'd be 20000 watts... quite a bit more than you can get even on the 240 volt mains of a residential service. ... and it'd take one hell of a heatsink :)

    Now if you wanted to build some nice little amplifiers and use 1 mA bias currents, that'd make a lot more sense.

    It's pretty impressive technology, but it helps to keep these things in perspective and not equate the bandwidth of individual transistors with relatively large bias currents to the clock frequency of a Pentium-class microprocessor containing many millions of transistors.
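    The back-of-the-envelope power math above is easy to sanity-check in Python (the 20-million-transistor count and 1 V supply are the parent's assumptions, not IBM's specs):

```python
# Worst-case power estimate from the comment above, assuming every
# transistor draws the quoted 1 mA simultaneously (real designs bias
# far fewer devices at once).
current_per_transistor = 1e-3      # amps, the quoted figure
transistor_count = 20_000_000      # assumed, plausible for a 2001-era CPU
supply_voltage = 1.0               # volts, assumed

total_current = current_per_transistor * transistor_count
total_power = total_current * supply_voltage

print(f"{total_current:.0f} A")        # 20000 A
print(f"{total_power / 1000:.0f} kW")  # 20 kW
```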

  • There are two basic types of transistors: bipolar junction transistors (BJT) and metal-oxide-semiconductor field effect transistors (MOSFET). The methods of operation of BJTs and MOSFETs are quite different, and both have existed for more than 40 years.

    In general BJTs are faster but consume more power so are used in chips containing a few hundred to several thousand transistors. Check out the current draw per transistor - 1mA. If you had 10 million of these things in your Athlon, you would need a power supply capable of delivering 10kA! This IBM technology is a form of BJT technology so you won't be seeing it in your CPUs.

    CPUs use MOS technology and it will be a while before a MOSFET clocks 210 GHz.
  • So I get it! How about we reconfigure the solar matrix in parallel for endothermic propulsion!

    You would surely need a transpacitor [accpc.com] to do that ?
  • With all that fire power X Emacs will open in under a day... right?
    ---
  • First - What is this going to mean to the current processor and bus speeds that are already seemingly "too fast" at 1.7 GHz?

    Second - Will they do the same with this new discovery as they did with MCA? It's mine and no one else can have any of it.

    Third - What will this do to their stock price tomorrow? (Ah, my sweet portfolio)

  • Now maybe I can actually max out a GeForce3 card
  • One of the main things holding back faster processors has been the heat/energy issue. 1GHz chips today need big fans and good circulation. If you even got anywhere near 210GHz using today's methods, it would likely melt a hole right out of your case, and then through your floor. Another issue has been size. A new Pentium 4 or AMD chip is pretty big. And as they get faster, they have of course been getting bigger and bigger, which just compounds the heat issue even more.
    New developments like this are what is going to take computing power forward. New designs and manufacturing processes have to be developed, and hopefully this is proof that chipmakers understand this.
  • The 286 was made with a different manufacturing process, and is a whole different story. I agree that the chips were bigger then. I was talking about using current fabrication techniques... there is no way to make them faster without making them bigger. The current chips are a large leap from that 286; all I'm saying is that these new chips will be a huge leap as well, because they will allow chip sizes to get smaller again (so we can keep making them bigger and hotter while making them faster). What I said still stands: if you crank up today's processors to anywhere near the speed IBM claims, they would melt, just as the 286 would surely melt if you cranked it up to about 500 MHz.
  • Speed good.....
    Sleep Bad.
    Caffeine Good
    Sleep Bad.

    While this is cool, it will be a long long time before we ever see any use out of this speed
  • I'm always a little annoyed when people trot out the "code bloat" firehorse. For one, have you seen the sheer number of things we do with our PCs lately? Don't you think at least a few of those things require a fair amount of low-level code services?
  • Well... it will take a while before this appears in products. IBM probably had the building blocks for a 1GHz processor back in 1989. It takes a while before science and the marketplace converge to make a product. Sometimes the technology is there, but it's too expensive. Sometimes the technology is there, but there is no product or market. More often than not, the market exists and the technology does not... :-)
  • Ah, a 200Ghz PowerPC 'SiGe'-4. Now if I can just hold off for five or so years...
  • I remember doing characterization on standard cells a couple years ago. Really interesting job, I'd find the propagation times and edge rates of inverters, nand gates, flip flops etc etc... All this was done in software using circuit simulation tools.

    Anyway, right before I left I remember characterizing 0.18 micron stuff. The inverters had edge rates on the order of 50 picoseconds (50 x 10^-12!!). That's how long a logic transition (0 -> 1 or vice versa) takes. Propagation times took a little longer, but not much.

    Of course, this all was back in early '97. This stuff just gets faster and faster. Btw- if someone needs a characterization engineer in the Seattle area let me know. It was pretty cool to be on the cutting edge of this stuff.
  • someone wrote: Why is Homer Simpson submitting articles about IBM breakthroughs? He works for Intel.

    Anonymous coward answers: No, Homer Simpson was the CEO of Compu-Global-Hyper-Mega-Net but his company was bought out by Bill Gates which makes Homer an employee of Microsoft, not Intel.

    Not the way M$ does business :)

    "just connect this to..."
    BZZT.

  • Did you miss Alpha's dual cluster structure and P4's cross chip communication pipeline stages? You can pipeline everything... including simple transmission lines.
  • Probably RSFQ, which needs superconductivity to work. I'd hesitate to put that on the same level as CMOS transistor technology (which this article wasn't about in the first place, of course). HTMT does not stand for "Hyper Technology Management Threading".
  • Actually megahertz isn't really even a good speed indicator NOW. With Intel's Pentium 4, they adjusted the length of their pipeline so that the speed (in megahertz) is far faster than anything AMD offers, but the actual real-world performance is slower than even their original Pentium III at some tasks.

    Some say the cause of this can be traced back to the same problems that originally plagued the Pentium Pro (which is the same core design used for the Pentium II and Pentium III), and that once Intel ramps up production the scores will be different, but I'm not so sure. Needless to say, just because someone can grab a Pentium 4 2.0 GHz doesn't mean it's going to be better than an AMD Athlon running at 1.5 GHz.

    I think the best thing people can do for themselves, at least on a system level, is run benchmarks that test real-world performance, not static tests that throw a hundred thousand MOV instructions at a processor and base their results off that.

    I do think speeds are ramping up though-- Intel forced itself to ramp up I think just so they could reclaim the megahertz speed crown, but that didn't reclaim the actual performance speed crown; it was a purely symbolic gesture in my mind. I figure once Intel hits 2.5 or 3.0 GHz I'll jump on board, hopefully at THAT speed their new core design will be able to outpace an AMD processor of the same generation.

  • Do they still make PowerPC chips? It would be great in a Mac.
  • IBM is great at announcing things you can't expect to see in actual products for 5-10 years. However, some of this stuff always finds its way into classified electronics much faster. A friend was working at the University of Michigan back around 1982 on 100MHz processors for the DoD; consider that 4MHz would have been fast for consumers, or for anything with a single-chip CPU, at the time.

    --
    All your .sig are belong to us!

  • by ackthpt ( 218170 ) on Monday June 25, 2001 @07:11AM (#129781) Homepage Journal
    Faster transistors, denser chips, lower power, more logic, but to do what? Run WinXP? It would be remarkable to see a headline more like this:

    Microsoft Announces Tighter, Faster Code For New OS Product
    We've removed 50,000,000 lines of unnecessary code, says Gates, boots up much faster, uses less memory and far more stable, thanks to only focusing on needed code. Converting portions of Office from PL/1 and Cobol also noted as helpful...

    --
    All your .sig are belong to us!


  • So you operate the thing down in the 50 GHz region as a front-end amplifier and demultiplexer for OC-768 fiber interfaces, which are currently ruled by indium-phosphide devices.

    Interesting point, but you have completely overlooked the transwarp conduit in the flux capacitor.

    Seriously, I know we are a bunch of nerds around here, but I think the only people who understand this are the engineers. Did anybody try running this through BabelFish :)

  • xemacs opens in about .2 seconds (i.e. i type it and it shows up) on my 233MHz G3. maybe you're not typing xemacs right :)
  • by SVDave ( 231875 ) on Monday June 25, 2001 @06:58AM (#129784)
    Homer Simpson writes: "IBM will announce...
    Why is Homer Simpson submitting articles about IBM breakthroughs? He works for Intel.
  • How far can electricity go in 1/210,000,000,000 of a second?

    Trolls throughout history:

  • I've seen it on star trek, they have super mad faster processors than just 50ghz, so fast you can render real time VR that looks lifelike!

    "Pussy: You spend 9 months trying to get out of it, and the rest of your life trying to get back in..."
  • They *did* rewrite the OS from scratch, that's what NT is. And most code at Microsoft was C/assembler back when they did Word. I love people who bash MS software as 'bloated' but can't produce anything themselves which is nearly as functional. And when they add the features, it is just as 'bloated'. I'm not saying we can't berate MS for slow-ass products that suck, but it's usually because they are supporting a massive user base with diverse functionality needs, not because they 'wrote Word in BASIC' or some other asinine idea.
  • I don't have a P4 handy, but looking at the actual chip on an Athlon, and comparing it to the actual chip on, say, an ancient 286 (my first 'puter, I still have the parts), one is definitely larger than the other, and it isn't the Athlon that is larger...

  • Don't forget IBM outsources some of their R&D to universities; the hard drive stuff was done at my uni a few years ago, for pennies! Of course they fund a lot of stuff and only some of it makes big bucks, so there is nothing wrong with the unis making no money out of it, other than funding pure research of course.
  • Scientific American has this bitchin' article about supercomputing going on right now (I read it yesterday in the shower. No, I won't explain how I managed to keep the magazine dry -- the solution contains a lot of quantum theory and wax paper and other complicated stuff) -- part one this month and part two the next. Besides all the boring facts about trans-petaflop computing, there was a quickie about a transistor that hit 770 GHz... but it might have been optical (it was on the same page touting optical switching at 10 Gb/s for one cable vs. hundreds of strands of steel and megawatts of power for the same accomplishment). Either way, that's more than three times IBM's number. You should check out the article anyway (it's not online, but every slashdotter should have a subscription to SciAm), since it has a lot of information on new technologies in supercomputing, including Hyper Technology Management Threading (a nice way to maximize current silicon to avoid halt cycles on billion dollar computers). It also runs a quick comparison of current megaprocessing techniques, including nods to Beowulf clusters, distributed computing (e.g. SETI), and task-specialized chips.
  • thank you for cleaning up my mistakes...i didn't pay much attention to the details of the article (must have been shampooing).
  • Nah, Quake will easily eat up all that power. When I can get a 3D helmet with viewscreens that cover my entire visual field at, say, 160 fps, with full, accurate, multisource lighting, mirrors and reflections, and 6000x6000 resolution per eye, then I will be happy.

    At that point, 3D games and porn may merge into one.

  • Well, porn won't be at its best until someone develops an omni-scent production machine so you can inhale the action too.

  • The development that finally breaks Moore's Law? It seems like it could because it is a substantial increase in speed.
  • by Chakat ( 320875 ) on Monday June 25, 2001 @07:10AM (#129812) Homepage
    That's just insane. What is holding back processors these days? Are they slowing them down so we buy more on the step-ups?

    Everything else is holding these bad boys back. Inside the friendly confines of the CPU, the chips are speeding around at a GHz, but communications with the memory are in the hundreds of megahertz, and in a PC, the bus is pegged at 66MHz.

    What is the max of the P4 and Athlon 4s?

    IIRC, they're hoping these processors will get up to 5 GHz

    Will MHz always be the speed meter? What about ops per second and such? We could rate a processor by engineering and breakthroughs rather than MHz.

    MHz are sexy. They make it an easy sell and a good jumping off point. Things like ops/sec, etc can get tricky because certain ops take longer than others.

    Who started the MHz war? Wasn't the AMD 386 DX 40 the first MHz machine that sparked it all off? After all, with Wintel a 386SX-16 with 4 megs of RAM was the shiznat. Then we came up with DX2, DX4, and whatever else.

    To some extent it was Intel/AMD, to some it was Joe SixChip's lust for speed, to some it was code bloat. Personally, I feel it's time to stop worrying about processor speed for a little while and start worrying about memory speed. That's been an ever-increasing bottleneck, and our processors are starving for bits.

  • I have no idea what happened to the reply I posted to this earlier, but:

    80% improvement over current fastest technology. The technology in current CPUs is not the fastest technology. Different compounds, different die-size.

    There is no way that a P3 takes 100 cycles per operation. That would mean a 1GHz CPU would manage only 10 million operations per second, in the neighborhood of a 16MHz CPU. I think you've misunderstood something here.

  • by return 42 ( 459012 ) on Monday June 25, 2001 @07:28AM (#129820)
    I wonder how much this new transistor technology will speed up the various kinds of RAM we use. Offhand I'd guess that SRAM, being largely logic-based, will keep up, but DRAM, relying as it does on capacitors, may not. If so, the CPU/memory speed imbalance that new technologies like DDR are trying to address (no pun intended) will get a lot worse.
  • Forgive my naivete, but it would seem that IBM's latest SiGe transistors (the switches) pair perfectly with their copper-wiring chip technology (the interconnects) to produce a very formidable leap forward in the entire chip's functionality. The question, of course, is how IBM plans to make money from these innovations. Will they license the technology to the likes of Intel and Sun, or will they use it themselves 'til the patent runs dry? I don't know; what do you think?
  • There's no mention of operations per second. He's talking about the switching time of each transistor in a pipeline stage. If a module in a 1GHz CPU has 50 transistors in series from one end to the other, then each transistor must be able to switch in a time equivalent to 50GHz. (if it were alone and switching constantly)

    Those 50 transistor delays add up and create a bigger delay, so while they can do 50GHz, the CPU only does 1.

    That's why there's pipelining in the first place, to shorten your critical path through the circuit.
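    A toy calculation makes the parent's point concrete (the 50-transistor critical path and 50 GHz device speed are the parent's illustrative numbers, not figures for any real CPU):

```python
# Rough model of the parent's point: a chain of N gate delays inside one
# pipeline stage limits the clock, no matter how fast each device is alone.
transistor_speed_hz = 50e9     # assumed: each device could toggle at 50 GHz alone
gates_in_series = 50           # assumed depth of the critical path

gate_delay = 1 / transistor_speed_hz        # 20 ps per switch
stage_delay = gate_delay * gates_in_series  # 1 ns for the whole path
max_clock_hz = 1 / stage_delay

print(f"{max_clock_hz / 1e9:.0f} GHz")  # 1 GHz
```

    Pipelining attacks exactly this: splitting the 50-gate path into shorter stages shrinks `stage_delay` and lets the clock rise.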
  • by Unknown Bovine Group ( 462144 ) on Monday June 25, 2001 @06:55AM (#129827) Homepage
    Pre Bitchslaps to:

    First post mentioning 'A Beowulf cluster of those things'
    FPS in Quake if Nvidia uses IBM's new technology
    Person mentioning that faster CPUs are useless since everybody is just surfing AOL and writing email anyway.
    First person mentioning anything related to SETI@Home or distributed.net

    Have a m00 day.
