
IBM Creates World's Fastest Semiconductor Circuits

Posted by timothy
from the it's-only-a-model-shhh dept.
Todd Heidesch writes: "'IBM announced it has created the world's fastest semiconductor circuit, operating at speeds of over 110 GigaHertz (GHz) and processing an electrical signal in 4.3 trillionths of a second.' IBM expects the new technology to be pumping out 100 gigabit/sec network switching chips by the end of the year (on an optimistic schedule, I presume)." dr_zeus contributes a link to this Reuters article running on Wired (also fairly thin) on the release, writing: "Granted, this isn't a PC chip, but one wonders how long it will be before we hear 'dude, you've got a 110GHz Dell!'"
  • 100 GHz computing should hit in about 10 years.

    • Is this based on any reasonable estimate (Murphy's Law, etc.?), or is it just your own wild guess?
      • Murphy's Law is:

        "If anything can go wrong, it will"

        I don't think it applies here.

        (No, wait a sec, I think it does...)
      • Someone's already said something similar here [], but I know that people just sit and wait on their own comments being replied to sometimes, so I'll say it. Moore's law is the one that talks about the speed of advances in computing power. You can read all about it on Moore's [] web page. If we were going by Moore's law (assuming that the speed of a processor can increase uniformly with the number of transistors we can fit onto it), it would be 5 or 6 years until we hit 100GHz. Unfortunately, it will be more difficult to get the required number of transistors for a processor running at 100GHz than it is to make a NIC run at that speed. Also, since Moore talks about the number of transistors that can be used, who says we're not going to find a way to make it faster with fewer transistors before 10 years have passed? It's all just speculation.
    • by Anonymous Coward
      let's see: 2.5*(2^(x/1.5)) == 100
      => x = log2(40)*1.5 = 7.98 years. Pretty close.
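      The arithmetic in the comment above can be sketched in a few lines of Python (assumptions taken from the comment itself: 2.5GHz today, speed doubling every 1.5 years):

```python
import math

# Back-of-the-envelope Moore's-law estimate (assumed: 2.5 GHz today,
# speed doubling every 1.5 years, 100 GHz target).
def years_to_reach(target_ghz, current_ghz=2.5, doubling_years=1.5):
    # Solve current * 2**(x / doubling_years) == target for x.
    return math.log2(target_ghz / current_ghz) * doubling_years

print(f"{years_to_reach(100):.2f} years")  # ~7.98
```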
    • Hopefully Steven [] won't be around for the occasion.
    • Dogbert was hired as a consultant to name the company's brand new product. He said that he had a computer combine the best words from astronomy and technology. The result? "Uranus-Hertz." It was banned from at least one newspaper.
    • At the current rate it's been going, it'll be 128GHz in 9 years.
    • Taking Moore's law, 100GHz computing should be mainstream in 2007, i.e. 5 years from now.

      We might bump into the singularity before that time, though.

  • by ralian (127441)
    Dude, no Dell - I want a Beowulf cluster of those!! :)
  • Combine this technology with the recent advent of the broadband laser, and we will be seeing some fast networking, indeed.

    My partner, Sean, worked at Cisco for a while, before the economic implosion, and heard some things about 100Gbit networking projects in the works. It'll be really sweet to see this hit the market in a couple of years.

  • by Zo0ok (209803) on Monday February 25, 2002 @04:53PM (#3066826) Homepage
    Dude, your 110GHz Dell consumes 450kW, and requires its own diesel generator...
  • by Anonymous Coward on Monday February 25, 2002 @04:54PM (#3066830)
    The cover has 3 desktop machines 'burning rubber' and racing towards a finish line. The title is something like "Breaking the speed barrier, Intel 386 33MHz!"

    It's a neverending journey, this technology trap we find ourselves in.
  • I swear if I see that Dell commercial with that dipstick kid in it again I'm going to throw my shoe thru the tube!!!

    As to the super-fast network speeds, that's great, but will it ever make TW's RR service quit letting rooted Win2crap boxes probe my ports 24/7?

  • dude, you've got a 110GHz Dell!

    Sure, but what with Dell's "we'll only sell Intel chips" license agreements, it'll probably be running a Pentium 7 with a 1000-instruction pipeline and "predictive stalling," it'll cost $10,000 just for the processor, and it'll be slower than my Duron 750. :-)

    Windows 2000/XP stable? safe? secure? 5 lines of simple C code say otherwise! []
  • by mikeplokta (223052) on Monday February 25, 2002 @04:54PM (#3066840)
    At 110GHz, light travels less than 3mm in one clock cycle -- less than the width of the processor, I presume. And if it's accessing memory from a RAM chip 10cm away, it'll be waiting close to a hundred clock cycles to get anything back.
    • by Anonymous Coward
      At 110GHz, light travels less than 3mm in one clock cycle -- less than the width of the processor, I presume. And if it's accessing memory from a RAM chip 10cm away, it'll be waiting close to a hundred clock cycles to get anything back.
      That's okay - the CPU just plays Solitaire until the RAM gets back to it. (A little eensy weensy microscopic solitaire game.)
    • by taniwha (70410) on Monday February 25, 2002 @05:01PM (#3066893) Homepage Journal
      Actually, on Cu/Si waveguides (i.e. normal wires on a die) it's way slower than that.

      Even at today's high-end speeds (2GHz), 100 cycles (50ns) is fast for DRAM access. This is why keeping fast chips stoked these days requires heavy caching (L1/L2/even L3 on-chip is a must, and heading for 50%-plus of die area).

      • by SuiteSisterMary (123932) <> on Monday February 25, 2002 @05:08PM (#3066947) Journal
        Almost makes you wonder if we'll move away from the 'big CPU, big whack of RAM' model to the 'bunch of little bitty CPUs, each with their own whack of RAM, and they do their own thing' model.
        • Almost makes you wonder if we'll move away from the 'big CPU, big whack of RAM' model to the 'bunch of little bitty CPUs, each with their own whack of RAM, and they do their own thing' model.

          Yeah, I've been wondering how long it'll take before increase in frequency becomes so difficult that people finally realise that fine grained parallelism is the only way to go. The vast majority of time consuming tasks could be made parallel. It's not as if parallel algorithms are a black art or anything - there's a lot of material on the subject available.

          It's a shame dual processor systems cost so much more - otherwise people like me would grab one to try out ideas and write some parallel code. But because there's hardly any parallel code out there, nobody buys dual processor systems, and so no parallel code gets written :)

          Maybe I'll save up some more...

          • Why do you need 2 procs to do parallel programming? That's what threads and preemptive multitasking are for. I know it's not 'true' multithreading, since only one thread is being executed at once, but it's fairly close.

            BTW, try programming a compiler that can take your code and make it run in parallel (procs/threads/whatever); it's really hard to do. All programs share data, and the degree of sharing determines how parallel they can be.

            Try out the fourth engine that was mentioned a while ago; it's supposed to be blisteringly fast, but it requires you to write a bunch of parallel programs for it to work.
          • It's not as if parallel algorithms are a black art or anything - there's a lot of material on the subject available.

            True. And very simple patterns like the Worker Pool / Job pattern make it quite accessible. It's just an issue of exposure. As soon as on-die multiple-CPU machines are mainstream, multithreaded programming will soon follow.
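            A minimal worker-pool sketch in Python, using only the stdlib; the squaring "job" is a hypothetical stand-in for real work:

```python
import queue
import threading

# Worker Pool / Job pattern: independent jobs go into one queue,
# a fixed pool of workers drains it and pushes results into another.
def worker(jobs, results):
    while True:
        try:
            n = jobs.get_nowait()  # pull the next independent job
        except queue.Empty:
            return                 # queue drained, worker exits
        results.put((n, n * n))    # hypothetical "work": squaring

jobs, results = queue.Queue(), queue.Queue()
for n in range(8):
    jobs.put(n)

threads = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

done = sorted(results.get() for _ in range(8))
print(done)
```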

        • Processor fabrics (Score:3, Informative)

          by ka9dgx (72702)
          I had this idea back in 1982 when I was in college, and keep waiting for someone to actually do it. If you could have a 1024x1024 array of 1 bit processors (state machines, actually), you could pipe data through at the clock rate of the chip, which back then I thought could be 10 Mhz, using CMOS.

          I'd still like to have even that modest potential, which would allow MAC (Multiply ACcumulate) operations at 10MSPS, for digital radio projects, etc. If you decided you need a different feature, just reprogram the fabric.

          With today's technology, I don't see why you couldn't have a 4096x4096 grid with 4 way interconnects, running with at least a 1 GHz clock. This could do real time FFT, etc, straight from RF to anything. You could implement a crossbar switch in software for at least 32 streams (being conservative) at the clock rate, in software, with plenty of capacity to spare.

          Processor fabric is a powerful concept, but Intel will never implement it; it's too much of a threat to them and their von Neumann architecture. Someone else has to do it.


          • 1024x1024 array of 1 bit processors

            That's what Thinking Machines did in the 1980s, roughly. They eventually moved away from bit-serial processors to more conventional bit-parallel processors.

            The main reason why highly parallel machines have never gotten really popular is that, even aside from cost, they need special programming by humans. Parallel programming is a black art compared with serial programming. Compilers can't parallelize C worth a damn.

    • A hundred? More like tens of thousands, for a variety of reasons (checking caches, electrical signals propagating slower than light, signaling time). Already processors can wait hundreds of clocks for memory access.
    • While I can't say what the actual physical limits will be on a 110 GHz electron-based chip, I do know that calculations such as this are flawed. While the maximum speed of an electron may be the speed of light, the maximum speed of an electron through a circuit in a single direction is nowhere near that fast. Because of the applied voltage difference, electrons have a slight preference for one direction of travel, but 99% of their motion is still completely random. Electrons never shoot down a circuit in one direction at the speed of light.

    • Maybe with speeds like this, they could bring back a concept from the 1950's: the 1-bit serial computer. IIRC, these were popular for scientific computing because there was no native word size, and the numbers could be as large or small as needed.

      It seems like you could put together a CPU with performance rivaling current high-end chips using a tiny fraction of today's transistor count if all data paths are only 1 bit wide. The die size could be minuscule.

  • by Steveftoth (78419) on Monday February 25, 2002 @04:55PM (#3066842) Homepage
    The real benefit is in their ability to save power. From what IBM is saying, their chips can be run at, say, only 20-40GHz and consume a hundred times less power than a chip built with today's processes. So you'll be able to get the same or more processing power out of these chips for less energy.
    At 110GHz, a photon only moves 2.7mm, and the actual signal propagation is more like 2/3 the speed of that, so the signal can only travel about 1.8mm in a clock. So these chips are not going to be all that great for CPUs at 110GHz. Much better for signal processing, like in routers.
  • by essiescreet (553257) on Monday February 25, 2002 @04:55PM (#3066843)
    Now I can get rid of my pot-bellied stove and start using my PC, lower emissions, more heat, and a space saver!
  • 4.3 x 10-12 sec (Score:4, Informative)

    by crumbz (41803) <(<remove_spam>ju ... spam>> on Monday February 25, 2002 @04:55PM (#3066845) Homepage
    That means ~1.29mm at C (speed of light), so about 0.9mm in reality. Wow, those better be some short circuit traces!
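    The sums above check out (the 2/3-of-c figure for on-chip propagation is the rough estimate used in these comments, not a measured value):

```python
# How far a signal can travel in one 4.3 ps switching time.
c_mm_per_s = 3.0e8 * 1000   # speed of light, mm/s
t = 4.3e-12                 # switching time, s

d_vacuum = c_mm_per_s * t   # distance at c
d_chip = d_vacuum * 2 / 3   # rough on-chip estimate at 2/3 c
print(f"{d_vacuum:.2f} mm at c, {d_chip:.2f} mm on-chip")  # ~1.29, ~0.86
```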
  • by Reality Master 101 (179095) <.RealityMaster101. .at.> on Monday February 25, 2002 @04:55PM (#3066848) Homepage Journal

    And Steve Jobs will still claim that his 2 Ghz G6 is "twice as fast" on some obscure benchmark.

    • Actually, Steve Jobs will say to forget what they have been telling you about the MHz Myth, because the Myth is actually true.

      Steve Jobs will show off a 110GHz G6 and say just that.
    • Well, probably not a 2GHz Power Mac, but probably a 50GHz Power Mac. The PowerPC is generally 1/2 the clock speed of the Intel chip. But remember, MHz are only one component of speed; it depends on the application and how it uses the processor/memory. If a program does a ton of hard-drive access, you can have a 110THz computer and it will run just as fast as a 2GHz one, because it is just waiting for the hard drive.
    • by sharkey (16670) on Monday February 25, 2002 @05:11PM (#3066962)
      ...some obscure benchmark

      Probably the number of Bunny People ignited per second.
    • Infinitely faster.

      With Jobs' 2GHz G6, Aqua in OS X can render the minimize/maximize window genie effect in .0006 seconds, whereas it takes the 110Ghz PC running Windows an indefinite amount of time to do it.*

      Fine print: Windows lacks the genie effect, proving the PPC's superiority over the Pentium.
  • Now maybe I'm completely wrong here - if I am please correct me - but I got the impression that the higher the mega/gigahertz that your processor is running, the more power it needs. Would a 110 gigahertz computer send my electric bills sky high, or would this be a trivial concern?
  • How Long to Market (Score:2, Insightful)

    by ackthpt (218170)
    "Granted, this isn't a PC chip, but one wonders how long it will be before we hear 'dude, you've got a 110GHz Dell!'"

    What's the standard IBM response? 10 years to market, IIRC. That's the time taken to fully develop the technology, manufacture more than one transistor in a lab, and distribute it as part of a chip.

  • Call me stupid, but why can't they use the same material in PCs to increase the chip speed? Are there some limitations/incompatibilities other than the comparatively slow speeds of memory and I/O? (I guess we can all see why I never got very far in that EE major...)
    • First of all, I suspect that this technology is simply too expensive for consumer chips. Even if it could be done cheaply, I think they would need completely new fabrication facilities to make those chips, because the technology is based on a different compound. Fabrication facilities are not cheap, and companies like to use the current ones enough to make them profitable before jumping into new ones. I also suspect that these chips might need a lot of power. That may make them unusable for home computers.
    • Re:Stupid question (Score:3, Informative)

      by Cougar1 (256626)
      Call me stupid, but why can't they use the same material in PCs to increase the chip speed? Are there some limitations/incompatibilities other than the comparatively slow speeds of memory and I/O? (I guess we can all see why I never got very far in that EE major...)

      First of all, the IBM transistors are not MOSFETs, the tiny switches used in CPUs and other logic-based circuitry. They are instead heterojunction bipolar transistors (HBTs). HBTs are lightning fast and can be used as low-noise amplifiers for high-frequency signals, which makes them great for wireless and gigabit optical communication applications, but they are relatively large compared to MOSFETs and so are not really suitable for making CPUs. (Notice that the IBM press release never mentions CPU applications, but instead focuses on 100-gigabit optical communications networks.)

      Now, you may wonder why SiGe can't be used to make super-fast MOSFETs. The main problem is that MOSFETs require a dielectric, such as SiO2, to act as an insulating layer between the "gate" and the channel. However, attempting to grow a layer of SiO2 on SiGe results in separation of the Ge from the Si, ultimately causing device failure. Currently, people are trying to find ways to deposit new dielectrics with higher dielectric constants, such as ZrO2, to replace the SiO2. Once this is achieved it may be possible to put such a material onto SiGe to allow creation of a MOSFET using this technology. However, development of such high-k dielectric technology is probably 3-4 years away, and adaptation of this to SiGe will be a few more years beyond that, so don't expect SiGe-based CPUs anytime soon.

      One last thing. I don't understand why IBM gets all the press. Motorola announced 110 GHz HBTs [] last October. IBM is really not as far ahead of the curve as they would like you to believe.
      • IBM gets the press because they have the massive funds to advertise. And yes, getting impressively worded information to journalists is advertising. That's mostly a guess, but I suspect it's closer to the truth than we'd like to admit.
  • by DeadBugs (546475)
    Gigahertz don't matter!
    Look for AMD 110000+ XP Processors
  • by Edmund Blackadder (559735) on Monday February 25, 2002 @05:02PM (#3066903)

    When I was in engineering school (a couple of years ago), my professor declared that we are moving towards the end of the speed and size improvements of microchips, because soon the assumptions about Newtonian physics on which circuit design is based will stop being reliable.

    Usually you don't have to worry about quantum effects (electrons tunneling and such things), because there are enough electrons to statistically average out the quantum effects into the classical model.

    But when you increase frequency you usually have to decrease the size of the components (so transistors switch faster). And if you decrease size too much, you will not have enough electrons passing through your circuit to ensure the signal follows classical laws.

    Well, I guess the quantum barrier was a lot further away than I thought it was.

    Or maybe IBM is not decreasing the size of its transistors, but increasing voltages to make circuits switch faster.
    • >When I was in engineering school (a couple of years ago), my professor declared that we are moving towards the end of the speed and size improvements of microchips, because soon the assumptions about Newtonian physics on which circuit design is based will stop being reliable.

      And they've been saying that for over ten years.... and so far, it just hasn't happened.

      >Well, I guess the quantum barrier was a lot further away than I thought it was.

      That's the problem with those pundits - when they make those statements, they assume that no more technological advancements will be found. And even if that were right, there's still a lot of the current CPU-manufacturing process that can be tweaked and milked.

      Look at some of the recent technological findings - like copper interconnects and SOI. It took a couple of years before they even began to see introductory usage, and SOI is still far from being mainstream. And then again, a lot of chips are still being made on the 0.18 micron process. And to top it off, 0.10 and even 0.07-micron processes are in the works. Even without any new technological discoveries, the move to 0.07 micron SOI chips has the potential to last us through several more 18-month generations!

      So what about other technologies? There's another manufacturing trick being refined right now that allows the crevices between transistors to be made deeper than they are wide, which will allow us to pack even more transistors on a chip. And why stop with aluminum interconnects? Find a way to use silver. And there was a recent announcement about using stressed lattices to get even faster propagation. There are a lot of developments in the works. Yes, eventually we will hit a quantum limit - but I'm confident that it won't happen any time soon.

    • Fact is, the limit du jour has been overcome repeatedly in the past. Years ago, diffraction was perceived as a fundamental limit: we were never going to see sub-micron technology because the optics couldn't image without diffracting. The industry has not only passed sub-micron, it's now looking to go below 0.1 micron.

      Your prof is in good company when attempting to forecast the future...Rutherford didn't think anything would ever come of atomic energy.

  • I don't know what you're smoking, but I want the Dual 110GHz board overclocked to Dual 150GHz =)
  • Which I believe states that transistor count doubles every 18 months. And since I have noticed that the MHz count on Intel CPUs tends to follow the same line, we should be ready for this speed of CPU (given Intel's trend) in our desktops in another 8.25 years, better known as Q3 2010.
  • can't be far behind?
  • SiGe-Bipolar-CML (Score:2, Informative)

    by RichMan (8097)
    A SiGe process is a fudge to a bipolar technology process, which is an addition to a more standard digital process. This means the devices are not your standard digital-logic FET devices. The devices are most likely NPN vertical bipolar junction transistors, with the SiGe implant. The logic gates would then be standard current-mode logic (CML) structures. Technology Description []
  • Wires (Score:5, Informative)

    by vlad_petric (94134) on Monday February 25, 2002 @05:09PM (#3066952) Homepage
    Well, don't expect a 110GHz Pentium yet... The problem in microprocessor design is more and more the time it takes signals to propagate through wires, rather than the time to propagate through gates.

    Did you know that P4 has a couple of pipeline stages that do nothing but propagate signal? (yes, they pipelined the wire ...)

    The Raven

    • Yup - the basic problem is very simple: propagation is proportional to RC (the resistance times the capacitance). You have to charge up the capacitance of the wire (with respect to ground and the other wires around it), as well as the target gate(s), before you can measure the signal at the other end.

      That's why copper wires were important - they reduced R. C, on the other hand, is a different matter. For years and years (until about 3-4 years ago) no one cared about the capacitance of wires, because it was usually small compared with the capacitance of gates, and the ratios tended to scale down as device features scaled down - everything got faster together. Then, as wires started to get really thin, something called the 'edge effect' started to kick in. Basically, the wire is a flat plate, and its capacitance is proportional to its area (for fixed-width wires, that also means proportional to its length), plus an edge effect which is proportional to its perimeter. The edge effect was always there, but small: it changes roughly linearly when a chip is scaled, while the area component changes with the square of the scale. The area component has been getting smaller a lot faster than the edge-effect one, which now often dominates.

      To make matters worse, many of our CAD tools have until quite recently made statistical guesses about wire capacitance. That worked OK during things like synthesis (compiling to gates) when wire capacitance was a small part of the equation; now it does matter, and it means the whole structure of synthesis tools will have to change to perform combined synthesis and layout operations in order to create optimal circuits.
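      The area-versus-edge scaling argument above can be sketched with a toy model (the coefficients are arbitrary illustration values, not process data):

```python
# Toy model of the 'edge effect': total wire capacitance has an area
# term and a perimeter (edge) term.  Shrinking every dimension by a
# scale factor s shrinks the area term by s**2 but the edge term only
# by s, so the edge term eventually dominates.
def wire_capacitance(width, length, c_area=1.0, c_edge=0.1):
    area_term = c_area * width * length
    edge_term = c_edge * 2 * (width + length)
    return area_term, edge_term

for s in (1.0, 0.5, 0.25, 0.125):
    area_term, edge_term = wire_capacitance(1.0 * s, 100.0 * s)
    print(f"scale {s}: area {area_term:.2f}, edge {edge_term:.2f}")
```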

  • whooooooooooooOOOOOOOOOOOOooooooooshhh....
  • ...Because you won't be using clocks by then. At 110GHz the size of the chip die becomes a significant factor.

    More likely, you'll see it used in asynchronous computing - and that will take some time.

  • by MobyDisk (75490) on Monday February 25, 2002 @05:20PM (#3067023) Homepage
    The article does not clarify what is exactly running at 110GHz - it says a "circuit". Is it a single transistor? Or a series of transistors? Does that include wiring? It is a common misconception that a 110GHz transistor produces a 110GHz chip. A 110GHz transistor would likely produce a 1GHz chip.
    • by dhovis (303725) on Monday February 25, 2002 @05:44PM (#3067202)
      It's more than a transistor. The article at the NYTimes (I'm too lazy to link right now) said that IBM had previously announced a transistor which could switch at 260GHz, and this announcement is simply the next step: an entire circuit, but probably not a whole CPU.
      • It said the circuit was a ring counter, which is basically the fastest circuit you can build that actually does something. It's almost always the first circuit built with new process technology for exactly that reason.

        On the other hand, more complex circuits nearly always run significantly slower than ring counters. So if their ring counters are running at 110GHz, then some simple communication circuit might run at 30GHz, depending on details.

        Moral: as usual, never just blithely believe these press releases are implying what they seem to imply.
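        As a rough sketch of why ring counters/oscillators are so fast: the textbook relation for an odd ring of N inverters with per-stage delay t_pd is f = 1/(2*N*t_pd). The stage count and delay below are hypothetical example numbers, not IBM's published figures:

```python
# Textbook ring-oscillator relation: an odd chain of N inverters with
# per-stage delay t_pd oscillates at f = 1 / (2 * N * t_pd).
def ring_freq_ghz(n_stages, t_pd_ps):
    period_s = 2 * n_stages * t_pd_ps * 1e-12
    return 1.0 / period_s / 1e9

# Hypothetical example: 3 stages at 1.5 ps per stage.
print(f"{ring_freq_ghz(3, 1.5):.1f} GHz")  # ~111.1
```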

  • Am I the only one who'd like to smack the smirk off that Dell Kid's face?

    Maybe it's just the disgruntled ex-Dell employee in me . . .

    • The only one willing to hold back to that degree, perhaps. The rest of us will be satisfied, at the very least, with killing him in slow and excruciatingly painful ways. Sleep deprivation for a week by forcing him to watch his own ads would be a very good start.
  • by Anonymous Coward on Monday February 25, 2002 @05:32PM (#3067099)
    The fastest possible digital device is an inverter with a fanout of one. That's what IBM was measuring.

    In a general-purpose system, the clock rate is 15 to 20 times slower than that. This is because each pipeline stage runs things through several layers of devices, and the fanouts (and fan-ins) are larger than one. That is, each signal has to be sent to several places.

    And that's ignoring signal propagation. We used to ignore that, but recent designs don't. The EV8 Alpha was going to have caches of its registers. Yes, you read that right - each execution unit was going to keep a small cache containing copies of stuff it had recently sent to the registers.

    So IBM's announcement doesn't imply 100GHz system clock rates. More like 6 or 7 GHz.
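    The divide-down rule of thumb above works out like this:

```python
# Rule of thumb: a realistic system clock runs 15-20x slower than a
# fanout-of-one inverter chain.
inverter_ghz = 110.0
low, high = inverter_ghz / 20, inverter_ghz / 15
print(f"system clock roughly {low:.1f}-{high:.1f} GHz")  # ~5.5-7.3
```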
  • long it will be before we hear 'dude, you've got a 110GHz Dell!'

    I REALLY hope that that ad campaign will be long gone. Personally, I'd just as soon have 'Steve' die a slow horrible death when they release the 2.5GHz... and he's burned alive by the heatsink on national TV...

    But that's just me...
  • 110Ghz? (Score:3, Funny)

    by Fulcrum of Evil (560260) on Monday February 25, 2002 @05:45PM (#3067211)

    Oh well, time to ship all our old, slow, 1GHz machines to some riverbank in China

  • by 2nd Post! (213333) <gundbear&pacbell,net> on Monday February 25, 2002 @05:47PM (#3067226) Homepage
    But if it's an IBM process, it's more likely to see the light of day in an IBM processor; say, the Power series or PowerPC series.

    Meaning you may see this on a Mac first, rather than a Dell.
  • by swordboy (472941) on Monday February 25, 2002 @05:50PM (#3067242) Journal

    Correct me if I am wrong, but aren't we limited by the speed of electrons at some point in the near future? How far can an electron travel in one second? How does this affect die size?

    Sure, anyone can shake a stick 110 billion times per second but this doesn't mean that the stick will do anything productive.

    As a side note, I think it would be ironic and appropriate for Intel to name their 4.7GHz chip the "PentiumXT" as a funny play on the AthlonXP and the 1000-fold improvement over the 4.7MHz XT processors of yore.
    • Sure, anyone can shake a stick 110 billion times per second

      Wow. I knew /.ers were a bunch of wankers, but I didn't realise the level of accomplishment.
    • You're not limited by how fast an electron can move, exactly. In fact, electrons move VERY slowly in common situations - the drift velocity in home wiring can be well under a millimetre per *second*.

      When you shove a few extra electrons in one end of a wire, the charge pushes a few electrons that were already IN the wire down a little. And they push some down a little, and they push some down a little. Just like standing in a tight line at the movies, and shoving the guy in front of you - it takes a little bit of time to propagate all the way down.

      So the real question is "If I shove an electron in this end of the conductor, how long before I get one out the other end?" The two things that determine that are (1) the nature of the conductor, and (2) the length of the conductor. By keeping the amount of circuitry on the IC very, very small (which they assuredly did), the propagation time from one end to the other drops proportionately.

      However, even beyond just making the die smaller, they are working on making materials propagate the electrical charge more quickly - recently, someone (probably IBM) showed that by using a stressed crystalline lattice, they could significantly decrease the amount of time it took to propagate from one end to the other.
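      The textbook drift-velocity formula v = I/(n*A*q) makes the "electrons move very slowly" point concrete (example numbers only: 1A through a 1mm^2 copper wire):

```python
# Drift velocity of conduction electrons: v = I / (n * A * q).
n = 8.5e28     # free-electron density of copper, electrons/m^3
A = 1.0e-6     # cross-sectional area, m^2 (1 mm^2)
q = 1.6e-19    # electron charge, coulombs
I = 1.0        # current, amperes

v = I / (n * A * q)            # metres per second
print(f"{v * 1000:.3f} mm/s")  # a tiny fraction of a mm per second
```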

  • by Bobba Mos Fet (561744) on Monday February 25, 2002 @05:52PM (#3067253)
    This article is crap. If you're a real EE who knows about this stuff, please enlighten the rest of us by answering some questions:

    1. I'm a little confused. Did IBM demonstrate a networking chip that runs at 110 GHz? Or did they merely demonstrate a ring-oscillator-type circuit?

    2. I was under the impression that, to reach such high speeds, you need something like an HBT. Am I right? Is this circuit based on HBTs?

    3. If this circuit is based on HBTs, then why are people talking about Pentiums and Athlons? No way in hell you could implement a VLSI (or rather an ULSI) circuit with HBTs. Am I missing something?
    • by dmlb (123349) on Monday February 25, 2002 @07:04PM (#3067666) Homepage
      Okay, so I'm a real EE who designs in IBM's SiGe processes 5HP and 6HP.

      1) IBM did demonstrate a ring oscillator.

      2) These are IBM's latest SiGe HBT transistors, targeted for the "8HP" process. At present, 5HP and 6HP are in production and producing ICs - a lot of GSM cell phones will have IBM silicon in them. 7HP is coming on line.

      3) Yup - these processes are not directly for PC processors. The processes are targeted at RF, electro-optical, high-speed data, etc. They have SiGe transistors and CMOS. The SiGe is typically used as a front end - e.g. 10-gigabit multiplexers, laser drivers/demultiplexers, and diode detectors for optical links - and the CMOS does the back-end processing, e.g. line equalization.

      In addition, this is not the fastest semiconductor circuit. For many years people have been using semiconductors at terahertz for microwave stuff (granted, maybe not ring oscillators, but certainly parametric-active amplifiers). I worked on 94GHz radar systems over 10 years ago that used active semiconductors (IMPATT and Gunn GaAs oscillators).
    • 3. If this circuit is based on HBTs, then why are people talking about Pentiums and Athlons? No way in hell you could implement a VLSI (or rather an ULSI) circuit with HBTs. Am I missing something?

      Somebody needs to tell these guys [] that you can't do a VLSI design in HBTs. Google fails to find me the Exponential 705, a PowerPC using bipolar current-mode logic (CML). It didn't quite make it to market because the manufacturers couldn't bring the defect density down fast enough before IBM and Moto came in with the 750/740. Maybe the X705 proves your point? Or are you stating that VLSI in HBTs is not cheaply manufacturable?

      My personal opinion is that CMOS will not see us to the end of time. We may need to go BiCMOS, bipolar, or something else new and different. 2GHz CMOS processors are never in the DC state where CMOS saves power; they're 100% in the AC state, charging and discharging capacitive loads, and leaking current like sieves. If we went to bipolar CML, we'd see current draw like a 1.5GHz CMOS processor, whether we operated it at 500MHz or 5GHz.
  • they will remove the circuits that allow overclocking these babies! Just as Intel and AMD did! Can you imagine what THG or Anand will say...

    Seriously, people, methinks there's some sort of error there; somebody put in too many zeroes.

  • by MavEtJu (241979) <slashdot AT mavetju DOT org> on Monday February 25, 2002 @06:02PM (#3067312) Homepage
    Dear Diary,

    Life can be hard if you're a 110GHz computer. It wasn't until my 3.168x10E15th clockcycle that there was movement on the mouse and I had to present a password-requestor on the screen. That might look nice, but I had to wait several million clockcycles before I got all the needed information from the memory. Memory is sooo slow these days; I recall stories from previous generations that you could have the data the very next clockcycle after you had set the address! The downfall started long ago, but right now it's waiting, waiting, waiting.

    Fortunately the password typed was wrong, so I had the fun of producing a beep for 44 billion clockcycles. It sounds an impressive length of time, but I got bored after about twenty million clockcycli and I changed the tone-height a hertz or two. That'll teach them to make these stupid mistakes!

    Yeah... life is as good as you make of it. Hmm... an interrupt. Hold on. Back. Well, 80 clockcycles for that... Stupid optimized code. How much more before we get another timer-interrupt? Aaargh, still more than 80 billion clockcycles...
  • by dinotrac (18304) on Monday February 25, 2002 @06:28PM (#3067440) Journal
    I see lots of EE types checking in. I'm no EE, not even an E, though I've got a serious affection for DD's anytime I see them and my feet are EEEE wide.

    You guys who are saying this is impossible or impractical are in for some real egg on your face, though it's hard to say when.

    I managed to spirit one of these out of the IBM labs and they are fast! In fact, they're so fast that you've got to start them up tomorrow in order to do something today, which is ok, because, once they crank, they start delivering yesterday.

    Very cool. I just had Isaac Newton help me with a couple of things. By tomorrow, I should be looking up da Vinci, unless I get careless and work my way all the way back to Pythagoras.

    Of course, it's tricky staying one step behind the IBM guys. They came by for me yesterday, but I hadn't started up yet. They almost got me last month, but I gave 'em the slip the year before.
  • DDR22000000 (Score:2, Funny)

    by corren (559473)
    I'm so looking forward to upgrading my memory again...DDR 22000000 here we come!
  • Not that far fetched (Score:2, Interesting)

    by F.O.Dobbs (17317)
    According to Moore's Law we should hit 100 GHz in about 9 years (assuming 2GHz * 2^6).
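    That arithmetic checks out if you grant the comment's two assumptions: a 2 GHz starting point and a doubling every 18 months. A quick sketch of the compounding:

    ```python
    # Check the parent's estimate: starting from 2 GHz and doubling
    # every 18 months (the comment's assumptions), how long to ~100 GHz?
    base_ghz, target_ghz, months_per_doubling = 2.0, 100.0, 18

    ghz, months = base_ghz, 0
    while ghz < target_ghz:
        ghz *= 2                       # one Moore's-Law doubling
        months += months_per_doubling

    print(f"{ghz:.0f} GHz after {months / 12:.1f} years")
    # prints "128 GHz after 9.0 years"
    ```

    Six doublings (2^6) overshoot 100 GHz to 128 GHz, and six 18-month periods is the 9 years the comment cites.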

  • Sure, companies can produce these superfast chips for unbelievably high data transfer rates. But when will even a tiny fraction of this bandwidth ever reach into the ordinary home or small business? My understanding is that there now is enough fiber in the country that everyone could be wired for 100Mb/s ethernet, if we could somehow bridge the few miles.
  • At the Intel developer forum, Craig Barrett of Intel gave his predictions for what will happen with real-world CPUs in the next 15 years. Intel has historically come pretty close to fulfilling their "predictions", so I have at least a little confidence in this. The details are at the bottom of this [] page, but here are some tidbits:

    • 2 billion transistors
    • 30GHz clock frequencies
    • 10nm (0.01-micron) transistors
    • Processing power of 1 trillion instructions per second
    • Built on 300mm (12") wafers, eventually moving to 18" wafers and beyond
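    One way to sanity-check those figures against each other: 1 trillion instructions per second at a 30 GHz clock implies a sustained throughput of about 33 instructions per cycle, an enormous amount of instruction-level parallelism. A sketch using only the numbers quoted above:

    ```python
    # Divide the quoted instructions/second by the quoted cycles/second
    # to get the sustained instructions-per-cycle (IPC) implied by
    # Barrett's targets. Both inputs come from the list above.
    tips = 1e12       # 1 trillion instructions per second
    clock_hz = 30e9   # 30 GHz clock

    ipc = tips / clock_hz
    print(f"Implied sustained IPC: {ipc:.1f}")
    # prints "Implied sustained IPC: 33.3"
    ```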

  • Marketing Hype!!!! (Score:2, Interesting)

    by pagercam2 (533686)
    Intel and IBM and others gain headlines every few weeks with these new miracle technologies, and everyone (who isn't technical enough) assumes that means 100GHz Pentiums (or put your favorite processor here) will be out by Xmas. Transistors need to be at least 10-100 times faster as a ring oscillator than they need to be for a reliable gate (AND, XOR, NOT ....). Oscillations are sine waves; digital gates require sharp transitions, so you need to be a minimum of 10 times slower to get reliable timing characteristics. There is also a world of difference between getting one transistor to work in the lab in nice quiet conditions and getting 400 million transistors working together on a chip (ALU, MMU, L1 cache, L2 cache ......). By the time you factor all that in, plus balanced timing across the chip, a simple circuit at 100GHz yields a 10GHz processor. Intel supposedly has the P4 at 3GHz already, so just to stay competitive 10GHz will be required in a couple of years - no big deal - but it will certainly be a couple more years before 100GHz chips surface. The problem has been, and continues to be, that logic is getting faster while memory is only inching ahead; it's like having a dragster that can hit 300MPH but not having any roads without curves.
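    The ring-oscillator-versus-real-logic gap described above can be sketched numerically. The standard relation is f = 1 / (2·N·t_pd) for an N-stage ring with per-stage delay t_pd; the 10x derating is the comment's own rule of thumb, and the stage delay here is a hypothetical figure chosen to land near IBM's headline number:

    ```python
    # A ring oscillator of N inverters runs at f = 1 / (2 * N * t_pd),
    # where t_pd is one stage's propagation delay. Real logic paths are
    # many gate delays deep plus timing margin, so (per the parent's
    # rule of thumb) a usable clock is ~10x below the ring figure.

    def ring_osc_freq_hz(n_stages, t_pd_seconds):
        """Oscillation frequency of an n-stage inverter ring."""
        return 1.0 / (2 * n_stages * t_pd_seconds)

    t_pd = 1.5e-12                       # hypothetical 1.5 ps per stage
    f_ring = ring_osc_freq_hz(3, t_pd)   # minimal 3-stage ring
    f_usable = f_ring / 10               # parent's 10x derating

    print(f"ring oscillator: {f_ring / 1e9:.0f} GHz, "
          f"usable clock: ~{f_usable / 1e9:.0f} GHz")
    ```

    With a 1.5 ps stage delay the ring runs at about 111 GHz - right in the range of the headline - while the derated "processor" clock comes out around 11 GHz, matching the comment's 100GHz-circuit-yields-10GHz-processor claim.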
  • The non-silicon circuits have always been a factor of ten faster than CMOS, but also one to three orders of magnitude less dense. Many of the 1980s/1990s supercomputers put critical circuits in GaAs. The early RISC CPUs were just a few tens of thousands of transistors, with very simple instructions, making the compiler do the work. For example, hardware multiply was off-loaded to software. Perhaps IBM might offer a simple CPU in exchange for speed.
