Intel

Intel Claims 10GHz Transistor

Professional Wild-Eyed Visionary writes: "Intel has developed a new CMOS chip technology that cranks out 10GHz, 400 million transistors per chip, with each transistor only 3 atoms thick, previously thought impossible. See story at Dial Electronics." While this story's rather fluffy, it makes it sound like Intel is a few years ahead of its earlier projection of reaching 10GHz by 2005. Of course, maybe they meant integrated into actual chips ;) (in which case 2005 still sounds nice).

  • that sounds something like the futuristic, intergalactic internet that Orson Scott Card writes about in "Speaker for the Dead" and "Xenocide" (both sequels to "Ender's Game")

    I read them a long time ago, but I remember something about how it was quicker for the information to be manipulated on this vast, multinoded, energy-based network instead of the silicon circuitry of typical hardware.

    Who knows? Maybe Card's insights were more than just a really good read?

  • > That's why I said non-radioactive lead :) Just
    > purify it and get rid of all the isotopes that
    > do decay and you're left with something pretty
    > hard to get anything through.

    [Ignoring the fact that lead is electrically conductive, and would therefore make a really bad chip casing...]

    This would mean that the material that your computer was built from would have to have its isotopes separated using a centrifuge or a calutron. This would make your computer pound for pound the same price as weapons-grade uranium since the same process would have to be used (though for the opposite effect). Anybody know what the going rate for U238 is? I'm afraid my Sears catalog doesn't list it. ;-)
    --

  • As we've come to expect from Intel, they are doing a tremendous amount of R&D in their fab labs. Intel has some of the best fabrication technology in the business (perhaps only IBM's is better). But when's the last time you heard of Intel actually doing anything revolutionary with that fab technology? That would be 1975. Since then they've only made the same processor over and over again. Seems like such a shame to let all this technology go to waste producing the same processors they made when they launched the company... Will the new IA64 technology be any better? Maybe. Early returns from people working on the architecture seem to indicate that it's just as much of a bitch to work with as x86, but perhaps it's too early to tell. Of course, that technology is 2 years late already anyway.
  • The transistors feature structures as small as 30nm in size and three atomic layers thick. Smaller transistors are faster, with Intel claiming the device could eventually pave the way for science fiction technology such as instantaneous, real-time voice translation.

    I can see it now... They end up using the first test model of this marvelous processor at a UN conference, and as you said, the electrons jump into the wrong stream... The following ensues:

    George Bush: We welcome Russia into our bosom!

    Translation: We (electrons start jumping) are here to inform you that we are taking over your weakling country!

    *grins*

  • We normally think of cosmic rays as something that causes bit rot (though in practice it's alpha particles). In a chip that has transistors only 3 atoms thick, would this radiation cause physical damage instead? If so, we'd need to think about employing a lossy grid of gates, so that a few failures don't kill the processor.

    I found this:

    "Recently there has been increased emphasis on radiation effects in space due to an increasing number of satellite launches for commercial and defense systems. The natural space environment can damage electronics because of total-ionizing-dose and single-event effects (SEE). These are caused by the high energy electrons, protons, and heavy ions that are intrinsic to the space environment due to cosmic rays and the Earth's radiation belts. SEE due to cosmic rays and high-energy protons can lead to hard or soft errors in many types of devices and ICs. SEE are even possible in avionics and ground applications of advanced microelectronics with submicron feature sizes. SEE can cause failure at any point during a system's lifetime due to one inopportune particle strike, if circuits and systems are not suitably designed, tested, and built. Total dose effects accumulate over a system's lifetime, and can lead to premature performance degradation and system failure."

    There are some interesting links on this at the Sandia Labs website here [sandia.gov]. Some of these go to sites that are a bit encyclopedic.

  • But thinner vertically means it will be cooler and require a lot less power.

    -Chris
    ...More Powerful than Otto Preminger...
  • I was just doing some light math... 1 GHz is 1e9 cycles per second, and the speed of light is 3e8 m/s, so in a single clock cycle of the latest processors light can only travel 30 cm (less, I know, b/c it's electricity, but I'm obviously not an EE). 30 cm is fine for a single chip, but...

    When you get up to 10 GHz, the distance is only 3 cm, and aren't your typical Pentiums and Athlons about that big?

    So how fast can they realistically improve clock speeds before going back to the drawing board?
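    The arithmetic above is easy to check (a quick sketch, assuming signals travel at the full speed of light, which real on-chip wiring never does):

    ```python
    C = 3.0e8  # speed of light, m/s

    for freq_hz in (1e9, 10e9):
        cycle_s = 1.0 / freq_hz             # duration of one clock cycle
        distance_cm = C * cycle_s * 100.0   # how far light gets in that time
        print(f"{freq_hz / 1e9:g} GHz: {distance_cm:g} cm per cycle")
    ```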

  • It should be well shielded by a solid chunk of aluminum on one side (heatsink) and a solid chunk of copper on the other (motherboard ground plane). With a multi-layer motherboard (as they all are), the clock and high-speed connections can be sandwiched between ground plane layers to prevent glitches (crosstalk) and meet FCC requirements. These are not high-power transistors feeding a 5/8ths-wave antenna designed to radiate a strong signal like a phone. They will be poor competition for a phone, just as the 60 to 166 MHz stuff of old didn't screw up your FM radio and TV channels much. FYI: US TV VHF Low = 54-88 MHz (ch 2-6), FM band = 88-108 MHz, mid-band cable TV A-I (ch 14-36) = 108-156 MHz. Interference was slight and only in weak-signal areas.
  • Very rude to reply to self, but I ought to.
    The EPR gedanken experiment disproves _local_ hidden variables; there are non-local theories, too confusing for me, which have not been disproved. See the sci.physics FAQ for more info (there are sci.physics mirrors everywhere, but rtfm.mit.edu is a useful one to remember for access to any FAQ).

    FP.

    --
  • by commandant ( 208059 ) on Sunday March 04, 2001 @06:51AM (#386298)

    Actually, academics have created 100GHz transistors out of GaAs. 10GHz isn't that great compared to these ultra-fast ones.

    However, the distinction may be that this is the fastest corporate-built transistor, and it might be the first semi-integrable one. I don't know the details of either development.

    Maybe this is using Si? I forget the frequency limit of silicon, but this may be the fastest silicon transistor ever built.

    A new year calls for a new signature.

  • by peter303 ( 12292 ) on Sunday March 04, 2001 @07:07AM (#386299)
    It may max out at 10GHz or so. However, gallium arsenide, indium something, and the like have potential considerably beyond 10GHz and are being used for high-speed D/A and optical connections. The problem with the non-silicon stuff is that it is harder to fabricate at very high integration; they tend to be two or more integration generations behind CMOS.
  • Indeed.
    One thing that most people overlook is that Moore's Law is _not_ about processor speed, or throughput, but is actually about _gate density_.
    Therefore we should hope to see the functional blocks become smaller as time progresses, so that their output is still available before the next clock edge where it will be routed to the next functional unit in the pipeline.
    Note - frequencies have increased faster than densities, so at the moment it looks like a losing battle; however, this will simply force chip designers to come up with more fine-grained functional units (and possibly to expect multiple clock-tick latencies between some of the functional units). For example, DEC in their Alpha chips were looking at this kind of design, and AFAIR they were the first to demonstrate a >1GHz general-purpose CPU _many_ years ago (not a production system, a specially cooled unit as proof of principle), which bears out the correlation. (OK, it (the 21164) never reached production at that speed, but what the hell, they had newer chip designs to work on instead).

    THL
    --
    • Large relative position uncertainty like you described only applies at the sub-atomic level. An entire atom has a predictable position in space and time.

    IANAP (physicist), but I believe that there is some (albeit small) uncertainty with atom positions. I believe that tunneling of hydrogen atoms is how fracto-fusion works. Now, it may well be that it's greatly more probable with a hydrogen atom than a helium atom (and from what I understand, it's not too common with hydrogen atoms), but it does occur.

    • Don't worry, your dinner table will never re-materialize a meter from where you were about to set your macaroni.

    Never is too strong. There is a finite probability that it could. It might be so unlikely that it would occur, on average, once in 5 billion ages of the Universe, but it could happen.



    ---

  • Very well worded ;-)

    That's why I didn't mention them initially, as they're in that grey area at the edge of science.
    They are inelegant, i.e. lack one of the qualities that appeals to the scientist in me. They also have been formulated in such a way that a simple mathematician such as myself cannot fully understand them, so I can't even make a judgement from a position of knowledge.

    OK, OK, I'll admit it, I think they're a hack too!

    FP.
    --
  • Offtopic, but while it is true that in this context voice recognition is the best approach, it would be possible to incrementally analyze the sentence as it is being said, getting a rough guess at first and having it pretty much set before the last word is said. The last word would merely finish it off. It wouldn't be dead-on voice translation, but at the end of the sentence it would be "instantaneously" recognized.
  • As someone painfully familiar with grammars, I can tell you your absoultley right :) ...
  • They may appear inelegant, but according to somebody who has a lot more right to say than I do (a PhD in Applied Mathematics from Brown who I work with) there are substantive reasons to believe that nonlocality and some corresponding violation of conservation of energy on quantum scales can provide a "more elegant" solution than the standard approach to quantum mechanics.

    Note that I am not an expert in this area by any means and only have one (albeit very intelligent) person's expertise to argue from (and I know his position on this is not widely popular or accepted). I do know that EPR results in a paradox (I remember this much from my undergraduate degree in physics).

  • Actually, a 712-GHz device was published in the early Nineties. It was an antimonide-based nonlinear amplifier (not really usable for switches). 1 THz devices do exist now, but they are based on AlSb-InAs heterostructures and as such are pricey.

    Here's an abstract from 1991:

    "Oscillations have been obtained at frequencies from 100 to 712 GHz in InAs/AlSb double-barrier resonant-tunneling diodes at room temperature. The measured power density at 360 GHz was 90 W cm-2, which is 50 times that generated by GaAs/AlAs diodes at essentially the same frequency. The oscillation at 712 GHz represents the highest frequency reported to date from a solid-state electronic oscillator at room temperature."

    from E. R. Brown et al., Appl. Phys. Lett. 58 2291-2293 (1991).

  • atom movement....hmmm, maybe this is why i can never find my keys....
    put them down somewhere, they randomly space shift somewhere else. i'll hafta remember that,
    it's a good excuse.

    Drach

  • What you say, most definitely true in German is.

    And this is why to human translators with German trying to deal listening funny is.

    M
  • nahhhhh, Intel will come out with a 10GHz processor about 2 weeks after Microsoft releases their next software release that makes your PIII 933 look like a 386/33... and at the rate of Microsloth, that should be in about a year or so after Windows 2000 Service Pack 9.

    .kb
  • Well you stick something round the lead. Or something... And yeah, it would be expensive, but it would work. How much would it cost your business if data got corrupted randomly?

  • The P4 is not a total failure; it's like the Pentium Pro: no software can really show today what the core is capable of. That doesn't mean the core itself is worthless, just that some people need to recompile their apps...
  • The gap is on the energy scale not in "real space".
  • Yeah, but the most probable thing always happens, doesn't it?

    No, otherwise its probability would be 1. If you prepare a system that has a 1-in-10 probability of being in a given state (say, you send light and arrange for it to be polarized at about 70° with respect to an analyzing polarizer) and repeat the experiment of measuring whether it is in that state many times (send many photons and detect how many pass through the analyzing polarizer), you'll find it is in that state one time out of ten on average (10% of the photons will get through).
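    The ~70° figure follows from Malus's law (transmitted fraction = cos²θ; standard optics, not spelled out in the comment):

    ```python
    import math

    # Malus's law: the fraction of photons passing an analyzer at angle
    # theta to the polarization axis is cos^2(theta).
    target = 0.10  # want 10% of photons to get through
    theta_deg = math.degrees(math.acos(math.sqrt(target)))
    print(f"analyzer angle for 10% transmission: {theta_deg:.1f} degrees")
    ```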

  • > Lead isn't radioactive.

    Everything is radioactive. You, me, a lead safe, the stuff they make chip casings out of, and even CowboyNeal. If the material came from Earth, and wasn't specifically treated at Los Alamos or some other weapons factory, it will have the same proportions of radioactive isotopes.
    --

  • Your physics seems rustier than mine.

    Large relative position uncertainty like you described only applies at the sub-atomic level. An entire atom has a predictable position in space and time. Need practical proof? Who has not seen the single-atom logo etches IBM and other research departments have been showing over the last decade? Or how about the nano-machines that are just a few atoms thick reported here on /. and other places.

    Don't worry, your dinner table will never re-materialize a meter from where you were about to set your macaroni.


    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~ the real world is much simpler ~~
  • But at a subatomic level, it's already determined which way the light will be polarised. It's like tossing a coin: you think it's random, but at the atomic level it's predetermined which way it goes.

    What's commonly known as probability is just how likely something is to happen in a large sample, not how likely one thing is. It's not the same thing.

  • The gate oxide is 3 atoms thick. The gate oxide is just the insulating layer between the channel and the gate. It's only one *part* of a transistor (the smallest one). You can see a schematic of a MOSFET here [colorado.edu]. The gate oxide is the yellow structure. Obviously, the transistor is larger than the gate oxide.
  • Also, in order for the connects to function as an antenna their length has to be of comparable size of the radiation wavelength, which is 3cm for 10 GHz (not taking into account the dielectric constant of the surrounding material). Anyway, 3cm is much larger than typical connects on the die, so they won't be efficient antennas. Connections on the motherboard will be more problematic though.
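    The 3 cm figure is just λ = c/f in free space:

    ```python
    C = 3.0e8   # speed of light, m/s
    f = 10e9    # 10 GHz
    print(f"free-space wavelength: {C / f * 100:g} cm")  # lambda = c / f
    ```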
  • By 99.999998% pure, you mean that 99.999998% of your atoms are of the correct elements (gallium and arsenic). What element an atom is depends on its number of protons (which gives the charge of the nucleus): Ga has 31 and As has 33. Atoms of the same element may have different numbers of neutrons and hence different masses; these are known as different isotopes of the element. E.g. carbon (6 protons) can have 6 neutrons and a mass number of 12, or can have 8 neutrons and a mass number of 14. Different isotopes may be stable or decay at different rates. E.g. carbon-12 is stable, whereas carbon-14 decays with a half-life of about 5,700 years.
    Separating different elements is relatively straightforward (though getting anything 99.999998% pure probably ain't that easy) since they will have different chemical properties. Separating different isotopes of the same element (e.g. to make uranium with mostly U235, which is the stuff you need for bombs) is more difficult, since all you have to work with is the small difference in mass between the isotopes, so you might use a centrifuge or some kind of diffusion-based process.
    Removing the radioactive isotopes from your lead to make non-radioactive lead would need processes similar to those for producing weapons-grade uranium, rather than those for producing pure GaAs (which will contain Ga and As atoms of a mixture of isotopes).
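    A quick sketch of why isotope separation is so much harder than chemical purification: the only handle you have is a tiny relative mass difference (mass numbers used here as round approximations of atomic mass):

    ```python
    # Chemical separation exploits different elements' chemistry; isotope
    # separation only has a small mass difference to work with.
    pairs = [("U-235 vs U-238", 235, 238), ("C-12 vs C-14", 12, 14)]
    for name, light, heavy in pairs:
        rel = (heavy - light) / heavy
        print(f"{name}: relative mass difference {rel:.1%}")
    ```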
  • There is something called a clock tree, which allows every section of the chip to receive the same clock signal at the same time. This is done by dividing the clock lines into equal partitions, in a binary-tree manner, until you reach each module. That way all the delays are taken into account. Also, each module is pipelined like an assembly line, where at each step a module does only one stage of an instruction's execution, but from the outside world it's cranking out on average one or more instructions per cycle. With these and other tricks, delays in the propagation of signals are relatively no big whoop.
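    The clock-tree idea can be sketched as a toy model (hypothetical function and numbers; real clock trees also balance electrical load and wire widths):

    ```python
    import math

    def balanced_clock_tree_delays(n_modules, segment_delay_ps):
        """In a balanced binary clock tree, every module sits at the same
        depth, so the clock edge reaches all of them after the same delay."""
        depth = math.ceil(math.log2(n_modules))
        return [depth * segment_delay_ps for _ in range(n_modules)]

    delays = balanced_clock_tree_delays(16, 5.0)
    print(set(delays))  # a single value: zero skew between modules
    ```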
  • IBM had developed a way to make them one atom thick, involving grooved channels and whatnot. I don't remember them actually trying it out...
  • When you get up to 10 GHz, the distance is only 3 cm, and aren't your typical Pentiums and Athlons about that big?

    Pipelining takes care of that; even if the information cannot physically cross the chip end-to-end within one clock cycle, what counts is the time required to cross one set of logic gates (one "stage" of the pipeline). Even though one elementary operation takes more than one clock cycle to complete, you can "feed" the pipeline so that you actually get one result every clock cycle except at the beginning.

  • Hey, I was wondering if you could show me some place where benchmarks of G4s outperforming Intel/AMD chips are? Not to say I don't believe it... just that I would like some proof of it.
  • Three atoms thick? Wow. That seems to be pretty much the end of the line in terms of standard chip technology. Maybe they should start working on making the atoms themselves smaller...
  • The manufacturing cost of weapons-grade fissionables has little to nothing to do with the cost of said material.

    1. Consider the artificial scarcity. President Carter signed an executive order that specifically disallowed reuse of fissionable fuels because it would lead to weapons-grade materials. US nuclear energy has never recovered. Perhaps the world is a safer place.

    2. Consider the storage costs. My university received fissionable materials for free, and now can't afford to store them or dispose of them.

    The "cost" is *not* one of manufacturing.
  • See http://citeseer.nj.nec.com/294779.html for some research at Stanford on technology that especially applies to building very fast switches. This is standard CMOS at 50 GHz. The researcher used to work for Intel on the Pentium chips; I can remember when he was working on trying to break the 1 GHz barrier... Not sure if the record has been broken, but he told me a few months ago that he had made the fastest processor to date, which was 20+ GHz. He said he built a radar with it...
  • you can "feed" the pipeline so that you actually get one result every clock cycle except at the beginning.

    Mispredicted branches also cause a significant performance drop with pipelining. The CPU doesn't know for sure whether or not it's going to branch until the branch reaches the end of the pipeline. Until then it has to more or less guess based on previous results (or, in the simpler case, just always predict "taken" or "not taken") and, if the prediction is determined to be wrong, it must clear out all the partially executed instructions.

    Another performance hit is loading data and then attempting to immediately use that data. Since the load operation takes a couple of cycles (memory is relatively slow compared to a CPU), the operation that wants to use that data has to be stalled, creating a gap of a few null cycles between the load and that operation. It's not as bad as a mispredicted branch, but it can be avoided by a smart CPU/compiler combination that places the load operation a few instructions earlier and then works on other stuff while it waits for the results.

    Pipelined processors are nifty stuff. It's surprising how conceptually easy a simple one is.

    And on a random sidenote, my epiphany on pipelining came when I realized that it's kind of like a fast-food drive-through with multiple windows. A given customer may have a higher latency (because they have to go through that whole start/stop, start/stop nonsense), but the throughput is higher, which sounds like it's only benefiting the store at the cost of the customers. But then I realized that the higher throughput meant that there was less of a backup of people waiting, which benefited the customers. Of course I'm still trying to figure out the corollary for a mispredicted branch. I one day hope to be driving by only to see a little guy in a bulldozer pushing cars out of the line. Then my life will be complete.
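    The misprediction cost described above can be put into a rough formula (illustrative numbers only, not any real CPU's figures):

    ```python
    def effective_cpi(pipeline_depth, branch_freq, mispredict_rate):
        """Ideal pipelined CPI is 1.0; each mispredicted branch flushes the
        pipe, costing roughly one pipeline-depth's worth of wasted cycles."""
        return 1.0 + branch_freq * mispredict_rate * pipeline_depth

    # Illustrative numbers: 20% branches, 10% of them mispredicted, 10 stages.
    print(f"effective CPI: {effective_cpi(10, 0.20, 0.10):.2f}")
    ```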

  • I don't know about the rest of you, but until I see an Intel announcement I won't believe it. Makes you wonder... if they have a process capable of .03um and 10GHz, why couldn't they get P3s above 1.13GHz? Maybe there is some validity to this claim... perhaps they have theoretically shown that it could work... but I doubt they will have the capability of making it happen by 2005.
  • Maybe you are referring to the "hidden variables" interpretation, which is quite controversial and almost debunked.
    In fact it's proven entirely incorrect by the advent of Bell's-theorem-testing experiments. Hidden-variables theories obey a particular inequality known as Bell's inequality, which has to do with the probability of correlations between two entangled particles. Quantum mechanics violates this inequality and, as has been measured in several experiments, so does the real world.
  • There was a series of articles in Nature last summer about the ultimate physical limits of computation. Specifically, one of them looked at the ultimate 1kg laptop. If you want serial processing, then the answer is a very carefully structured 1kg black hole, which can do about 10^16 quantum bit operations on a 10^16-qubit "word" in the 10^-19 second lifetime of the black hole. Of course the power consumption, cooling, and containment problems are rather severe. If you don't play clever games with reversible computation and very, very strong mirrors, then you need a supernova to power the thing.
  • Indium phosphide (InP).

    You are right, GaAs and InP can be very fast, but they are harder to fabricate: smaller wafers (hence fewer chips per fabrication run), higher costs, and more generally a few decades of technology development to catch up on compared to silicon.

    And this is problematic not only for fast electronics but also for active optical components, especially semiconductor lasers and amplifiers; silicon is a poor emitter of light due to its indirect-gap structure (in an E-k diagram, the bottom of the conduction band is not directly above the top of the valence band), so we have to use GaAs or even more expensive InP, especially for lasers in the 1.55micron wavelength range, i.e. the choice wavelength for long-range transmission over fiber optic...

    (And, as with electronics, some researchers are trying to push the limits of silicon: there have been recent results with Si quantum-dot structures which were able to lase. But wouldn't it be easier in the long run to push GaAs technology?)

  • We normally think of cosmic rays as something that causes [science.uva.nl] bit rot (though in practice it's alpha particles). In a chip that has transistors only 3 atoms thick, would this radiation cause physical damage instead?

    If so, we'd need to think about employing a lossy grid of gates, so that a few failures don't kill the processor.
    --

  • by keesh ( 202812 ) on Sunday March 04, 2001 @03:54AM (#386333) Homepage

    IIRC, there have been 8GHz transistors (or mosfets) available for a few years now. Nowhere near that small, but they exist. I think this is more a publicity stunt from Intel, trying to claw back some custom from AMD.

  • Is it me or does this sound very similar to the article Intel Says 10GHz By 2005 [slashdot.org]?
    2005 is just too far away for me to get excited anyway..

    --
  • by roguerez ( 319598 ) on Sunday March 04, 2001 @03:56AM (#386335) Homepage
    My physics is a bit rusty, but if I'm not mistaken these 3-atomic-layer-thick transistors must have some problems, because at this level the predictability of atom movement comes into play.

    Every atom has a certain frequent movement. Objects consisting of a large number of atoms stay in one place because the movement of all those atoms combined adds up to zero.

    Theoretically, it's not impossible that your dinner table would suddenly be a couple of meters away from its original place. But it's the statistics that make such an event impossible in practice.

    When creating very small objects, consisting of only a few atoms, the movement of every atom gets more important. Chances that the movement of one or more atoms influences the behavior of the object itself (in a way that makes its behavior unpredictable) are a reality when creating transistors this small.

    Therefore I'm amazed by the comment of the Intel scientist that these transistors behave just like other - bigger - devices.
  • by stripes ( 3681 ) on Sunday March 04, 2001 @03:51AM (#386336) Homepage Journal

    A 10GHz transistor can only make a 10GHz CPU if each pipeline stage (plus sync overhead) is only a single transistor. Which is pretty impossible (a simple flip-flop is several transistors, an adder is a big pile of them). As I recall, the failed 500MHz PowerPC that some company like "eXponential" was making was thought to be extremely aggressive with only 50 or so transistor delays between pipe stages (and some pipe stages were mostly wire delay to get the signals from one part of the chip to another!). Or maybe I'm confusing that with somebody-or-other's barrel-processor-style MediaCPU (also out of business).

    Tiny transistors are wonderful. Tiny fast transistors are more wonderful. But 10GHz transistors are nowhere close to letting you make a 10GHz CPU. In fact it might be slower than the current state of the art (but smaller). Something in this story doesn't add up.

  • It's amazing isn't it really. If you know that you've got no knowledge of the field, THEN DON'T TRY AND GIVE A TECHNICAL RESPONSE !!!!
  • "As our researchers venture into uncharted areas beyond previously expected limits of silicon scaling, they find Moore's Law still intact."

    Don't worry, they are not going to roll out 10GHz tomorrow night...
  • by df1m ( 301007 ) on Sunday March 04, 2001 @03:56AM (#386339)
    They don't say the transistor runs at 10GHz, they say it is very small, and will allow the creation of chips that run at 10GHz.

    - dave f.
  • roguerez wrote:
    Every atom has a certain frequent movement. Objects consisting of a large number of atoms stay in one place because the movement of all those atoms combined adds up to zero.
    That's not right. Solid objects are held together not by luck, as you seem to be saying, but by electrostatic (chemical) bonds that are formed when atoms share electrons to form molecules. These molecules can become large and visible by themselves (crystals), or become intertwined and locked together (like plastics). These chemical bonds, while somewhat flexible, limit the range of motion of individual atoms. Atoms and molecules do vibrate, and it is these vibrations that we sense as temperature.

    Now what you say is true of fluids, and ideal gases in particular. But not solids.

    However, like you said, migration is an issue in certain situations. When atoms are not held in place by chemical bonds they can indeed float around. Gold from a plated PWB will leach into the lead of a solder joint, embrittling it.

    I'd be more concerned about ESD. With such a thin gate insulator, these FETs are going to be extremely ESD sensitive. They're going to suffer punch-through at very low static voltages, since the field strength in the region of the insulator is going to be astronomical due to the short insulator length [Field strength = (applied voltage)/(distance between charges)]
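    The field-strength formula at the end can be plugged through for a three-atomic-layer oxide (the ~0.25 nm per atomic layer is an assumed round number):

    ```python
    LAYER_M = 0.25e-9            # assumed ~0.25 nm per atomic layer
    oxide_m = 3 * LAYER_M        # a three-atomic-layer gate oxide

    for volts in (1.0, 100.0):   # normal gate drive vs. a small static zap
        field = volts / oxide_m  # E = V / d
        print(f"{volts:6.1f} V -> {field:.1e} V/m across the oxide")
    ```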

  • Doh.. that'll teach me to not read the article properly.. :(
    Ahh well, who wants karma anyway? :)

    --
  • by whanau ( 315267 ) on Sunday March 04, 2001 @03:52AM (#386342)
    Intel realise that they are no longer the kings of the chip game. With their recent P4 release being a total failure, it is only a matter of time before AMD takes over their current position in the market. Releasing this kind of "news" only shows that they are simply trying to play the PR game, rather than actually focusing on proper R&D like AMD and Transmeta.
  • I'm currently living in 2005 and I just wanted to let you-all know that as long as I have my SS#, DRVLIC#, and special Credit access code properly keyed in, Windows '06.42 boots 'real' fast. The future ain't so bad. "May have a sip of water now, please? No? OK...."
  • But the actual article referred to doesn't say "10 GHz transistor", merely that it would open the door to building 10GHz CPUs.

    So the transistor might be much faster.
  • by Mr_Dyqik ( 156524 ) on Sunday March 04, 2001 @04:38AM (#386345)
    This is the frequency band that mobile phones use (GSM 900), so couldn't there be problems with interference, and public hype along the lines of mobile-phone radiation?

    Also, at these sorts of frequencies you have to use microstrip waveguides to carry your signals, as standard wires don't work so well, so would interconnects and the like have to be redesigned?

    Anyway, most computers are limited by memory bandwidth nowadays, and 10GHz chips only make this worse. To get performance up a lot, it would probably be better to improve the memory clock by a factor of ten than the raw processor speed.
  • Well sticking a case on the processor would help. At least, as much as anything would. You could just imagine, tiny processors with a ten inch lead (as in the purified non-radioactive stuff) case round them.

    Come to think of it, I bet that's what the real reason for cooling fans is. They could easily make processors not get hot, but they make people buy huge cooling fans instead to make more money.

    Just remember, you read it here first.

  • I didn't think one atom was possible, at least for conventional stuff. I thought semiconductors worked because of a group of atoms (usually silicon or that ge thing I can't spell) with covalent bonding, where a gap appears for electrons to get through. So unless you put them in groups I'd have thought two atoms would be the minimum...

    Germanium (Ge)? Gallium arsenide (GaAs)?

    You are correct that you can't have a semiconductor with only one atom. Even several atoms can't make it because in fact the energy bands (between which the gap is) are made up of many discrete states, each of which has a given energy. There are about as many of these in a band as there are atoms in the crystal. So, to get real (quasi-)continuous bands on each side of the gap, you need to have a macroscopic number of atoms.

    Now, first, I didn't say that this IBM thing worked the same way as a semiconductor; I really don't remember the details and may very well be mistaken.

    Second, these single-atom or three-atomic-layer systems are never isolated, they are always on a whole chip of their own, and this is going to have an energy-band diagram.

  • If you can make 99.999998% pure gallium arsenide, why can't you make other things so pure?
    --
  • This is the frequency band that mobile phones use (GSM 900) so couldn't there be problems with interference, and public hype along the lines of mobile phone radiation.

    Actually most mobile phones operate in the 900MHz and 1800 or 1900MHz ranges, AFAIK.

  • A one-atom transistor would imply that the atom IS a transistor. I think not.

    What the article says is that the transistors are 3 atoms thick and 30nm wide.


    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~ the real world is much simpler ~~
  • But at a subatomic level, it's already determined which way the light will be polarised. It's like tossing a coin: you think it's random, but at the atomic level it's predetermined which way it goes.

    Not according to quantum physics, which states that particles are probabilistic even down to the subatomic level. What is deterministic is a system's wave function, which yields the probability of the system being in a given state at a given time.

    Maybe you are referring to the "hidden variables" interpretation, which is quite controversial and almost debunked (see this "Layman's guide to quantum physics [higgo.com]").

  • Wow. No, not at all. Quantum Mechanics is the set of physical sciences based entirely on the supposition that in fact atoms and subatomic particles actually behave probabilistically. Or rather, a series of observations in the early 20th century led physicists to no other reasonable explanation of what was happening in a wide variety of experiments - accepting into their theory base that in fact atomic events are basically probabilistic allowed the derivation of a wide variety of phenomena.

    Macroscopic samples contain a large number of particles and, depending on the size and type of measurement, may behave on the quantum (probabilistic) or macroscopic (observably deterministic) scale. But the macroscopic statistics don't affect the fact that when you get down to the quantum level, to the best of modern science's ability to explain, things are not deterministic.

    For the sake of edification, there are theories called hidden variable theories in quantum mechanics that attempt to remove probability from microscopic systems and posit that in fact we simply have insufficient knowledge about the way such systems really work. No such theories have been adequately proved to this point in time.

  • Ok, so intel have made transistors that are 3 atoms big.

    And the transistors will get smaller and smaller as always. But for a transistor to work, electrons have to be able to flow through it, right? And it must be able to alter its conductivity as well. So how small can these transistors actually be? How would a transistor work if it was smaller than 3 atoms - or even smaller than 1 atom?

  • Isn't GSM 900 == 900MHz ? And the bandwidth around this probably less than a MHz, even counting different channels in one cell.

    True, there can be 0.9GHz components in the data when the clock frequency is 10GHz, but that could be a problem even with present processor speeds.

    In case I'm completely wrong, please correct me :-)

    --

  • I agree the P4 is not a complete failure. It beats AMD in two things consistently: Quake and DVD encoding. Still, I think it's pretty sad that they get beaten by AMD chips on everything else, especially when P4s run at such a faster core clock and a higher price. By the way, not a whole lot of software needs 3.2GB/s of bandwidth, and I don't think that will change by Q3, because that's when this core of the P4 is phased out.
  • by fatphil ( 181876 ) on Sunday March 04, 2001 @04:59AM (#386356) Homepage
    Sorry, you are utterly off base.

    This is why Quantum Mechanics caused such a stir when it was first posited. Even some of the best minds in the world refused to believe that the state of something could remain undecided.
    The spin/polarisation/whatever _is_ unknown, and is described by a complex (x+iy) probability amplitude; only upon measurement does the spin/whatever briefly enter a known state, but this precision starts to fade instantly. The real probability of it being in a particular state is the squared modulus of the complex wave function, |x+iy|^2 = x^2+y^2.

    For example, it has been shown that you can artificially keep particles with a constant spin by continually testing their spin. As you test it you get a true/false result, meaning that you've either got the spin you want, or you have the opposite. If you test it again almost instantly, the wave function hasn't had enough time to make the opposite state particularly likely, and so you almost always get the same spin result, time after time after time.

    FatPhil
    --
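As an aside, the probability rule the parent describes is the Born rule: probability is the squared modulus of the complex amplitude. A minimal sketch, with amplitudes made up purely for illustration:

```python
# Born rule: probability = squared modulus of the complex amplitude.
# Hypothetical two-state system (e.g. spin up/down); the amplitudes
# below are chosen purely for illustration, not from any real system.
amp_up = complex(0.6, 0.0)
amp_down = complex(0.0, 0.8)

p_up = abs(amp_up) ** 2      # 0.36
p_down = abs(amp_down) ** 2  # 0.64

# For a normalised state the probabilities sum to 1.
assert abs((p_up + p_down) - 1.0) < 1e-12
print(p_up, p_down)
```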
  • by generic-man ( 33649 ) on Sunday March 04, 2001 @06:25AM (#386357) Homepage Journal
    I trust independent research labs like Advanced Prototype Packet-Layer Engineering [apple.com] to do my benchmarks. They do quality work.
  • Haha, nice. I wonder though: when Apple finally breaks 1GHz, will they still be faster than the 2GHz chips AMD/Intel will be making?
  • It could take years just to figure out the dynamics of this matrix!
  • You're thinking of pipelines, but there are no pipelines with 1 transistor wide stages :).

    Which doesn't really matter, since the article is talking about enabling CPUs to work at 10GHz, not about 10GHz transistors.


    ----------
  • Hitachi made 100GHz microprocessor elements 5 years ago, using superconducting technology. The first 1GHz processor was made using this in 1990. Shame that they need liquid helium and that no one could write decent compilers for it.
  • Signals do not bounce around the chip like that -- your "place and route" guys will try to keep things localized, so that register-to-register distances are much smaller than the size of the chip.

    Of course, that's not to say that it's not an issue -- it will make things a lot harder to implement. But, then, every time the clock period goes down, it makes things harder. Designers just sit down and find another way to do things faster :).


    ----------
  • I am a little confused by this 3cm number. I really don't know what I am doing, but if I were to calculate the wavelength of a 10 ghz signal i would need to know the speed it moved at.

    If I assume it is pretty close to the speed of light, then the wave length would be... oh. never mind.
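For what it's worth, the calculation the parent trails off on comes out to about 3cm in free space:

```python
# Free-space wavelength of a 10 GHz signal: lambda = c / f.
c = 299_792_458.0      # speed of light, m/s
f = 10e9               # 10 GHz
wavelength = c / f
print(f"{wavelength * 100:.1f} cm")
```

On-chip the signal travels through a dielectric at a fraction of c, so the guided wavelength is shorter still, which is what makes the waveguide concerns above plausible.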
  • When Apple finally breaks 1GHz, Intel will be making the IA-64, and this silly game is over.

  • Quantum physics applies on all levels; however (unlike for Newton's laws), physical effects are not independent of scale; things on scales at which delta x * delta p is comparable to Planck's constant behave way differently than things on larger scales. Having said this, a few atoms can be arranged so that the uncertainty principle is not significant (for computing). There is nothing in physics preventing this on a scale of a few atoms. E.g. an atom in a crystal is in the vicinity of its lattice site with a very high certainty.
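The last point checks out with a back-of-envelope estimate (round textbook numbers, purely illustrative): the minimum quantum position uncertainty of a thermal silicon atom is a tiny fraction of the bond length.

```python
import math

# Back-of-envelope check that quantum position uncertainty is negligible
# for a whole atom in a crystal. Numbers are rough, for illustration only.
hbar = 1.054e-34          # reduced Planck constant, J*s
k_B = 1.381e-23           # Boltzmann constant, J/K
m_si = 28 * 1.66e-27      # mass of a silicon atom, kg
T = 300.0                 # room temperature, K

# Thermal momentum scale and the matching minimum position uncertainty
# from delta_x * delta_p >= hbar / 2
p_thermal = math.sqrt(m_si * k_B * T)
dx_min = hbar / (2 * p_thermal)

si_bond_length = 0.235e-9  # Si-Si bond length, m (approx.)
print(dx_min / si_bond_length)  # a couple of percent at most
```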
  • Why doesn't someone just take a P4-2ghz and remark it as 10ghz ? That's basically what Intel's going to do since they have lost the true definition of performance anyways. Just look at MMX, MultiMedia eXtensions; more like Masturbatory Marketing eXperiment. They will come out with another gimmick that runs at 40 bazillion hertz yet takes 500 clocks to do an XOR thanks to wait states.

    At this point, instead of increasing cpu speed, I feel it would be better to focus on SMP and tweaking the other parts of a PC. A faster northbridge and better system bus would relieve important bottlenecks in the PC architecture with a much greater impact than processor speed alone. Just look at all the speed freaks (myself included) who prefer to keep an older cpu but upgrade everything else.

    Here's my personal example: I have a Celeron 566 @ 850. Slow according to today's standards, but good enough for everything I do, and it runs every game out there. On the other hand, I've got a GeForce2 GTS, 512MB RAM, IDE-RAID 80GB, 8x Plextor CDR, Boomslang mouse, yadda yadda. My main CPU is just decent but I cranked everything else to the max. Instead of having a fast CPU that just spends more time waiting for the hardware, I have fast hardware that lets me work and play faster without needing to stay on the bleeding edge of AMD/Intel wars.

    Sure, I could visit my little Chinese dude and his grotty parts shop to pick up an Athlon-C 1.2GHz with mobo and PC266 RAM, and my Q3 framerate might jump from 110 to 130, but overall will I get anything done faster? Only marginally, since the CPU will spend more time snoring while every other component chugs along at the same speeds since 1994. On the other hand, I'll be the first guy in town to get my hands on a GeForce3, not because I'm a gamer, but because it will make a more noticeable _perceptive_ difference than spending the same amount on a CPU.

    I don't care what the benchmarks say, my PC feels just as slow as my P200 did three years ago, and my 486 before that. It's everything else that's been steadily progressing over the years: video, disk, memory, sound. That's what really makes a difference to my eyes and ears. GHz-wars just look good on paper and in Intel's bank account, nowhere else.
  • In Australia 10.5GHz is acknowledged as being an ISM band (even though it is not on the ACA chart - a quick phone call will verify this). So instead of using a door opener to transmit data wirelessly, the CPU could be overclocked and whacked onto the end of a waveguide. Just like using the space shuttle to blow leaves, this would be an inefficient way to have a 10.5GHz carrier signal generator. But knowing our Government, a grant would no doubt be given to investigate this idea further, whilst the 15-year-old kid with a working prototype of a cold fusion chamber is told to nick off and stop bothering the local politician for funds.
  • IIRC the speed of electrons in copper is about 0.3c, but I have forgotten the source of that data, so if you are trying to make useful calculations it should be looked up.
  • I may not understand what holds the atoms together but I do know that gravity holds my dinner table in one place.;P
  • by esonik ( 222874 ) on Sunday March 04, 2001 @03:58AM (#386370)
    Making flat structures (gates oxides) 3 layers thick isn't that hard. What's hard is to make them that thick over the whole wafer and to make a working transistor (they claim the latter). The lateral structures are 30nm which is approx. 100 atom layers wide. Reducing lateral structure size is a lot harder.
  • I am future man! I travelled 1 year into the future and picked myself up a nice 20GHz processor for $50. My advice to all you humans is to wait a little while before you buy, so prices can come down. And also get out of the carnival business.
  • They may have done one transistor (and actually I seem to remember that IBM had succeeded with a single-atom one), but for doing anything useful you have to pack several of them together... And the closer you squeeze them and the faster you ask the electrons to get between them, the more you are subject to the tunnel effect, that is, the less said electrons care about the paths you carefully etch for them.

    Indeed, the more energy they have and the thinner the isolation between "wires", the easier it gets for them to "hop" over the latter. By then anything can happen, bits leaking from one memory cell to the next, calculation errors...

    They may be on the right path, but the way to go is quite long.

  • Intel said the device would open the door for the development of microprocessors containing more than 400 million transistors, running at 10GHz and operating at less than one volt.
    Hmm, what do they mean.. Will the chip run at 10GHz, or will the 400 million transistors run at 10GHz??
  • But what it said above was that it was thorium etc. that was causing it. Why can't we get the thorium out of the plastics?

    THL
    --
  • Actually, academics have created 100GHz transistors out of GaAs. 10GHz isn't that great compared to these ultra-fast ones.


    If my memory serves correctly (and it's normally pretty reliable), it was about 18 months ago, and they weren't just showcasing transistors, but were showing off shift registers operating at 100GHz. Still not a full-blown processor, but a necessary component. I think the details were published in `Nature' (because I was a subscriber at the time- go to www.nature.com [nature.com] if you've got an active subscription or £120 going begging). I can't remember much more though- it was pretty much "100GHz chips, Film at 11" non-news.
    I'm just going to be happy when my local shop delivers that dual P3-700 for the Casino-21 project (remember that?).
  • I didn't think one atom was possible, at least for conventional stuff. I thought semiconductors worked because of a group of atoms (usually silicon or that ge thing I can't spell) with covalent bonding, where a gap appears for electrons to get through. So unless you put them in groups I'd have thought two atoms would be the minimum...

  • >Well sticking a case on the processor would help.

    Nope, read the link [science.uva.nl] in my parent post. It shows that the main source of alpha radiation is from the chip casings. Not a whole lot you can do about that.
    --

  • The drift velocity of an electron in copper is about 0.0024 cm/sec. Drift velocity being the actual forward progress of a single electron.

    Now, because the copper wire is full of electrons, if you push one in one side, a different one will pop out the other shortly thereafter. Maybe that is the 0.3c speed you were talking about.
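For the curious, the drift velocity falls straight out of I = nqvA. The numbers below are rough textbook values for copper with an assumed 1A current and 1mm² wire, chosen just to show the order of magnitude:

```python
# Rough drift velocity of electrons in a copper wire carrying 1 A.
# I = n * q * v * A  =>  v = I / (n * q * A). Values are approximate;
# the result scales with the assumed current and cross-section.
n = 8.5e28        # free electrons per m^3 in copper (approx.)
q = 1.602e-19     # electron charge, C
area = 1e-6       # 1 mm^2 cross-section, in m^2
current = 1.0     # A

v_drift = current / (n * q * area)
print(f"{v_drift * 100:.4f} cm/s")  # on the order of 0.01 cm/s
```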
  • Yeah, but the most probable thing always happens, doesn't it? At the atomic level anyway. So whilst the atoms could be (and are) everywhere at the same time, there's one place they're most likely to be, so that's where they will be.

  • The chance of an atom moving drops drastically with decreasing temperature. There is an energy barrier that has to be overcome in order to make the diffusion step. For oxygen in silicon this is approx. 2eV, which is very high (you need several hundred degrees Celsius to trigger diffusion). In fact, one way to make thin oxidized silicon films is to expose the silicon surface to oxygen and heat it (several hundred deg.). The oxygen will diffuse into the silicon and form the oxide. The layer thickness depends on the temperature and duration of the treatment. However, the article does not say whether they used this technique to get the gate oxide.
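The temperature dependence described above is the Boltzmann/Arrhenius factor exp(-Ea/kT). A quick sketch with the 2eV barrier shows why heating matters so much:

```python
import math

# Arrhenius factor exp(-Ea / kT) for a 2 eV diffusion barrier:
# the jump probability changes by dozens of orders of magnitude
# between room temperature and typical oxidation temperatures.
k_B_eV = 8.617e-5   # Boltzmann constant, eV/K
E_a = 2.0           # activation energy, eV

def boltzmann_factor(temp_kelvin):
    return math.exp(-E_a / (k_B_eV * temp_kelvin))

room = boltzmann_factor(300.0)    # effectively zero at room temperature
hot = boltzmann_factor(1273.0)    # ~1000 C, vastly larger
print(hot / room)
```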
  • Smaller transistors are faster, with Intel claiming the device could eventually pave the way for science fiction technology such as instantaneous, real-time voice translation.

    Somehow I can't see how the speed of the transistors can help with the fact that you usually have to wait until the end of sentences before translating, you cannot just do it word by word.

    --

  • The Einstein-Podolsky-Rosen hidden variable theory leads to contradictions which can be physically demonstrated to disprove the theory. The "EPR" experiment, it is called, named after those three.

    FatPhil

    --
  • How about increasing gravity? A black hole computer? I guess we would need wireless networking for this to work...
  • by Perdo ( 151843 )
    Unless AMD gets to 10Ghz in 2004 in which case Intel will release a 10Ghz chip the next day, availability limited to 10 chips, half of which Intel will keep for developmental purposes.
  • I won't bother showing you the power specifications (which show that the P4 uses much less power than the Athlon) or the SPEC and STREAM benchmarks (which show the P4 significantly outperforming the Athlon, about 2x on SPECfp and about 3x on STREAM).

    But as far as the assertion that Intel is an awful place to work, this is easily debunked by looking at Fortune's Best Places to Work list, where Intel is #41, ahead of every computer company including the ones you listed - HP is all the way down at #63, and Compaq and IBM aren't even in the top 100.
  • You're entirely correct; Heisenberg's Uncertainty Principle states that we can only come up with probabilities for the positions of subatomic particles (electrons, protons, neutrons, etc.). AFAIK, there is nothing to suggest that atoms do not have fixed positions.

    At most, your table might rearrange itself enough to let a couple protons through, although even that is highly unlikely.

  • Mispredicted branches also cause a significant performance drop with pipelining. The CPU doesn't know for sure whether or not it's going to branch until the branch reaches the end of the pipeline. Until then it has to more or less guess based on previous results (or, in the simpler case, just always predict "taken" or "not taken") and, if the prediction is determined to be wrong, it must clear out all the partially executed instructions.
    It is my understanding that IA-64 deals with this by having multiple pipelines so that multiple possible branches can be evaluated simultaneously. That way all that has to be done is to flush the 'incorrect' pipes once the needed data arrives. (This was in an old boot magazine column by Tom Halfhill).
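The "guess based on previous results" scheme mentioned above is classically done with a 2-bit saturating counter per branch. A toy sketch (not any real CPU's implementation):

```python
# Toy 2-bit saturating-counter branch predictor. States 0-1 predict
# "not taken", states 2-3 predict "taken"; each actual outcome nudges
# the counter toward that outcome, so a single anomaly doesn't flip
# the prediction. A misprediction is what forces a pipeline flush.
def simulate(outcomes, state=2):
    mispredictions = 0
    for taken in outcomes:
        predicted_taken = state >= 2
        if predicted_taken != taken:
            mispredictions += 1
        # saturating update toward the actual outcome
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return mispredictions

# A loop branch taken 9 times then falling through once:
print(simulate([True] * 9 + [False]))  # → 1
```

The loop example shows why this works well for loops: only the final fall-through mispredicts, while a strictly alternating branch defeats it on every other iteration.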
  • I think they meant 400 million transistors running at 10GHz each, which would add up to a CPU running at 4 billion GHz. Of course, real geeks will be able to overclock this baby and have it running at 4.5 billion GHz or more.
