0.01 Micron Process?

hypo writes "According to a recent ZDNet article, IBM is developing a technique called "V-Groove" that allows the channel lengths of transistors on chips to be 10 nanometers (0.01 micron) and below. Currently, most companies use a 0.18 micron (180 nanometer) process. This is certainly a giant leap. The only caveat is that IBM is not planning to use this in large chips (i.e., processors) for 10 to 15 years. However, this is still quite revolutionary, because most people thought that a 0.02 micron process would be the fundamental minimum. This all shows that Moore's law can perhaps hold true in the future. The article also discusses carbon nanotubes, which might reach market faster than experts had previously thought."
  • by molo ( 94384 ) on Saturday August 12, 2000 @01:09PM (#860064) Journal
    Someone please correct me if I'm wrong (I certainly might be... I'm not intimately familiar with microelectronics engineering), but I thought what we currently associate with chip die processes are the trace widths, not the channel length.

    Trace width is the width of the conductors connecting different transistors on the chip. This is important because a smaller trace width means that the whole chip is scaled down, including the spaces between the traces. This raises capacitance between parallel wires and causes the possibility of cross-talk.

    As for channel length, the article says:

    Channel length represents the distance electricity needs to travel through a transistor, shorter transistors lessen the distance traveled, delivering greater performance.

    While this is related to performance (specifically, switching times), I am not sure if it is related to trace width at all. The ZDNet article may be mistakenly associating the two.

    Also, I think that one may be able to vary the trace width and the channel lengths independently. If that is the case, we may have performance increases from channel lengths even if we hit a wall when it comes to trace widths.

    Can someone with some microelectronic background clarify these issues?

    Thanks.

  • Encoding at under 20x? Try to find the GoGo encoder webpage. Assuming you have a faster processor (I have an Athlon 700 running WinNT and I can encode at 30x), you should be able to encode quickly and nicely. The Win32 counterpart encodes at 160Kbps and pretty much rocks da haus compared to all my other encoders.
  • You are right regarding the plasma part.

    However, the process I described is also called bremsstrahlung. I used to work in the space industry (I am now doing astrophysics), and one of the reasons why thick Aluminium is not used as radiation shielding in space is because of bremsstrahlung effects from high energy protons.

  • What if you turn off optimisations on your C++ compiler? I know that VC++ does a good amount of stuff towards speeding up its output (it optimizes much better than C++Builder, for instance)

    Doesn't matter. Object Pascal compiles 10-100 times faster than Visual C++ with all optimizations turned off. The speed comes from a few places:

    1. C++ programs tend to be idiotic with the include files. A 10,000-line program may include 500,000 lines of includes. Object Pascal has a much nicer module system.

    2. Object Pascal has a much cleaner syntax than C++ and doesn't need a preprocessing step.

    3. The Object Pascal compiler is a very nice piece of programming :)
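    A quick way to see point 1 for yourself (a rough sketch in Python; it assumes g++ is on your PATH, and "big.cpp" is a hypothetical stand-in for one of your own translation units):

      import subprocess

      source = "big.cpp"  # hypothetical file name; substitute your own
      raw = sum(1 for _ in open(source))
      # -E stops after preprocessing, i.e. after every #include is expanded
      result = subprocess.run(["g++", "-E", source],
                              capture_output=True, text=True, check=True)
      expanded = result.stdout.count("\n")
      print(f"{raw} lines on disk -> {expanded} lines the compiler must parse")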
  • This technology would more likely be used for cell phones, etc., not desktop computers. If they're just now making a transistor on this scale, I would guess more than 10-15 years. Most large companies are using 5-year-old technology, and this is something that would require a lot more changes than redesigning stuff from 5V to 1.8V.
  • Handhelds, cell phones: battery life could be greatly extended using LVDS and this type of size reduction.
  • The first place that this tech is probably going to go into is likely to be high-end tech. That's the kind of stuff where-- if you don't get it right when you sell it to them, they're going to have {m,b}illions of dollars worth of consequential costs to sue your ass off over.

    There's also a marketing issue -- As a company, you want to keep one step ahead of the competition. You also want to get the biggest bang for your research buck. If you get too far ahead of the competition, you won't be able to use, and make money off of, some of your other research. It's also nice to have an 'ace in the hole' for when they threaten to overtake you in another area.

    Finally there's the simple lead time for going from producing a .01 micron straight line to producing a 100-million-transistor CPU from said technology -- and doing it in good quantity with high reliability.
    -----
    That having been said, I remember a story from a Northern Telecom tech about the (relatively) early days of optical fiber. One of the labs claimed to have produced a really high-caliber optical repeater laser (about the size of a large grain of sugar). The production of the units was fobbed off on a Japanese company because the company big-wigs didn't believe lab staff that it could be done well using local resources.

    Well, the Japanese company messed up the order (they weren't sensitive enough -- a prime specification), and the exec turned to the lab and essentially said 'we need that order NOW -- please do it with the lab equipment' (no time to build a fab facility at this point).

    Well, the lab made such high quality units that they were TOO sensitive. They were reacting to noise from the other electronics (which weren't expecting such high quality in the repeater laser). Rather than re-design the electronics, they went back to the lab and asked them to purposefully crank down the sensitivity of the lasers.

    Moral of the story: If IBM really HAD to get that stuff out the door in 18 months they could probably do so. Chances are, however, that they can't see the long-term financial benefit of doing so.

  • IANAP (I am not a Physicist), but I would hazard a guess that it is impossible to perform a computation without moving a few quanta of matter or energy around, thus there must be a theoretical minimum amount of energy needed to do a single computation. Also a theoretical minimum size and maximum speed.


    And if you keep halving size, you get there in log time, ie sooner than you'd think.


    My completely uninformed opinion is that silicon has a couple of decades left at best, but computation in general has a century or so before it runs up against the minimum scales and maximum efficiencies of matter and light.
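    A back-of-envelope sketch of that "log time" point, in Python (all three numbers -- the 180 nm start, a roughly atomic ~0.2 nm floor, and one halving every ~3 years -- are assumptions, not data):

      import math

      start_nm, floor_nm = 180.0, 0.2   # today's process vs. ~atomic scale
      halvings = math.log2(start_nm / floor_nm)
      print(f"{halvings:.1f} halvings")                  # ~9.8
      print(f"~{halvings * 3:.0f} years at 3 yrs each")  # ~29 years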

  • Well, actually, there will also be 0.15 um, which is what AMD is going to do with the next Athlon shrink. They claim they will be able to do it faster than 0.13 um, so they will be quicker to market. Who really knows ...
  • The next reduction in size will be to .13 micron. Intel is planning to make this transition on the P-III and P-4 chips in about Q3 of 2001. Slashdot posted this [slashdot.org] about the coming chips and micron size reductions. CNET has a story [cnet.com] which is what the slashdot story is about. The CNET story though comes from this story [inqst.com] of InQuest Market Research. Hope you like chip road maps as much as I do :)
  • My bad, you're completely right....I was thinking of virtual 86 mode.

    Email me.
    Don't trust anyone over 90000.
  • yeah, but i figure that's ok, since the government's run by aliens anyway

    now if they could just point that big dish at arecibo at washington, maybe they'd start getting some results...
  • Just because the technology is available doesn't mean it's *readily* available. Look at .18-micron chips. IBM is the only company that can make them on a somewhat consistent basis. They have about a 90% yield when it comes to making them. All other companies have about a 50% yield, if that. So just because a company can produce one working chip, that doesn't mean it will be able to efficiently produce enough to start selling. It could take 10 years for them to come up with a better process of producing them.
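    A toy cost model shows why yield dominates the economics (the wafer cost and die count below are invented for illustration only):

      wafer_cost = 5000.0    # hypothetical cost per processed wafer, in dollars
      dies_per_wafer = 200   # hypothetical gross dies per wafer
      for yield_rate in (0.9, 0.5):
          cost = wafer_cost / (dies_per_wafer * yield_rate)
          print(f"yield {yield_rate:.0%}: ${cost:.2f} per good die")
      # 90% -> $27.78 per good die; 50% -> $50.00: nearly double the cost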
  • Actually, Moore was not talking about speed or computational power when he said "doubling"; he was actually talking about the number of transistors on a chip. And that can't go on forever, because transistors can only get so small and no one wants a four-square-meter 'micro'processor.
  • "When we see stories about quantum leaps in computer technology, why are companies so slow to actually produce, implement, and sell it?"

    Technical matters aside, I guess that'd be like releasing your sophomore album when everyone is still grooving to the first one. I think people have a limit - a measurable one - to how quickly they'll bounce to the Next Best Thing.

    So what I'm saying is there might very well be market disincentives for doing such things. IANAE (I Am Not An Economist) and I can't prove it, just taking a wild swing.

    My .02
    Quux26

  • I certainly did not make those numbers up -- they were exactly what was taught in high-school science classes -- particularly chemistry, I think. (It was written in the textbook, too.) It was a long time ago, so I obviously can't give a complete bibliographic reference (though it shared the title of "Chemistry" with virtually every other such textbook). I am certain I remember the numbers correctly, despite the time elapsed, though it is possible that the author referenced them to cm rather than m, for some cheesy reason (like assuming this would seem small to "kids"); E -6 m = E -4 cm, E -10 m = E -8 cm, so that would work.

    "My" notation, is the "engineering notation" found on most calculators, BTW -- looking at the bottom of the screen reveals that <SUP> is not listed as available HTML codes, so I used this notation rather than exponents.

    Note that I wouldn't have given them if I didn't have reason to think those were accurate. It is interesting that some other replies managed to be informative, rather than just accusatory and insulting.

  • You're right, it's the transistors that (should) double, not clock speed.

    Thanks for that informative post!
  • by Atomizer ( 25193 ) on Saturday August 12, 2000 @04:47PM (#860081)
    I work for a silicon wafer manufacturer, we supply Intel, AMD, etc with the wafers that they put the chips onto.

    The wafers we supply have an 'Epi' layer on them, which is short for epitaxial. The layer is silicon that is grown on the wafer at a high temperature (I think 950-1000 degrees C). This makes the wafers less rough, thus smaller line widths. The wafers are inspected for defects, and the machines that inspect them can only see particles, pits, etc. down to .13 micron. The human eye can only see down to .30 microns or so. (That's under ideal conditions: dark room, 200 watt halogen light focused on the wafer.) Needless to say, there has been a transition to machine-only inspection as the chip line widths have gotten smaller.

    Intel already announcing the .13 micron line widths makes me wonder what the yields will be like. The machines that can see down to .13 micron run about half a million or so. I haven't seen what the next generation of them will do, but I know that we don't have them installed in our plant yet. That is definitely one reason why it takes so long for this stuff to reach consumers. The clean rooms that the wafers are cleaned and inspected in are filtered down to class 10, which translates to less than 10 particles in the air at .3 microns. We usually have 0-1 at .1 microns. Makes an operating room look positively filthy.

    The wafer manufacturers are mostly breaking even at this point. Intel is making fat cash, but they are getting it from squeezing all the wafer suppliers. Some have dropped out of the business due to the lean conditions. Nobody really has enough money to buy equipment to make these wafers on a large scale, let alone finding vendors that have equipment that meets those specs.

    .01 micron, holy crap!
  • outer space maybe, but intelligent life in Washington???

    Think you've been reading too much sci fi...

  • sorry for being rude. I thought you were trolling.
  • When I compile code with Visual C++ it seems to take forever, given a large project. If I use Object Pascal instead, the compilation time drops by 2-3 orders of magnitude

    I have noticed this too. Object Pascal (I used the Borland Delphi 5 compiler) is blazingly fast, much, much faster than MS VC++, Borland C++Builder or gcc (though Borland C++Builder seems to be somewhat faster than MS VC++). Even plain C compilation with those compilers is way slower than Object Pascal compilation.

    Can someone give an explanation?

  • A kettle at 10^8 kelvin should work :).

    (That's what Princeton's Plasma Physics Lab can do.)

  • If this is to be a point on the Moore's law curve it will have to be in production in just over 6 years.

    Can't have it both ways. Either it (or something of equivalent density) is out then or Moore's law finally breaks down.
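    The "just over 6 years" figure checks out under the (aggressive) assumption that linear feature size halves every 18 months:

      import math

      halvings = math.log2(180 / 10)       # from 180 nm down to 10 nm
      print(f"{halvings * 1.5:.1f} years") # ~6.3 years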
  • Anyone that has read the works of our favorite British geek, big Alan Turing, knows that he stated in the 30s that computational speeds double every 12-18 months. Turing took a look at the "computers" dating back into the 1800s.

    I don't think so. Extrapolating Moore's law backward to the 1930's, even with only a factor of 2 per 18 months, gives 1 operation per day. Somehow I think computations went a little faster than that, and certainly faster than 1 operation per 3 millennia in 1900.

    If Alan Turing did indeed propose such a doubling every 12 to 18 months (and nothing in my reading suggests it), then such progress drastically slowed some time in the last 70 years.
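    The backward extrapolation is easy to sanity-check; taking ~1e9 ops/sec as a stand-in for a circa-2000 CPU (an assumption, like the 18-month doubling):

      ops_2000 = 1e9   # assumed throughput of a year-2000 processor
      for year in (1935, 1900):
          doublings = (2000 - year) / 1.5
          sec_per_op = 2 ** doublings / ops_2000
          print(year, f"{sec_per_op:.1e} seconds per operation")
      # 1935 -> ~1e4 s (a few operations per day);
      # 1900 -> ~1e11 s (one operation per few millennia) --
      # both within an order of magnitude of the figures above.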

  • I'm assuming that IBM would be tailoring this process towards MOS technology. Sure, they can make the channel length in a MOSFET really small, but wouldn't this introduce a lot of problems with the existing MOS model? One of the biggest problems with MOS technology is scaling dimensions down causes rampant power loss (due to leakage currents) and dominating electric field effects that can totally destroy transistor operation. You can't just go from 0.13 micron to 0.01 micron with the same basic structure and expect the same type of operation. Is there a new kind of transistor technology being proposed?
  • The problem with this and all new tech is cost. Right now, AFAIK the real research is going into .07 micron tech (you say .18, but actually .13 is "cutting edge"). Currently .07 is done using a laser etch, as opposed to lithography. For any new tech to become available you have to be able to mass produce it, and do so cheaply. .07 isn't cost effective yet (because of the laser it takes a long time to produce one wafer), and I would guess that .01 is not cost effective either. The next 10-15 years will likely see IBM first perfecting the process, and then scaling it to large-scale/mass production. After that you may see things (5-10 years) being created using this process, but I wouldn't expect anything, even larger feature sizes, to be seen before then.
  • From first proof of concept to a commercially viable product is often very, very long. Such as:

    Liquid fuel rockets - 1920's
    Turbojet - mid 1930's
    TV - 1920-something
    High temp superconductor - 1992?
    Digital electronic computer - 1945
  • by cybaea ( 79975 ) <`moc.aeabyc' `ta' `enalla'> on Saturday August 12, 2000 @12:54PM (#860091) Homepage Journal

    They haven't even made a chip yet! They have just made a transistor or two.

    It is not even clear from the article how small they have gotten the channels: it only says that the technique "scales to" 10nm.

    There is a long way from showing that a given technology will sort-of-work in a lab to mass-production. That's one of the reasons for the delay.

    Another is that there might not be a market. It is currently quite feasible to get a couple of ~1GHz processors with a few gigs of RAM in a machine we can almost afford. Let's face it: few of us sitting here reading Slashdot are using our quad-Xeon workstations to their fullest. Who would really buy it at, say, 100 times the current price? I've just ordered my dual-PIII and I doubt I could easily use more processor speed. (Memory, maybe.) And I'm sure I wouldn't pay half a million bucks for it!

  • That driving down of prices is exactly why they don't just release stuff right away. Companies need to keep their profits up, and releasing this tech wouldn't do much to drive prices down unless IBM could offer an x86-compatible CPU at roughly the same cost, etc.

    Another problem would be IBM would have a proprietary production design and "closed-source" is evil (around here anyway)

    It sucks, but that's capitalism.

  • by Anonymous Coward
    AARRGH! Just how many times does this have to be said?!

    Stop running SETI@home! It's the NSA's Echelon client, and by running it, you're helping Big Brother!

  • Just using the word is racist, and it's worse if you are indeed African-American. Everyday people like me are fighting to achieve equal standing with all other races, and here you are making yourself look like a fool. Way to represent bro.
  • by Junks Jerzey ( 54586 ) on Saturday August 12, 2000 @02:05PM (#860095)
    I have one question here: will software really need more and more CPU performance as time goes by?

    I mean, as the article says, sure, servers and stuff will definitely put good use to the increase in performance, but what about good ol' Joe Sixpack using Excel at his office? I mean, besides cranking SETI@home units faster, is there really such a need for faster processors at home / office?


    In all honesty, I stopped noticing any speed differences around 200MHz or so. I used a 200MHz Pentium running Win NT for a while at work, then I went to a 400MHz Pentium II. Couldn't tell the difference at all.

    It is getting to where rewriting software and/or changing your approach are much more valuable than processor pissing contests. When I compile code with Visual C++ it seems to take forever, given a large project. If I use Object Pascal instead, the compilation time drops by 2-3 orders of magnitude. That's a much bigger win than increasing my machine to a 2GHz processor.
  • by Anonymous Coward
    As circuit density increases, stray radiation from everyday household sources may become a serious source of errors in computers. Radioactive thorium can be found in gas lantern mantles and in high quality camera optic lenses. Americium can be found in smoke detectors. Even potassium is mildly radioactive (potassium-40 decays into argon, hence the potassium-argon dating method). Everyday carbon-14. We may need to start having lead-shielded PC cases. Of course the future iMacs will use lead crystal with various impurities to make translucent colors.
  • My comment here is a bit late, but yes, it was sarcasm on the 'IBM doesn't want to make Intel mad' point. And notice I said 'mass market' - there's a market, but the average computer user won't be needing this for a while.
  • by Anonymous Coward
    Well, as an AC with such a background let me say that the article itself has quite a number of errors and misleading statements. But to answer your question about the 'micron' processes.

    In general, the 0.xx micron tends to refer to a technology 'node' as stated on the SIA roadmap and not some real feature on a chip. It's an incredibly misleading term, and I have often discussed with professors and industry folk alike how this confuses people. Sometimes companies like IBM might actually post more meaningful numbers like channel length or gate length (Leff or Lpoly), but quite often when this is done it confuses the average reader, so most companies tend to instead refer to technologies by their SIA node and not by any true dimension.

    Another horrid inaccuracy in the article was the definition of short-channel effect ("interference between transistors located too close together"). This makes me cringe, as it's not even close to the true definition. Short channel effect refers to the difficulty in turning a given transistor off and doesn't normally have any relationship with device isolation. Basically, the fields between the source-drain of the "short channel" make it increasingly difficult to isolate the source from the drain in an off-state. As a result, for ultra small devices it is difficult to keep the device turned off. This has nothing to do with interference from other transistors.
  • Your desktop software will most likely not need this for a LONG time. In addition, it's probably also fair to say that most servers won't need anything like this either.

    And 640K ought to be enough for anyone

  • by taniwha ( 70410 ) on Saturday August 12, 2000 @02:20PM (#860100) Homepage Journal
    Lots of stuff has to come together for a new process to be viable:
    • the FAB equipment won't be available in commercial quantities to do this new process for a long time
    • the FABs have to decide that they need to include that equipment in their capital expenditure (a chicken-and-egg problem - no one wants it so we won't spend the money - but we don't offer it yet so no one bets their next chip on the process)
    • it will probably take them years to make a process that will yield in volume
    • the CAD tools aren't here yet (today's are just limping up to speed with 3D extraction - what happens at 0.01u? I don't know - do they have to include quantum effects? the Hall effect? something else that's in the noise today the way RC effects were at 1u?)
    • and maybe there aren't enough applications today to make it viable - people are used to targeting their next design to next year's process and may not think that far in the future (though somehow I suspect DRAM would be the first obvious application here - at .2u today we have 100Mb DIMMs - at .01u we'll have 4Gb DIMMs ... oops, time to move to the 64-bit CPU real soon now - actually I really like the performance jump we'll get by moving the DRAM on with the CPU ...)
  • 2. Intel supplies IBM with processors for their desktops. IBM wouldn't want to make them mad now, would they?
    Ok, you're kidding, but I doubt that the first line of processors using this V-Groove technique will be used in desktop computers.

    3. There's no real mass market demand for it at the moment- as the article stated, there are very few things that your "average" computer user would need that would benefit from them releasing this chip. And no, 250 frames/sec in Quake 3 Arena doesn't count.
    There will ALWAYS be a market for this, when the technology is ready for it, and people will line up to buy it, even if the prices seem outrageous. Think accurate long-term weather prediction, strategic military simulations, and of course the NSA would be delighted if they can finally read your encrypted e-mail.

    This breakthrough is just one of the many, many hurdles that have to be overcome in order to get a product that can actually be manufactured.

  • from http://linuxtoday.com/stories/296.html:
    "On somewhat of a tangent, there is continuing work to support a subset of the Linux kernel on 8086, 8088, 80186, and 80286 machines. This project will never integrate itself with Linux-proper but will provide an alternative Linux-subset operating system for these machines. "

    I think that as well as being a 16-bit chip, another problem with porting Linux to the 8088 was the memory. Was it that the 8088 didn't support protected memory? I forget.

    You are right that this guy almost certainly has never tried Linux on an 8088, but it's not impossible.

  • Ha! take that, Salmon DNA computer.

    I would worry about the salmon chip trying to swim upstream to spawn. The worst we have to fear from the V-Groove is some funkadelic dancing.

    ---

  • Well, we aren't in the same domain.
    Newton's laws are accurate, but only in the domain of speeds not approaching c.
    Moore's law is accurate, but as things have definitely changed since its creation, it is almost accurate to say we exist in a different domain... Ohhh, never mind...
  • by Leonel ( 52396 ) on Saturday August 12, 2000 @12:26PM (#860105) Homepage
    Maybe with this tech 3dfx can make a voodoo5 small enough to fit inside my computer case.. :)

  • Despite falling under scrutiny from time to time, it [Moore's law] continues to accurately reflect the progression of chip technology.... Moore's Law in recent years has shortened to reflect a doubling of transistor count and performance every 18 months.

    Ahem, a law cannot be both accurate and in need of revision within its (original) domain.

  • You're thinking about quantum computing, not DNA computing; totally different technologies. But anyway, even with quantum, your assumption isn't totally correct.
  • This is yet another "I've always wondered...." question I have for all you Slashdot readers.

    When we see stories about quantum leaps in computer technology, why are companies so slow to actually produce, implement, and sell it?

    I feel releasing this technology now would not only benefit consumers, but help to drive down prices of other technologies. For example, if IBM released a processor built using this process today, I'm confident Intel's CPU price would drop.

    So, what's keeping IBM from releasing hardware based on this technology in 1 to 2 years instead of 10 to 15? Ideas?
  • Sure, the fact that you're paranoid doesn't mean that they're not after you...
  • No. No. No.

    They didn't use electron beams. Where did you get that idea?

    I quote: "V-Groove, in addition to lithography techniques, uses chemicals to create an anisotropic chemical reaction..."

    Don't be in so much of a hurry to post that you don't actually read the article.

    Torrey Hoffman (Azog)
  • .18 to .01 is quite a leap... but could this process be used today, though on a smaller scale? If it's going to take 10 to 15 years for production at .01, how about .10 in a year? Or would it be impractical or inefficient to do that?
  • It's interesting to think that if people weren't always assuming that Moore's law can't hold up for more than another 10 years, we'd need a new law.

    What I mean is that since it takes about 10 years for an emerging technology to go from theory to mass implementation, if there were theories that showed the promise of Moore's Law living on for more than ten years into the future, products based on those theories would emerge faster than Moore's Law predicts.

    Fox's Law: The estimated time that Moore's Law will hold true will always be close to the time it takes to turn the latest theory into a commercial product.

    Kevin Fox
  • > Fox's Law: The estimated time that Moore's Law will hold true will always be close to the time it takes to turn the latest theory into a commercial product.

    First off, Moore's law is not a law as in a law of Nature or physics. It should really be called "Moore's observation."

    Second, there are fundamental limits to silicon which will be reached sooner or later. Maybe the limits are further out than we think, but you can't shrink those wires forever.

    But that doesn't matter. Take a look at Ray Kurzweil's badly written but provocative book "The Age of Spiritual Machines". There is a passage in there which lists the processing power, bang for the buck, available back to the 1890s. Moore's 'law' holds, more or less, back all this time.

    Kurzweil's commentary on Moore's law is that Moore's law is evidently not a property of silicon, but of the marketplace, and that we have nothing to fear; silicon will be replaced by something else. No doubt that revolution will be slashdotted.

    Typical Californian hyper-optimism, but he may be right in this, at least for the next few hundred years - remember, no matter what the medium, you can't keep doubling the performance indefinitely.

  • Quake Arena!

    In addition, IBM and Intel agree that, especially with faster Internet connections, software will catch up to and exceed the capabilities of today's desktop processors, requiring more performance there as well.

    I have one question here: will software really need more and more CPU performance as time goes by? (Code it again, Sam!)

    I mean, as the article says, sure, servers and stuff will definitely put good use to the increase in performance, but what about good ol' Joe Sixpack using Excel at his office? I mean, besides cranking SETI@home units faster, is there really such a need for faster processors at home / office?

    Shouldn't other areas of computer science be explored as well? I'm sure there's lots of research going on all the time, but if someone were to discover a faster search / compression / whatever algorithm, that would make up for a slower processor, wouldn't it?

    As usual, that's my opinion... and as I said, the truth is I'll probably use it to play better, faster and bloodier games on my PC :)
  • by spiral ( 42436 ) on Saturday August 12, 2000 @12:58PM (#860115)
    >Or would it be impractical or inefficient to do that?

    Yes, and yes. At least, for a while.

    By the sounds of the article, they've only managed to create a handful of transistors using this new process. All a transistor consists of is 3 layers of semiconductor with interconnects -- not a particularly complex structure. The next phase will be building a non-trivial circuit. This, no doubt, will require reworking of their technique (read: years of research) to produce an experimental prototype. Then comes tuning to actually make it useful. At this point, they're still basically producing the chips "by hand" -- very expensive and time consuming, with very low yields.

    Once they've proven that the process really does work (assuming, of course, that it does), and that you could conceivably build a real chip with it, they need to design the mass production fabrication hardware. When that's done, they'll actually be able to turn out a few chips, as you said, on a smaller scale -- no doubt still at tremendous cost.

    The last barrier is the infrastructure. The final version of the new process will likely require overhauling one or more existing FABs (or building a new one), again at huge cost, both money and time.

    10 years from single transistor demo to the first production model is actually pretty quick. It's the same story again for other innovations -- be it faster/smaller chips, higher density hard disks, holographic storage, whatever. The more radical the new strategy, the longer it takes to get it right and get it ready.

  • So, what's keeping IBM from releasing hardware based on this technology in 1 to 2 years instead of 10 to 15? Ideas?
    I've generally taken the '10-15 years' to mean 'We can make one, but it wouldn't be profitable to make more'
    IBM might be able to produce one at great cost for a technical demonstration, but doing it on a regular basis might be beyond the abilities of a production line. Anyway, in 10-15 years there may be a faster, more efficient method of information transfer that we haven't thought of yet. Moves from .28 micron to .26 micron seem to require new fabrication plants; how much of a shift in production methods would .26 to .01 require?
  • Ah yes - this is indeed true. However, let me clarify (I should have been more specific earlier).

    What has been doubling over the last two hundred years isn't exactly computation in terms of operations/sec. It is actually computation in terms of operations/sec/price. Both Kurzweil and Minsky (and Turing) have written on this. If you trace the amount of computing power that $1,000 buys over the last hundred years or so - you can see that the amount of computation bought per $1k has been doubling every 12-18 months.

    -=|t
  • remember, no matter what the medium, you can't keep doubling the performance indefinitely.

    Just to play devil's advocate: Why can't you?

    What are the actual physical laws that dictate that you can't? As far as I know, there's no equivalent to the laws of thermodynamics pertaining to information theory.

    Kevin Fox
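    There is actually such an equivalent: Landauer's principle ties information to thermodynamics, putting a floor of kT*ln(2) on the energy needed to erase one bit. A quick sketch of the numbers, assuming room temperature:

      import math

      k, T = 1.381e-23, 300.0          # Boltzmann constant (J/K), ~room temp
      e_bit = k * T * math.log(2)      # minimum energy per erased bit
      print(f"{e_bit:.2e} J per bit")  # ~2.9e-21 J
      # Even a 1 GHz machine erasing 64 bits per cycle would dissipate only
      # ~0.2 nW at this limit -- a real bound, but astronomically far away.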
  • Duh.

    The moderation is really crazy nowadays. Bremsstrahlung is a physical process by which high-energy particles produce X-rays, which can scatter off the silicon lattice, resulting in spurious irradiation and damage to components.

  • As CPUs get faster, the things we do with them will be more complex.(Score:5, Insightful)

    Finally, someone in this thread gets it.

  • Last time I checked, a Micron was different from a micrometer. Specifically, a micron was E -4 (0.00001) Meters, 100 micrometers, or 0.1 millimeters. A micrometer was E -6 m, an angstrom was E -8 m, and a nanometer was E -9 m. Thus, 10 nanometers would be precisely one angstrom, but actually, 0.0001 microns.

    Who's in error? Have I just been uninformed all these years, or is this confusion on the poster's part, and a mistake in the original report / news release?

  • by Crazy Diamond ( 102014 ) on Saturday August 12, 2000 @02:20PM (#860122)
    Numbers like 0.18um or 130nm are actually associated with the minimum feature size. In actual chips, the metal widths are almost never drawn at the minimum feature size. The poly layer, however, can be drawn at the minimum feature size, and it is what defines the transistor channel length.

    Crosstalk between wires is not just a function of their decreasing spacing, because the aspect ratio of wires is also increasing very significantly (anywhere between 50-100%), leading to a larger lateral area. Copper processes allow wires with a smaller aspect ratio but the same resistance per square, leading to a decrease in coupling capacitance. Low-k dielectrics are also used to decrease the interconnect capacitance.

    The problem that we are currently facing is that transistors are fast enough that the critical paths in a modern chip are almost entirely due to the delays of long global interconnects. There are many things we currently do to speed up these wires, including shielding, buffer insertion, and simply more intelligent routing.
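    A toy parallel-plate model makes the aspect-ratio point concrete (the wire geometries below are invented for illustration; real parasitic extraction is far more involved):

      eps0, k_ox = 8.854e-12, 3.9   # vacuum permittivity (F/m), SiO2 relative k

      def coupling_fF_per_mm(height_um, spacing_um):
          # C = eps * facing_area / spacing, for a 1 mm parallel run
          c = k_ox * eps0 * (height_um * 1e-6 * 1e-3) / (spacing_um * 1e-6)
          return c * 1e15   # farads -> femtofarads

      print(f"{coupling_fF_per_mm(0.5, 0.5):.0f} fF/mm")    # baseline wires
      print(f"{coupling_fF_per_mm(1.0, 0.25):.0f} fF/mm")   # taller, closer: ~4x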
  • Current generation fab plants cost about $6B to build, with that increasing about 50% per generation of technology, which takes about 18 months. These fab plants currently use ultraviolet light to expose a whole IC in one step. What IBM is proposing will require drawing each feature with electron beams... a very time consuming process when an IC contains 300 million little lines... making an IC built with this technology about 300M times more expensive than current UV lithography... suitable for special single transistors and research into quantum effects, but Not Ready for Prime Time(tm). Eventually the industry will develop electron beam or x-ray resists to make ICs with this density, but Moore's law holds both ways: as a predictor of advancement and as a predictor of limits on what's possible.
  • I worked a few years as a design engineer for a major chip equipment manufacturer. Most of the problems in scaling down have been in improving the lithography process, as the wavelength of the light is the limiting factor in how fine a detail you can get. The other problem is that as you get smaller widths, you want to etch deeper down to keep the cross section of the channels the same. To do this, you want something that etches straight down instead of spreading sideways, forming a v-channel instead of a square channel. You do this by using highly energized ions. Such technology has been out of development for 2-3 years now. I don't work in that industry anymore, but generally the technology in development is about 2 'notches' away from the leading edge of production. So .10-.13 micron technology is probably in development right now.
  • 1. This would require a lot of testing/finessing to make chips on a mass market scale.
    2. Intel supplies IBM with processors for their desktops. IBM wouldn't want to make them mad now, would they?
    3. There's no real mass market demand for it at the moment- as the article stated, there are very few things that your "average" computer user would need that would benefit from them releasing this chip. And no, 250 frames/sec in Quake 3 Arena doesn't count.
    4. Because of the lack of mass market demand and the high cost they'd incur by implementing the processing methods they'd need to make this chip, the chip would be very expensive. A very expensive chip that few people want right now = something Intel won't worry about for a while.
  • Got any links to encoding quality comparisons? Speed isn't everything, you know...

  • The truth -- at least commonly in the sciences -- is that it often takes years of work after the initial discovery of an idea to put it to practical use. Just because we can do something once in a controlled environment doesn't always mean that it can scale to a mass production industry.
  • Let me correct your measurements a little --

    A micron usually refers to a micrometer, although many other people use it for a different measurement. An angstrom is (in your notation) E -10 m. So, 10 angstroms is one nanometer. If you really want to talk small, I suppose you could try picometers, femtometers, or attometers.

    To give you an idea of scales, the Bohr radius is about .5 angstroms, and the wavelength of visible light is usually measured in 100s of nanometers.
  • According to Webster's, they're synonymous. http://www.dictionary.com/cgi-bin/dict.pl?term=micrometer
  • by softsign ( 120322 ) on Saturday August 12, 2000 @02:37PM (#860130)
    It comes from the wavelength of ultraviolet light - which is currently used to trace features onto a silicon wafer. Hence the name "photolithography".

    If you go any smaller, your waves become X-rays and that significantly complicates matters.

    --

  • Groove-V. It has a certain ring to it.
  • The NSA has probably had this technology for years now. And they DO read your encrypted email. I'll bet they're reading this post right now, as I'm typing it.
  • >though it is possible that author refferenced them to cm rather tham m, for some cheesy reason

    Could be. In chem, the units of measurement are relative to centimeters, grams, and seconds, as opposed to physics, which uses meters, kilograms, seconds.

  • Where did that number come from?

    The REAL fundamental limit is Mr Heisenberg's Uncertainty Principle: Δp·Δx ≥ ℏ/2!
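    For scale, here is what that principle gives for an electron confined to a 10 nm channel (a bare free-electron estimate, ignoring effective mass and every other device detail):

      hbar, m_e, eV = 1.0546e-34, 9.109e-31, 1.602e-19   # SI constants

      dx = 10e-9                      # confinement length: a 10 nm channel
      dp = hbar / (2 * dx)            # minimum momentum spread, dp*dx >= hbar/2
      energy = dp ** 2 / (2 * m_e)    # corresponding kinetic energy
      print(f"{energy / eV * 1e3:.2f} meV")  # ~0.1 meV vs kT ~ 26 meV at 300 K
      # so a 10 nm channel is still well above the hard quantum floor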

  • Assuming a doubling in clock speed every 2 years, in 20 years clock speed will increase by 2^10 ~ 1000. So PERFORMANCE should increase by a factor of 1000.

    Obviously this is going to happen somehow, but it's nice to see how it might be done.

    "In our days we only had 50GB hard drives and 2GB of RAM..... And we liked it!"
  • 2. Intel supplies IBM with processors for their desktops. IBM wouldn't want to make them mad now, would they?
    Actually IBM helps Intel; IBM was first to have a working 1 GHz processor (in the lab). IBM is not into chips (except in some dedicated products). Their bread and butter is computer sales (servers, cash registers, network software and SERVICES). Manitcor
  • by oh shoot ( 79863 ) on Saturday August 12, 2000 @12:41PM (#860137) Homepage
    With chips like this, we can fear only one thing: the marketing campaign. Intel's was bad enough, even without a name like V-Groove.

    --Jeff
  • by Jay Random Hacker ( 214329 ) on Saturday August 12, 2000 @12:42PM (#860138) Homepage
    10 to 15 years is the time frame for them to get machines that can make these paths on a large (30-60 million transistor) scale, plus the time for them to build enough of said machines to actually be able to produce enough of these chips for people to care (if you can't buy it, then who cares how fast it goes), plus then the time for them to build a plant to house these machines, plus the time (after that) of installing the machines and doing test runs. There are all sorts of things that are easy to do a couple of times, but they get hard when you're expected to do them a couple billion times.

  • "IBM: V-Groovey or be square"
  • new option should be:- -1 Inciteful.
  • I mean, as the article says, sure, servers and stuff will definitely put good use to the increase in performance, but what about good ol' Joe Sixpack using Excel at his office? I mean, besides cranking SETI@home units faster, is there really such a need for faster processors at home / office?

    Definitely. It still takes longer to encode my CDs as MP3s than it does to pull them off the CD at 20X :-). Assuming the current rash of technologies hangs around, I think we'll see people sending each other video mails within a few years. And I'll still be looking to have that Linux kernel compile time down below 2 minutes among other things. As CPUs get faster, the things we do with them will be more complex.

    Cheers,

    Toby Haynes

  • You are wrong beyond the "bremsstrahlung is a physical process" part. Bremsstrahlung is the release of radiation by accelerated charged free particles, as in a plasma.
  • With a .01 micron pathway, strange quantum behavior would start taking over. IBM researchers are going to have to find a way around the quantum effects that would render such a chip useless.
  • What if you turn off optimisations on your C++ compiler? I know that VC++ does a good amount of stuff towards speeding up its output (it optimizes much better than C++Builder, for instance)
  • Can you imagine a Beowulf cluster of these? Woo!

    --

  • The article does not state they will be using this in 10-15 years in CPUs, but that they will be doing it as soon as the engineers figure out how to apply the technology. Someone mis-read the text.
    The article claims this technology will put IBM 10-15 years ahead of the Moore's Law curve.
    To quote the article....
    "...will allow the company to stay ahead of the curve of Moore's Law 15 to 20 years in the future..."

    Yes it isn't the best way to express the idea but it's not that hard to understand.

    This is why I don't read Slashdot very often. This is about the 10th time I've seen you mis-report an article. I'm amazed at how many people respond to them without actually reading the article themselves.
  • If they can make small transistors then you can have your "more memory". Plus then they can make entire PCs that are the size of a cracker and use next to no power to run. Small transistors are good. There is always going to be a market for smaller/faster/better computers regardless of what you think the state of computing is today.
  • A Micron is a Micrometer. 1E-6. An Angstrom is 1E-10. There are 10,000 Angstroms per micron.

    You did manage to get nanometer correct.

    Out of curiosity, did you just make these numbers up, or did you read them somewhere?

    cheese
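    The unit ladder in code form, since the notation keeps tripping people up (values in meters):

      micron    = 1e-6    # a micron IS a micrometer
      nanometer = 1e-9
      angstrom  = 1e-10

      print(micron / angstrom)            # 10000.0 angstroms per micron
      print(10 * nanometer / angstrom)    # 100.0 -- 10 nm = 100 A = 0.01 micron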
  • You have the right idea here: Note that the article makes no claims about the minimum feature size...it is not about the technology of patterning circuits on the chip (photolithography).

    The researchers have come up with a technique for creating short channel transistors without drawing them. This is useful for studying the operation of future devices, but will not directly impact the scaling of circuitry (and die sizes).

  • Yes, anisotropic etches are possible. They happen all the time... crystal fault delineation, dopant preferential etching.

    I haven't read the article though, so maybe in their context it is wrong...

    cheese
  • Not to mention that there's lots of ancillary stuff that has to be developed such as resists, lens coatings, etc. I'm not even sure that there's going to be such a thing as resists or lenses at this size level. I suspect optical photolithography is going to run out of gas before it gets to these feature sizes. We may be looking at direct electron-beam writing for 10 nm. If that's so, it's going to be one slooow process.
  • >IBM is not into chips (except in some dedicated products)

    IBM isn't into commodity PC hardware chips... by dedicated products I'll assume you mean the bulk of the chips in the S/390, AS/400, RS/6000 and Netfinity lines...

    --
  • Sure. And no one will ever need more than 640k of ram...
  • Thanks for the info. :)

    OT: Does anybody know of any good microprocessor mag/zine/web site like " Microprocessor Report [chipanalyst.com]" that doesn't cost like a billion bucks a month to subscribe to? I'm no engineer or expert, but I love reading this type of stuff (and happen to be broke ;).

    q

  • We've all heard a lot of talk about the "fundamental boundaries of silicon" and how "Moore's law will be destroyed in 15-20 years". While there are definitive boundaries of the material, the idea that computational speeds will taper off in 15-20 years is complete rubbish.

    Most people know Gordon Moore's great "law". What most people don't know is that his profound statement of computation (a) isn't that profound and (b) isn't originally his idea.

    Anyone that has read the works of our favorite British geek, big Alan Turing, knows that he stated in the 30s that computational speeds double every 12-18 months. Turing took a look at the "computers" dating back into the 1800s. From purely human/mechanical, to entirely mechanical, to electromechanical, to electrical (vacuum), to transistor, to IC, computational speeds have been doubling since Babbage's girlfriend was writing theoretical software! (OK, maybe a little later than that...)

    The point is that at each stage in computing history, when one medium reached its limit, another picked up and continued seamlessly along. So the broader Moore's law (the smaller-scale and actual Moore statement was only regarding ICs) will continue, silicon or no silicon. So then the interesting question is what will pick up when silicon dies. Molecular/nano would probably be most people's guess right now.

    I'd expect the only failure of "Moore's law" to be that it underestimates the speed at which computing technology will double - my guess is that in 15-20 years it will go even faster than 12-18 months...

    -=|t

  • Out of sheer curiosity, has anybody even found a photoresist that works with x-rays yet?

    Or -- an equally difficult problem -- do we even have a good hard x-ray source?
  • No, actually the 80386 was the first Intel CPU that supported protected mode. Hence Windows 3.1's "386 Enhanced Mode."

    Email me.
    Don't trust anyone over 90000.
  • I have one question here: will software really need more and more CPU performance as time goes by?

    This is an unbelievably short-sighted viewpoint from a Slashdotter! In 30 years' time I *really* hope my neural tap, eyeball-quality rendered (8kx3k, 48bit, 100fps, 100Gpoly/sec), speech and vision recognition system doesn't run on a PIII800 or a P8/10k!! It had better use 10k times less power, have 10k times the grunt in 10k times less space, and be as far ahead of Pentium anything as today's CPUs are from vacuum tubes. I'd hate to have even a super-duper-duper PC strapped to my head! :-)

    And that little sucker ain't going to be running Windows, Linux or any other OS of today, perhaps a very distant relative. That OS had better be very robust, secure, realtime and distributed, probably something brand new, lending from Unix, Windows and other stuff like QNX and pSOS. UI will be irrelevant. Can any of you honestly say you don't believe this will happen?
  • Man, remember the G-Funk dancing people in clean suits? Can you imagine what they'd do with a marketing phrase like.. V-Groove?

    I can just see it now:

    "A hush falls over the crowd as the lights dim. In awed quiet, the sound of the spotlight being turned on is like a gunshot in the night. Piercing the air, the light flies about through the stuffy atmosphere of the club and finally falls upon a small door in the back of the room.

    [Groovy music starts up in the background, quiet at first and slowly getting louder.]
    The door swings open and.. THE CLEANSUITS ARE BACK!
    [The music is going full blast with George Clinton singing some funky tune.]
    Ba je bank bank buhdank nank kedank!

    [Fades to white and the Intel Logo fades in. An announcer with a deep, suave voice sounds off;]

    "Intel present the new line of chip, the Funktaniums, using the new V-Groove process. Get one today and get laid tonight."

    --

    Man, I have to lay off of the Saturday Night Fever.

    Rami
    --
  • Your desktop software will most likely not need this for a LONG time. In addition, it's probably also fair to say that most servers won't need anything like this either.

    Where we could use this kind of technology today is in the high end routers driving the Internet. As present research is investing heavily into TeraByte routers, even these will be heavily taxed for speed in the coming years. You can bet that folks like Lucent and Cisco are looking very closely at this kind of speed requirement.
  • Well, as far as I have read in most computer magazines, the MHz is higher in the AMD and Intel chips, but the G4 uses each MHz more efficiently.

    I'm not sure how accurate that data is, but I am sure I have read it multiple times, probably on /. in fact.

    By the time Intel starts to manufacture these, I'm sure all of the other processor companies will be too. The Macs will inevitably benefit from this technology also; maybe you'll just have to wait a quarter though.
  • No, the 80286 supported protected mode, but the 80386 is so much of an improvement that nobody codes for the 80286. The 80286 had a memory limit of 16 megs, and transitioning from protected back to real mode required an aborted reset. The 386 also introduced virtual 86 mode.
  • E-beam lithography is what they used to create the mask. Then they used anisotropic etching to further control channel size to less than the lithographic feature size (a very common technique). Read up on the technology a bit. Maybe you're thinking of ion implant technology, where the process actually implants the doping atoms without lithography?

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...