Moore's Law Staying Strong Through 30nm

jeffsenter writes "The NYTimes has the story on IBM, with JSR Micro, advancing photolithography research to allow 30nm chips. Good news for Intel, AMD, Moore's Law and overclockers. The IBM researchers' technology advance allows the same deep ultraviolet rays used to make chips today to be used at 30nm. Intel's newest CPUs are manufactured at 65nm, and present technology was expected to tap out soon after that. This buys Moore's Law a few more years."
  • on the BUSS (Score:5, Interesting)

    by opencity ( 582224 ) on Monday February 20, 2006 @10:31AM (#14760735) Homepage
    At what point does BUSS technology break down? Figured this was where to ask.
  • by bronney ( 638318 ) on Monday February 20, 2006 @10:31AM (#14760741) Homepage
    I am too lazy to learn these things from scratch, but would anyone care to tell us what the theoretical minimum width is before electrons start jumping between wires? I hope it's not 5nm.
  • by Anonymous Coward on Monday February 20, 2006 @10:37AM (#14760774)
    Why not good news for IBM? Let's look at what IBM and Intel are really doing now with processors. Power.org is a group formed by IBM so that the Power ISA and related technology are more standardized between companies, and the companies involved with Power.org actually have some say in the next-generation processors from IBM (and Freescale; yes, Freescale is again in a relationship with IBM, unlike Apple). Now look at Intel: nothing, no input, nobody can just go up to Intel and ask for feature XYZ without first going through a lot of lawyers. Even AMD's ISA is not the same as Intel's, and it is getting worse.

    IBM invented this, so IBM should be able to use it more than Intel ever can. I will note that Apple left IBM/Freescale to go to Intel, but there were more reasons than just processors for why they left. From the looks of it, Apple did not feel that IBM was going in the right direction, though IBM is going to be better off without Apple involved because Apple has ego issues.
  • by Anonymous Coward on Monday February 20, 2006 @10:38AM (#14760782)
    Is this really good news for overclockers? Some overclocked P4s have already failed due to electromigration. Isn't a shift down to 30nm just going to exacerbate the problem?
  • by QuantumFTL ( 197300 ) * on Monday February 20, 2006 @10:39AM (#14760784)
    I believe Moore's Law (or, rather, the modified version about processor speed rather than transistor count) will transition to a new regime soon - that of "average" exponential improvement in the form of a punctuated near-equilibrium.

    I believe that the chip industry will have to shift paradigms as the limit of each technology approaches, and during these shifts there will be a period of relative non-improvement as new techniques are refined and implemented and large-scale facilities are built.

    There are so many promising technologies on the horizon (photonic computing, three-dimensional "chips," quantum computation, etc.), but the transition to each will be very bumpy, not at all smooth like the last 40 years of refining two-dimensional semiconductors.

    As times change, what we know as Moore's law will change with it. It's likely that the "average" improvement will continue to follow the law more or less (considering that it is driven more heavily by economics than technology). Computers will continue to get faster, cheaper, and able to do things we wouldn't have thought we needed to do before.
  • by PoconoPCDoctor ( 912001 ) <jpclyons@gmail.com> on Monday February 20, 2006 @10:44AM (#14760811) Homepage Journal

    While the smallest chunk of silicon we could lay down would be one atom of it, there are things far smaller. In fact you can go something like 26 more orders of magnitude smaller before you start reaching the feasible limit of measurable existence. And yes, subatomic particles could theoretically be used in processors.

    The process designation refers to the distance between the source and drain in the FETs (transistors) on a processor. Keep in mind that this distance is by no means the smallest thing in the processor - the actual gate oxide layer is tiny by comparison, with Intel's 65nm process having only 1.2nm of the stuff. That's less than 11 atoms thick.

    Found this on a thread at bit-tech.net forums. [bit-tech.net]
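
A quick sanity check of the "less than 11 atoms thick" figure quoted above. This is only a rough sketch: the ~0.11 nm spacing is an assumed round number close to silicon's covalent radius, not a real SiO2 layer spacing, so treat the result as an order-of-magnitude estimate.

```python
# Back-of-the-envelope check of the "less than 11 atoms thick" claim.
gate_oxide_nm = 1.2              # Intel 65 nm gate oxide, per the comment above
assumed_atom_spacing_nm = 0.11   # assumed spacing, roughly silicon's covalent radius

layers = gate_oxide_nm / assumed_atom_spacing_nm
print(f"~{layers:.0f} atomic layers")   # prints "~11 atomic layers"
```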

  • Well, NO. (Score:2, Interesting)

    by Ancient_Hacker ( 751168 ) on Monday February 20, 2006 @10:49AM (#14760848)
    Just being able to make thinner lines is not that huge a deal.

    There are several large cans of whup-ass that have to be overcome before you can make ICs that much smaller:

    • Lines are 2-D thingies, but conductors are 3-D. Your etching technology has to get X times better to keep up with the line-drawing technology.
    • Same thing with the active components. If you try making the transistor half the old linear dimensions, you have 1/8th the volume of active silicon. This leads to all kinds of problems with leakage and power handling capability.
    • A line that's half as wide and half as thick has four times the resistance per unit length, and 1/4 the current-carrying capacity. You can try using a better conductor, but once you get to using copper, you're done. (A quick back-of-the-envelope version of this scaling is sketched after this comment.)

    And the programmers will just soak up all your extra speed by turning up the "ooooh" factor (See: Vista).
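
The sketch promised above: resistance per unit length is R' = rho / A for a rectangular conductor, so halving both width and thickness quarters the cross-section and quadruples R'. The copper resistivity used here is the textbook bulk value; real on-chip interconnect at these dimensions is worse because of surface and grain-boundary scattering.

```python
# Wire-scaling sketch: resistance per micrometer of a square copper wire.
RHO_CU = 1.68e-8  # ohm*m, bulk copper at room temperature

def resistance_per_um(width_nm: float, thickness_nm: float) -> float:
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    return RHO_CU / area_m2 * 1e-6   # ohms per micrometer of wire length

full = resistance_per_um(65.0, 65.0)
half = resistance_per_um(32.5, 32.5)
print(f"65 x 65 nm wire:     {full:.2f} ohm/um")
print(f"32.5 x 32.5 nm wire: {half:.2f} ohm/um  ({half / full:.0f}x higher)")
```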

  • by QuantumFTL ( 197300 ) * on Monday February 20, 2006 @11:31AM (#14761093)
    If you are simply talking about Moore's Law in terms of processing power, there are other places to gain improvements than just the compactness of chips. There is also parallel processing technology, which is still steadily improving.

    There are many important algorithmic problems that are inherently serial. Some things are mathematically impossible to parallelize. Also, limitations caused by enforcing cache coherency, communications interconnects, and resource access synchronization/serialization create bottlenecks in parallel systems. The astrophysics simulation code that I parallelized is almost entirely math operations on large arrays (PDE solving), yet there are diminishing returns past 48 processors due to communications latency. Better programming techniques can push this limit, but it is difficult to design software that mitigates the effects of this kind of latency without many man-hours spent handling it.
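
To illustrate the diminishing-returns point, here is a toy model combining Amdahl's law with a linear per-processor communication cost. The serial fraction and communication term are made-up illustrative numbers, chosen only so the curve happens to peak near the ~48-processor figure mentioned above; they are not measurements from that astrophysics code.

```python
# Toy Amdahl's-law model with a crude communication-overhead term.
def speedup(n_procs: int, serial_frac: float = 0.02, comm_cost: float = 0.0004) -> float:
    parallel_frac = 1.0 - serial_frac
    # Normalized runtime: serial part + divided parallel part + growing comm overhead
    runtime = serial_frac + parallel_frac / n_procs + comm_cost * n_procs
    return 1.0 / runtime

for n in (1, 8, 16, 48, 128, 256):
    print(f"{n:4d} procs -> speedup ~ {speedup(n):5.1f}x")
# Speedup rises, flattens around ~48-50 processors, then falls as overhead dominates.
```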

    Then, far off over the horizon, there's the possibility of quantum computing, which would make for a ridiculously huge surge in processing power all at once.

    I mentioned this in my post, however there is a bit of a catch. Quantum computing, practically speaking, is only useful for certain problems - problems that are "embarrassingly parallel." QC does not help with fundamentally serial problems, and is likely to be impractical beyond a critical number of qubits due to decoherence; even quantum error correction can only stretch so far. Great for cryptography/number-theoretic operations, and probably many optimization problems (scheduling, perhaps), but certainly not for standard computation. Problems (like database queries) that require large amounts of data to be stored in a quantum-coherent fashion are unlikely to be practical.

    "That's fundamentally how Moore's Law works: as soon as the current paradigm starts to get maxed out, we simply shift to another paradigm."

    Ahh, but that's just it - there is a cost to the switch in terms of both time and money. What I am saying is that yes, we can continue to change paradigms whenever we hit a limit; however, these transitions will be very expensive and will cause "delays" during which little improvement in shipping computer technology will be seen.

  • by koroviev (begemot) ( 924304 ) on Monday February 20, 2006 @12:02PM (#14761296) Journal
    Actually the more interesting thing about Moore's law (in terms of total processing power) is that it holds much further back than most people think. Mechanical calculators' total numbers and performance (like Charles Babbage's difference engine) were also in accordance with Moore's law, and the two curves fit together quite nicely with the advent of the "many women" approach to computing and electronic computers. Even clock-making reflects Moore's law over the last few hundred years - in terms of unit numbers, clock sizes, element sizes (the size of the gears), the switch to electronic watches, etc.
  • by RhettLivingston ( 544140 ) on Monday February 20, 2006 @12:19PM (#14761411) Journal

    Sure, it's not a law. But...

    I'm not sure it is so clear that the limit will truly be reached before a processor is created that is capable of performing as if it had a transistor for every atom of the Earth. Assuming we're still around, I believe we'll be able to maintain the increases in speed and scale predicted by Moore's law through means we can only just imagine now.

    Certainly, it is starting to appear that we'll see combinations of quantum and other processing. There was also recently a development in tri-state-per-bit quantum storage that may be extendable to n-state-per-bit. Perhaps we'll find ways to put subatomic particles together into things other than atoms that don't even require atoms as a trapping mechanism, and be able to fully exploit that scale. We could explore processing in ways where a single "transistor," or whatever happens to be the smallest-scale component, participates in different ways in multiple operations or memories, like neurons already do. Technologies for processing that don't generate anywhere near as much waste heat are appearing (magnetic, for instance), thus allowing the full exploitation of the third dimension to look more plausible without hitting heat dissipation barriers (solid cubes instead of layered wafers). And what about other dimensions? At the atomic scales we're reaching, it is much more believable that we'll eventually be able to exploit some physical phenomenon to put some of the processing or storage mechanisms into non-temporospatial dimensions.

    Anyway, I believe it to be very unimaginative to say that Moore's Law will ever hit a barrier. I would call it a virtual law. Sure, it's not a "law" as in a law of physics. It isn't a theory either. Rather, it's a good guess at a rate of development that we can sustain.

    I personally believe that the law is going to change in a few more years as computers reach a level of sophistication necessary to directly participate in more of the scientific research needed to bootstrap the next generation, gradually eliminating the man in the loop unless we find ways to start scaling the brain's capabilities. At that point, we may start to see the 18 months per generation become one of the variables of the law, itself scaling down toward zero.

  • by RicktheBrick ( 588466 ) on Monday February 20, 2006 @12:57PM (#14761732)
    My question is this: "What are we getting out of these faster and more powerful computers?" There have been several new supercomputers brought online in the last few years. Many of these are capable of doing over 50 trillion calculations a second. Yet when President Bush asks this country to develop technology to reduce our need for foreign oil, he says we need 20 years to do so. On a personal note, I want a small bed in which I can totally shut out the rest of the world. I want it to be totally soundproof and the same temperature and humidity all the time. In order to do this the bed must be small, so that I do not have to spend too much money to maintain it, and I must have a computer that will take care of the rest of the house so that I will still be alive the next morning and will find the rest of my house still intact. In order to do this the computer will have to be able to listen to all the noises in the house and determine whether a noise needs my assistance to either correct it or flee from it. I want a computer that I can trust my life with.
  • by DancesWithBlowTorch ( 809750 ) on Monday February 20, 2006 @01:13PM (#14761850)
    In fact you can go something like 26 more orders of magnitude smaller before you start reaching the feasible limit of measurable existence. And yes, subatomic particles could theoretically be used in processors.
    IANAProcessor Designer, but from what I've learned in undergraduate quantum mechanics, the problem is not the "limit of measurable existence" (I assume you are referring to the Planck length here) but Heisenberg's uncertainty principle:

    The electrons in your transistors are "blurry." When the walls of their potential wells (i.e. the widths of the wires) get too low, the electrons will start to tunnel between them in numbers that are unacceptable for the operation of a logical circuit. Note that the tunneling probability is proportional to something like e to the minus the barrier height, so there is no critical limit, rather a smooth transition from "no problem" to "show-stopper."

    So the real question here, which is left to the audience, is at what width we get a real problem with tunneling currents. (I assume that on contemporary CPUs, the effect is already measurable, yet correctable.)
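
To make the "smooth transition" point concrete, here is a rough rectangular-barrier estimate, T ~ exp(-2*kappa*L) with kappa = sqrt(2m(V-E))/hbar. The 3 eV barrier height is an assumed round number for illustration, not a real oxide band offset for any particular process.

```python
# Rough, illustrative estimate of how electron tunneling probability scales
# with barrier width, using the rectangular-barrier approximation T ~ exp(-2*kappa*L).
import math

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # J per eV

def tunneling_probability(barrier_ev: float, width_nm: float) -> float:
    """Transmission ~ exp(-2*kappa*L) for a rectangular barrier of assumed height."""
    kappa = math.sqrt(2.0 * M_E * barrier_ev * EV) / HBAR   # 1/m
    return math.exp(-2.0 * kappa * width_nm * 1e-9)

for width in (3.0, 1.2, 0.5):
    print(f"{width:4.1f} nm barrier -> T ~ {tunneling_probability(3.0, width):.3e}")
# The probability grows exponentially as the barrier thins: no sharp cutoff, just a
# smooth slide from negligible leakage to show-stopping leakage.
```
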
  • Re:Why small? (Score:5, Interesting)

    by necro81 ( 917438 ) on Monday February 20, 2006 @01:24PM (#14761937) Journal
    There are several reasons why the industry is focused on smaller. I do not work for a semiconductor manufacturer, so some of my information may be a little off.

    1) Defects and Yield. Most processors are manufactured out of silicon wafers 300 mm in diameter. The wafer is very pure silicon (before they start doping it), and the crystal structure is one of the most perfect and regular that humankind has ever been able to produce (at least on a large scale). The industry doesn't do this merely to be perfectionist - it costs a LOT of money and infrastructure to do it - but simply because defects in the crystal structure and silicon purity result in non-functional chips. The statistics and probabilities behind how many defects get scattered across a wafer, and how many potentially useful chips those defects knock out, have been heavily studied by the industry. The yield that one gets from a single wafer that has many chips on it is a function of defect density and chip size (and other things). A larger chip naturally has a greater chance of containing a defect than a smaller chip. There isn't much more that the industry can do to reduce the number of defects on a wafer, so in order to increase yield, one of the things the industry banks on is decreasing the chip size. The yield for, say, op-amps (which are very tiny chips) is much higher than for full-blown processors.
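
The yield argument above is often summarized with the simple Poisson model Y = exp(-D*A). The defect density below is an assumed round number chosen only to illustrate the trend; real fab figures vary.

```python
# Poisson yield model: yield falls off exponentially with die area.
import math

DEFECTS_PER_CM2 = 0.5  # assumed defect density, for illustration only

def poisson_yield(die_area_cm2: float, defect_density: float = DEFECTS_PER_CM2) -> float:
    return math.exp(-defect_density * die_area_cm2)

for name, area in (("tiny op-amp", 0.02), ("mid-size chip", 0.5), ("big CPU die", 2.0)):
    print(f"{name:13s} ({area:4.2f} cm^2): yield ~ {poisson_yield(area):.1%}")
```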

    2) Signal Distance. The upper limit of speed for an electronic signal in a chip is the speed of light. That's really fast, but not infinite. In fact, compared to the clock speed of the chip itself, the speed of light becomes significant. The speed of light in a vacuum is 3 * 10^8 m/s. In one nanosecond, light travels 30 cm. For a 4 GHz processor, light can travel only 7.5 cm between clock cycles. In truth, the electronic signals in the chip travel slower than that, so the distances between various parts of the chip become significant. For a chip as large as several inches, it can take quite a long time, many clock cycles, for bits to make it from one end to the other. Wasted clock cycles = reduced performance. So, in order to continue increasing performance, the industry has worked very hard to keep the size of the processor chip very small, so that it takes very little time for signals to travel across it.
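
The distances quoted above check out with a couple of one-liners. This uses the vacuum speed of light, so these are upper bounds; real on-chip signals propagate noticeably slower.

```python
# Distance light covers per nanosecond and per clock cycle at 4 GHz.
C = 3.0e8  # m/s, vacuum speed of light

per_ns_cm = C * 1e-9 * 100        # distance in 1 ns, in cm
per_cycle_cm = C / 4.0e9 * 100    # distance per clock cycle at 4 GHz, in cm
print(f"per nanosecond:  {per_ns_cm:.1f} cm")    # ~30 cm
print(f"per 4 GHz cycle: {per_cycle_cm:.1f} cm") # ~7.5 cm
```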

    3) Power. It would take a while to explain the physical reasons behind it (see a VLSI or semiconductor textbook for a full analysis), but the operating voltage of a transistor goes down as its physical size goes down. It used to be that 5 V was the working voltage of most all transistors. Then it moved to 3.3 V. Nowadays, the core voltage of most processors is around 1 V. As the operating voltage has decreased, so too has the power dissipation per transistor. The decreasing feature size of transistors and photolithographic techniques is largely to thank for this. The reason that processors now dissipate such a large amount of heat is that, even though the per-transistor power has decreased, the number of transistors in the chip has increased more rapidly. If one tried to make a P4 chip using 350 nm techniques (which used to be the standard feature size less than a decade ago), the chip would probably dissipate many hundreds of Watts.
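
The voltage argument above follows from the standard dynamic-power relation P ~ C*V^2*f per switching transistor. The capacitance, voltage, and frequency values below are assumed representative figures for a rough comparison, not datasheet numbers for any real process.

```python
# Dynamic switching power per transistor: P = C * V^2 * f (assumed example values).
def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    return c_farads * v_volts ** 2 * f_hz

old = dynamic_power(c_farads=10e-15, v_volts=5.0, f_hz=100e6)  # assumed "5 V era" transistor
new = dynamic_power(c_farads=1e-15, v_volts=1.0, f_hz=3e9)     # assumed modern-ish transistor
print(f"old-style transistor: {old * 1e6:.1f} uW")   # ~25 uW
print(f"modern transistor:    {new * 1e6:.1f} uW")   # ~3 uW, despite a much higher clock
```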

    4) Speed. One would again have to check out a VLSI textbook for a full explanation, but (physically) smaller transistors can switch states faster than large ones. While clock speed is far from the be-all, end-all measure of processor performance, it is generally true that faster transistors result in faster performance (hence the whole notion of overclocking). Using the same "P4 made using 350 nm technology" example, it would be impossible to run such a chip at anything close to 4 GHz. In fact, I doubt you'd be able to get it to run at even 1 GHz - the transistors would simply be too slow. I don't recall exactly when 350 nm was the standard technology used by the industry, but I imagine you'd find it coincided roughly with the times when chip speeds were measured in the low hundreds of megahertz.

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...