Moore's Law Staying Strong Through 30nm

jeffsenter writes "The NYTimes has the story on IBM, working with JSR Micro, advancing photolithography research to allow 30nm chips. Good news for Intel, AMD, Moore's Law and overclockers. The IBM researchers' technology advance allows the same deep ultraviolet rays used to make chips today to be used at 30nm. Intel's newest CPUs are manufactured at 65nm, and present technology taps out soon after that. This buys Moore's Law a few more years."
This discussion has been archived. No new comments can be posted.

  • on the BUSS (Score:5, Interesting)

    by opencity ( 582224 ) on Monday February 20, 2006 @10:31AM (#14760735) Homepage
    At what point does BUSS technology break down? Figured this was where to ask.
    • Re:on the BUSS (Score:4, Informative)

      by RabidMoose ( 746680 ) on Monday February 20, 2006 @11:30AM (#14761087) Homepage
      To whoever modded the parent redundant. It was the first post. How is that possible?
    • On copper, the speed is probably limited to somewhere in the 10 GHz range even for short drops, much lower if you have a parallel bus. But given serial interfaces like HyperTransport, you can just keep adding lanes (to a reasonable limit) to increase bandwidth.

      When we reach the limit of copper serial busses, we branch out to optical serial busses, which have the potential to run as fast as hundreds of GHz. Probably won't see these for at least a decade.
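
      A rough sketch of why adding lanes scales bandwidth - Python, and the per-lane rate and coding overhead below are illustrative assumptions, not any particular spec:

        # Back-of-the-envelope: aggregate bandwidth of a multi-lane serial link.
        # Per-lane signalling rate and coding efficiency are illustrative assumptions.
        def aggregate_bandwidth_gbps(lanes, gbit_per_lane, coding_efficiency=1.0):
            """Usable bandwidth in Gbit/s for a multi-lane serial link."""
            return lanes * gbit_per_lane * coding_efficiency

        # e.g. a hypothetical 16-lane link at 2.5 Gbit/s per lane with 8b/10b coding
        print(aggregate_bandwidth_gbps(16, 2.5, coding_efficiency=0.8))  # 32.0 Gbit/s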
  • by bronney ( 638318 ) on Monday February 20, 2006 @10:31AM (#14760741) Homepage
    I am too lazy to learn these things from scratch, but would anyone care to tell us what's the theoretical minimum width we can go to before electrons start jumping wires? I hope it's not 5nm.
    • by PoconoPCDoctor ( 912001 ) <jpclyons@gmail.com> on Monday February 20, 2006 @10:44AM (#14760811) Homepage Journal

      While the smallest chunk of silicon we could lay down would be one atom of it, there are things far smaller. In fact you can go something like 26 more orders of magnitude smaller before you start reaching the feasible limit of measurable existence. And yes, subatomic particles could theoretically be used in processors.

      The process designation refers to the distance between the source and drain in the FETs (transistors) on a processor. Keep in mind that this distance is by no means the smallest thing in the processor - the actual gate oxide layer is tiny by comparison, with Intel's 65nm process having only 1.2nm of the stuff. That's less than 11 atoms thick.

      Found this on a thread at bit-tech.net forums. [bit-tech.net]
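
      For the curious, the "atoms thick" figure is just the oxide thickness divided by an assumed atomic spacing. A quick Python sketch - the spacing values are rough assumptions, not measured numbers:

        # Layers in a 1.2 nm gate oxide for a range of assumed atomic spacings.
        oxide_thickness_nm = 1.2
        for spacing_nm in (0.11, 0.2, 0.3):
            print(f"{spacing_nm} nm spacing -> ~{oxide_thickness_nm / spacing_nm:.1f} atomic layers")
        # 0.11 nm -> ~10.9 layers, 0.2 nm -> ~6 layers, 0.3 nm -> ~4 layers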

      • The process designation refers to the distance between the source and drain in the FETs (transistors) on a processor. Keep in mind that this distance is by no means the smallest thing in the processor - the actual gate oxide layer is tiny by comparison, with Intel's 65nm process having only 1.2nm of the stuff. That's less than 11 atoms thick.

        Not quite correct - the process designator refers to the size of a minimum-width metal line; the physical gate length is usually significantly smaller.

        130nm c
      • In fact you can go something like 26 more orders of magnitude smaller before you start reaching the feasible limit of measurable existence. And yes, subatomic particles could theoretically be used in processors.

        IANAProcessor Designer, but from what I've learned in undergraduate quantum mechanics, the problem is not the "limit of measurable existence" (I assume you are referring to the Planck length here) but Heisenberg's uncertainty principle:

        The electrons in your transistors are "blurry". When the

    • I am too lazy to learn these things from scratch, but would anyone care to tell us what's the theoretical minimum width we can go to before electrons start jumping wires? I hope it's not 5nm.

      The theoretical minimum width for the current type of transistors is in the .2nm range, the width of a single atom. Tunneling and other quantum effects will very likely prevent us from ever getting that low, however.
    • Electrons may not be jumping wires, but they're already starting to jump across the gate dielectric. This is only a few (dozen or two) atomic layers in thickness already.

      Lithography is not the only problem that must be solved in order to improve the density of the chips. There are problems involving the gate oxide, the dielectric of the insulator, routing, leakage currents, and interconnect capacitance issues.

      The chips may get more dense, but the individual gates may no longer be getting faster. Getting faste

  • by Aslan72 ( 647654 ) <psjuvin@i l s t u . e du> on Monday February 20, 2006 @10:34AM (#14760748)
    "This buys Moore's Law a few more years."

    I've heard that more than a few times. Isn't that why it's a law? It seems like every 18 months or so, Moore's Law ends up almost petering out (kind of like Apple...) and there ends up being a redeeming breakthrough that keeps it around.

    If it wasn't a law, we'd just call it Moore's hypothesis, or Moore's pitiful attempt at justifying an upgrade. I remember the day when 50 MHz was the theoretical limit for speed and then they got the grand idea of putting a heat sink on the chip.

    --pete
    • by Waffle Iron ( 339739 ) on Monday February 20, 2006 @10:48AM (#14760844)
      Isn't that why it's a law?

      It's not a law. It's just incorrectly called a law.

      It should be plainly obvious that any exponentially increasing phenomenon can't be a "law". If this so-called law were to continue unabated for a couple of centuries, the number of transistors in a chip would exceed the number of atoms on planet earth. Clearly, a limit is going to be reached well before that happens.
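
      Rough numbers behind the "couple of centuries" claim - a quick Python sketch; the atom count and starting transistor count are order-of-magnitude assumptions only:

        import math

        atoms_on_earth = 1e50        # commonly quoted order-of-magnitude estimate
        transistors_today = 3e8      # a 2006-era CPU, roughly
        doubling_time_years = 1.5    # the "18 months" version of the law

        doublings = math.log2(atoms_on_earth / transistors_today)
        print(f"{doublings:.0f} doublings -> ~{doublings * doubling_time_years:.0f} years")
        # ~138 doublings -> ~207 years, i.e. "a couple of centuries"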

      • by OverlordQ ( 264228 ) on Monday February 20, 2006 @10:51AM (#14760864) Journal
        yes, by then we will all have Nural Net Processahs.
      • It should be plainly obvious that any exponentially increasing phenomenon can't be a "law". If this so-called law were to continue unabated for a couple of centuries, the number of transistors in a chip would exceed the number of atoms on planet earth. Clearly, a limit is going to be reached well before that happens.

        Unless you use something smaller than atoms...
      • by RhettLivingston ( 544140 ) on Monday February 20, 2006 @12:19PM (#14761411) Journal

        Sure, it's not a law. But...

        I'm not sure that it is so clear that the limit will truly be reached before a processor capable of performing as if it had a transistor for every atom of the earth is created. Assuming we're still around, I believe we'll be able to maintain the increases in speed and scale predicted by Moore's law through means we can only just imagine now.

        Certainly, it is starting to appear that we'll see combinations of quantum and other processing. There was also recently a development in tri-state per bit quantum storage that may be extendable to n-state per bit. Perhaps we'll find ways to put subatomic particles together into things other than atoms that don't even require atoms as a trapping mechanism and be able to fully exploit that scale. We could explore processing in ways where a single "transistor" or whatever happens to be the smallest scale component participates in different ways in multiple operations or memories, like neurons already do. Technologies for processing that don't generate anywhere near as much waste heat are appearing (magnetic, for instance), thus allowing the full exploitation of the third dimension to look more plausible without hitting heat dissipation barriers (solid cubes instead of layered wafers). And what about other dimensions? At the atomic scales we're reaching, it is much more believable that we'll eventually be able to exploit some physical phenomenon to put some of the processing or storage mechanisms into non-temporospatial dimensions.

        Anyway, I believe it to be very unimaginative to say that Moore's Law will ever hit a barrier. I would call it a virtual law. Sure, it's not a "law" as in a law of physics. It isn't a theory either. Rather, it's a good guess at a rate of development that we can sustain.

        I personally believe that the law is going to change in a few more years as computers reach a level of sophistication necessary to directly participate in more of the scientific research necessary to bootstrap the next generation, gradually eliminating the man in the loop unless we find ways to start scaling the brain's capabilities. At that point, we may start to see the 18 months per generation become one of the variables of the law that is scaling down toward 0.

        • My question is this: "What are we getting out of these faster and more powerful computers?" There have been several new supercomputers brought online in the last few years. Many of these are capable of doing over 50 trillion calculations a second. Yet when President Bush asks this country to develop technology to reduce our need for foreign oil, he says we need 20 years to do so. On a personal note, I want a small bed in which I can totally shut off the rest of the world. I want it to be totally sound proo
          • First, be careful what you wish for. You'd better add some sensory data to that sleep environment you claim to want. Sensory deprivation is an extremely effective means of driving someone insane. You may be a vegetable before you get through the first night :-)

            Second, be careful what you wish for. It seems that you're wishing for a computer with extensive capabilities of gathering data about you and your environment, the capability of making complex judgements concerning whether your safety or a probab

      • It is a law:

        1 a (1) : a binding custom or practice of a community : a rule of conduct or action prescribed or formally recognized as binding or enforced by a controlling authority

        That's the very first definition of law from m-w.

        So in particular, the controlling authority (Moore) has decreed that transistor density shall double at such and such a rate, and the industry has obeyed this rule of conduct.
      • If this so-called law were to continue unabated for a couple of centuries, the number of transistors in a chip would exceed the number of atoms on planet earth. Clearly, a limit is going to be reached well before that happens.

        I can't seem to find the source right now (it's buried somewhere on the wiki article about human overpopulation) but...

        If the ratio of human births to the population remains the same, there will be more humans than the (guesstimated) number of atoms in the universe in about 17,000 years.

        Now exp
      • It's not a law. It's just incorrectly called a law.

        Right!

        It should be plainly obvious that any exponentially increasing phenomenon can't be a "law".

        Wrong!

        For those that don't know, Moore's "law" says something like, "Every 18 months, humans (hopefully those that work for Intel) will figure out how to double the existing transistor count in a CPU".

        Anybody that can put an open ended and exponentially increasing assumption on human behavior _AND_ call it a law should be in marketing, not science.

        Now, regarding
      • >It's not a law. It's just incorrectly called a law.

        Writing with precision is good. Exponential growth of transistor counts is not a statute and it's not a physical "law" (itself a questionable turn of phrase). It is sloppy to say "Moore's Law".

        We could call it a "rule of thumb" or a "good guess", but those are inadequate terms for an observation that has held true for 30 years and 6 orders of magnitude.

        Moore's Insight? Moore's Prophecy? Moore's Unexpected But Consistent Regularity In Industrial-Economic
      • They always say stuff like this. The old systems were limited because you could never have more than x number of vacuum tubes. Then it went to chips, and there was always some reason why the chip couldn't get any bigger or denser or hotter...and there always turned out to be another way.

        So now we can't have more gates than there are atoms...but what if we're using subatomic particles, so that one atom's worth of particles can comprise multiple gates? What if we find some way to move beyond gates, so that we
      • If this so-called law were to continue unabated for a couple of centuries, the number of transistors in a chip would exceed the number of atoms on planet earth.

        Quite a few very intelligent people think that's exactly what will happen [singularity.org], possibly within our (extended) lifetimes.

    • "Moore's law" is not a law of nature, it is a marketing strategy of Intel.
      • You sir, were moderated unjustly. If you read Moore's original paper, he's speaking directly about marketing.

        (And while Moore's Law is phrased as an engineering challenge, Intel has historically used it as a form of "planned obsolescence" to drive demand for new CPUs.)
    • ...for the same reasons we call it Murphy's law. The world would be a pretty terrible place if absolutely everything that could ever possibly go wrong, did. In both cases it's just a perception that things behave in a law-like manner even though there's obviously no scientific basis and with plenty of counterexamples. As far as technology predictions goes, it is disturbingly accurate, it follows a mathematical formula as most laws do... so we call it a law. It's a joke, laugh.

      And the rub of it is exactly wh
    • Well, Moore's law is almost petering out again. Some serious difficulties appear to be rearing their heads once you go below about 30nm, not that there aren't substantial difficulties already.

      Interconnect capacitance is starting to be a real killer. As transistor sizes shrink, their capacity to source & sink current drops a bit. Even with using copper for the interconnect layers, because the cross section of these wires is so small the resistance is non-negligible. What this all means is that the t
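
      To put a rough number on this, here's a crude RC estimate for a long on-chip copper wire (Python; the geometry and capacitance-per-length figures are illustrative assumptions):

        # Crude RC delay estimate for a long on-chip copper interconnect.
        rho_cu = 1.7e-8                       # resistivity of copper, ohm*m
        width, thickness = 100e-9, 200e-9     # assumed wire cross-section, m
        length = 1e-3                         # a 1 mm "global" wire
        cap_per_m = 0.2e-15 / 1e-6            # ~0.2 fF/um, a typical rough figure

        R = rho_cu * length / (width * thickness)   # ~850 ohms
        C = cap_per_m * length                      # ~200 fF
        print(f"R = {R:.0f} ohm, C = {C*1e15:.0f} fF, RC = {R*C*1e12:.0f} ps")
        # ~170 ps for one RC time constant - comparable to a whole clock period,
        # which is why long wires need repeaters and pipelining.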

    • So it buys us a few more years...I'm just wondering what's been happening the last 2 or 3 - it seems we've been stuck with the 3 GHz processors for a little too long now.
  • by merced317 ( 617353 ) <cjg9411&rit,edu> on Monday February 20, 2006 @10:36AM (#14760766)
    since RIT has been doing 26nm. http://www.physorg.com/news10755.html [physorg.com]
    • TFA doesn't say, but perhaps IBM are developing the industrial process (which will usually come after the initial research that says it's possible, which is probably what RIT were doing). It's one thing to show something is possible in a lab, and another thing to develop a process to do it on a large scale.
    • Your link doesn't actually explain what EWL is, but it's probably reasonable to assume that it won't be very compatible with 193nm (light wavelength) litho equipment.

      IBM's announcement has a lot to do with stretching the usefulness of existing litho equipment and materials down to nodes it was never expected to reach. This has already been done again (65nm) and again (45nm), beyond what was once expected. IBM is saying, add water, and we'll do it again (30nm).
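
      For reference, the "add water" trick maps onto the usual Rayleigh resolution scaling. A quick Python sketch - the k1 and NA values are typical assumptions, not IBM's actual process numbers:

        # Rayleigh criterion: minimum half-pitch ~= k1 * wavelength / NA.
        def half_pitch_nm(wavelength_nm, numerical_aperture, k1):
            return k1 * wavelength_nm / numerical_aperture

        wavelength = 193  # deep-UV ArF laser, nm
        print(half_pitch_nm(wavelength, 0.93, 0.35))  # "dry" tool: ~73 nm
        print(half_pitch_nm(wavelength, 1.35, 0.25))  # water immersion, aggressive k1: ~36 nm
        # Immersion raises the effective NA (water's refractive index is ~1.44),
        # which is how 193 nm light gets pushed toward the ~30 nm regime.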

  • by QuantumFTL ( 197300 ) * on Monday February 20, 2006 @10:39AM (#14760784)
    I believe Moore's Law (or, rather, the modified version about processor speed rather than transistor count) will transition to a new regime soon - that of "average" exponential improvement in the form of a punctuated near-equilibrium.

    I believe that the chip industry will have to shift paradigms as the limit of a technology approaches, and during these shifts there will be a period of relative non-improvement as new techniques are refined, implemented, and large-scale facilities are built.

    There are so many promising technologies on the horizon (photonic computing, three-dimensional "chips," quantum computation, etc.), but the transition to each will be very bumpy, not at all smooth like the last 40 years of refining two-dimensional semiconductors.

    As times change, what we know as Moore's law will change with it. It's likely that the "average" improvement will continue to follow the law more or less (considering that it is driven more heavily by economics than technology). Computers will continue to get faster, cheaper, and able to do things we wouldn't have thought we needed to do before.
    • If you are simply talking about Moore's Law in terms of processing power, there are other places to gain improvements rather than just compactness of chips. There is also parallel processing technology, which is still steadily improving.

      Then, far off over the horizon, there's the possibility of quantum computing, which would make for a ridiculously huge surge in processing power all at once.

      That's fundamentally how Moore's Law works: as soon as the current paradigm starts to get maxed out, we simply shift
      • by QuantumFTL ( 197300 ) * on Monday February 20, 2006 @11:31AM (#14761093)
        If you are simply talking about Moore's Law in terms of processing power, there are other places to gain improvements rather than just compactness of chips. There is also parallel processing technology, which is still steadily improving.

        There are many important algorithmic problems that are inherently serial. Some things are mathematically impossible to parallelize. Also, limitations caused by enforcing cache coherency, communications interconnects, and resource access synchronization/serialization create bottlenecks in parallel systems. The astrophysics simulation code that I parallelized is almost entirely math operations on large arrays (PDE solving), yet there are diminishing returns past 48 processors due to communications latency. Better programming techniques can push this limit, but it is difficult to design software that mitigates the effects of this kind of latency without many man-hours spent to handle it. (A back-of-the-envelope sketch of this diminishing-returns effect is at the end of this comment.)

        Then, far off over the horizon, there's the possibility of quantum computing, which would make for a ridiculously huge surge in processing power all at once.

        I mentioned this in my post, however there is a bit of a catch. Quantum computing, practically speaking, is only useful for certain problems - problems that are "embarrassingly parallel." QC does not help with fundamentally serial problems, and is likely to be impractical beyond a critical number of qubits due to quantum decoherence; even quantum error correction can only stretch so far. Great for cryptography/number-theoretic operations, and probably many optimization problems (scheduling perhaps?), but certainly not for standard computation. Problems (like database queries) that require large amounts of data to be stored in a quantum-coherent fashion are unlikely to be practical.

        "That's fundamentally how Moore's Law works: as soon as the current paradigm starts to get maxed out, we simply shift to another paradigm."

        Ahh, but that's just it - there is a cost to the switch in terms of both time and money. What I am saying is that yes, we can continue to change paradigms whenever we hit a limit, however these transitions will be very expensive and will cause "delays" during which little improvement on shipping computer technology will be seen.
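
        On the diminishing-returns point above: Amdahl's law is the textbook way to put numbers on it - it models the serial fraction rather than communication latency, but the shape of the curve is the same. A quick Python sketch:

          # Amdahl's law: speedup on n processors when a fraction p of the work
          # is parallelizable. Illustrative only.
          def amdahl_speedup(p, n):
              return 1.0 / ((1.0 - p) + p / n)

          for n in (1, 8, 48, 256, 1024):
              print(n, round(amdahl_speedup(0.98, n), 1))
          # Even with 98% parallel work, speedup saturates near 1/(1-p) = 50x,
          # no matter how many processors you throw at it.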

        • Yes, there are limits as to how far things can be parallelized. However, for most common uses of computers we are far from those limits - even many of the languages that are commonly used don't support threading as a standard feature, or the support is not robust. How many languages support loop parallelization as a standard optimization?

          Progress is being made though - for example, computing languages such as Java have been adding support for atomic variables and other facilities that reduce or eliminate the
        • Thank you! Very insightful post.

          And you can quote me.

          Now, on to some observations. We have been at a state of equilibrium now for a few years.

          It is slightly difficult to determine exactly what the bounds are, because we are in it right now. I am guessing that the "slowdown" started around the time of the Pentium Pro ('96?).

          The "clue" was the introduction of "Beowulf" clusters where processing is balanced with communications overhead.

          Intel is fighting this with Itanium, Sun with Niagara.

          I suspect that the new
          • It doesn't work like that. When the new round of more efficient CPU chips arrive, the technology used to create Beowulf clusters will just be used to cluster THEM, vastly increasing the speeds. It will, however, limit the use of clusters, as single chip computers will necessarily be simpler, and so as Beowulf clusters move higher up the chain, they'll lose more of the low end.

            There will always be a need to cluster computers in a high latency manner to deal with intractable problems...and only quantum comp
        • Some things are mathematically impossible to parallelize. Also limitations caused by enforcing cache coherency, communications interconnects, and resource access synchronization/serialization create bottlenecks in parallel systems.

          Explain the human mind, then.
          • Explain the human mind, then.


            Simple. The amazing things that the human brain is capable of doing are parallelizable. Things like recognizing the shape of letters or phonemes in speech are definitely parallelizable tasks.

            Try doing something that isn't parallelizable, like modular exponentiation of a 2048-bit number, in the human brain. It goes very slowly.

            Melissa
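
            For reference, the standard square-and-multiply algorithm below is exactly the kind of chain of dependent steps being described: each iteration needs the previous squaring, so it resists parallelization. A Python sketch:

              # Square-and-multiply modular exponentiation; inherently sequential.
              def mod_pow(base, exponent, modulus):
                  result = 1
                  base %= modulus
                  while exponent:
                      if exponent & 1:
                          result = (result * base) % modulus
                      base = (base * base) % modulus
                      exponent >>= 1
                  return result

              # Matches Python's built-in three-argument pow():
              assert mod_pow(7, 560, 561) == pow(7, 560, 561)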
          • In case you hadn't noticed, the human mind is quite clumsy at numerous kinds of things. Just compare your arithmetic with that of a computer. You probably have more processor cycles/second than any computer yet available, but because you, essentially, can only use them in parallel, there is a large range of problems at which computers are faster...but not infinitely faster. And we're looking ahead (one of our parallel skills) and seeing a wall.

            The human mind evolved (largely) to recognize patterns in 2.5
  • by Captain Zep ( 908554 ) on Monday February 20, 2006 @10:39AM (#14760785)
    Unfortunately most of the extra processing speed this gets you will be sucked up by all the DRM software running self-checks on itself, calling the mothership, and triple-checking that you are licensed to execute the next instruction.

    So your computer will be nice and fast, just not any of your applications...

    Z.

    • Well, while that might be true for some, I've not yet seen any DRM software coming from the OSS camp. You all run DRM-enabled AIM 6.6.6; I'll sit over here nice and happy running my gAIM 7.0 on my 23 terahertz AMD Zeus 5400k+ with my 1.2 jigawatt power supply. It'll run nice and fast.

      Oh, and that's not to mention Linux not having DRM. And before you tell me that I won't be able to play my DVDs, or MP3s, or whatever, I'll point out Ogg Vorbis for audio files (no DRM in that, nor will there be) and I'll also po

  • Moore is Less (Score:3, Insightful)

    by dcw3 ( 649211 ) on Monday February 20, 2006 @10:45AM (#14760821) Journal
    I've heard the predictions for the end of Moore's Law, but haven't paid attention to the reasoning behind them. Is there some (sub)atomic barrier that is supposed to cause this? I was curious whether further technological breakthroughs might prove these predictions incorrect. What would the predictions have been 20 or 30 years ago for our current tech? I suspect few, if any, were able to guess correctly.
    • http://www.hal-pc.org/journal/03feb/column/baby/baby.html [hal-pc.org]

      To summarize the portion of that article of interest: A silicon atom is 0.3 nm across. We are currently building transistor devices on 45nm processes. So if we reduce the process size to a single atom (and that's being generous: how do we control a device composed of one atom?), we'd achieve 150x density in each of two directions, which would be a 22,500-times improvement. That's enough for less than 15 more doublings, but I'll be generous and give you the
      • To summarize the portion of that article of interest: A silicon atom is 0.3 nm across. We are currently building transistor devices on 45nm processes. So if we reduce the process size to a single atom (and that's being generous: how do we control a device composed of one atom?), we'd achieve 150x density in each of two directions, which would be a 22,500-times improvement. That's enough for less than 15 more doublings, but I'll be generous and give you the full 15. So if Moore's law is 18 months (and heck, I'll give
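
        The arithmetic checks out - a quick Python verification using the same assumed numbers (45 nm today, 0.3 nm per atom):

          import math

          linear_shrink = 45 / 0.3                 # 150x in each direction
          area_density_gain = linear_shrink ** 2   # 22,500x
          doublings_left = math.log2(area_density_gain)
          print(f"{area_density_gain:.0f}x density -> {doublings_left:.1f} doublings")
          # 22500x density -> 14.5 doublings, i.e. "less than 15";
          # at 18 months per doubling that's only ~22 more years of shrinking alone.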

  • by B3ryllium ( 571199 ) on Monday February 20, 2006 @10:45AM (#14760824) Homepage
    All this means is that AMD and Intel have to license the technology from a competitor. That's hardly good news for them, and it probably means higher CPU prices for us.

    This isn't good news at all.
    • AMD has been licensing technologies from IBM for many years, including SOI etc. It hasn't seemed to hurt them much. Sometimes leveraging R&D across multiple manufacturers is a very good thing because it makes more money available for research.

    • This is only one way to make 30nm chips.

      Additionally, maybe they'll pull off a patent swap, or will make other refinements to the process and contribute them in exchange for a reduction (or elimination) of license fees.

      Or maybe since JSR Micro is a supplier to fabs, if you buy the exotic quartz crystal lens and other equipment from them (maintenance contracts?), perhaps JSR Micro will give the patent license for the process for free.

      Or maybe they won't patent it, or the process for making chips can be altered
    • AMD and IBM already cross-license their IP. Silicon on Insulator was an IBM technology, remember, that AMD now uses.
  • Well, NO. (Score:2, Interesting)

    Just being able to make thinner lines is not that huge a deal.

    There are several large cans of whup-ass that have to be overcome before you can make ICs that much smaller:

    • Lines are 2-D thingies, but conductors are 3-D. Your etching technology has to get X times better to keep up with the line-drawing technology.
    • Same thing with the active components. If you try making the transistor half the old linear dimensions, you have 1/8th the volume of active silicon. This leads to all kinds of problems with lea
    • Re:Well, NO. (Score:3, Insightful)

      by ChrisMaple ( 607946 )
      Current carrying capacity is important mostly for supply rails. In high complexity digital chips, the supply current mostly is routed on the highest metal layers, which are thicker than the layers near the transistors. These high layers are often almost completely dedicated to power distribution, so the lines can be quite wide.
      • Re:Well, NO. (Score:2, Insightful)

        by phsdv ( 596873 )
        You are correct about the power lines, but not all power routing is done in thicker metal layers.

        Besides, do not forget that you need a lot of current to charge a very small capacitor very fast! Modern minimum-sized transistors can switch 10 to 100mA each. These currents have to go through an almost 100nm-wide (copper) line. Those are still high current densities!

        Going back on topic, the really amazing thing is that they can make very small lines (30nm) with light of a much larger wavelength! Currently th

    • Re:Well, NO. (Score:5, Insightful)

      by lbrandy ( 923907 ) on Monday February 20, 2006 @11:31AM (#14761091)
      * Lines are 2-D thingies, but conductors are 3-D. Your etching technology has to get X times better to keep up with the line-drawing technology.
      * Same thing with the active components. If you try making the transistor half the old linear dimensions, you have 1/8th the volume of active silicon. This leads to all kinds of problems with leakage and power handling capability.
      * A line that's half as wide and half as thick has four times the resistance per unit length, and 1/4 the current-carrying capacity. You can try using a better conductor, but once you get to using copper, you're done.

      Why do I get the feeling that you actually have no idea what you are talking about, and neither do the people who modded you up? Etching, deposition, and lithography all go hand in hand when talking about an Xnm "process", therefore your comment about "thinner lines", in fact, makes no tangible sense. Lithography is the most difficult to shrink, not etching, so I'm really failing to see your point. It has been the main technical hurdle for the past 10 years.

      Furthermore, the "conductors" in a processor aren't nearly as dependent on size as the silicon-feature construction. You can have an extremely layered chip with larger conductors if need be (and modern chips are), so both comment #1 and #3 are reasonably meaningless.

      As for comment #2, yes, you are right: the "smaller transistor" problem is very well understood, and it's the reason it takes so long to construct smaller and smaller processes, because the physics and effects must be taken into account. Not all transistors on a chip are the same size, nor can all transistors be shrunk. There is a reason that Intel doesn't slap its Pentium IV plans into the new 30nm machine, and out comes a new chip. They have to go through and make sure that all the transistors that can be shrunk are, and that those that cannot aren't. This is a reasonably non-trivial task, but it is not impossible, nor a "large can of whup-ass".

      (PS: Thanks for the math lesson about 2d vs 3d in part 1. You might want to recheck part 3, with that in mind.)
      • What I was trying to convey was that the simplistic original article didn't even begin to convey the scope and depth of the challenges. Just being able to draw narrower lines isn't the be-all and end-all. If not several cans of whup-ass, at least one of worms.
    • You can try using a better conductor, but once you get to using copper, you're done.

      You misspelled "silver", and ignored whole classes of relatively exotic materials like carbon nanotubes.

  • by sphealey ( 2855 ) on Monday February 20, 2006 @10:50AM (#14760854)
    What happens when they get to -1 nm then? Can they keep going smaller?

    sPh
  • by Sheepdot ( 211478 ) on Monday February 20, 2006 @11:07AM (#14760941) Journal
    I think it was in 2000 that a /. patron actually listed the "complexity"-related proof that Moore's law died in 2000, but here's my contribution:

    Who said what?
    California Institute of Technology Professor Carver Mead was the one who dubbed it Moore's Law, a lofty title Moore said he was too embarrassed to utter himself for about 20 years. David House, a former Intel executive, extrapolated that the doubling of transistors doubles performance every 18 months. Actually, performance doubles more like every 20 months. Moore emphatically says he never said 18 months for anything.

    The rule also doesn't apply to hard-drive densities or to the growth of other devices. "Moore's Law has come to be applied to anything that changes exponentially, and I am happy to take credit for it," Moore joked.


    From:
    http://news.com.com/FAQ+Forty+years+of+Moores+Law+-+page+2/2100-1006_3-5647824-2.html?tag=st.num [com.com]

    This is not about MHz ratings, though for a while these were doubling at the same rate as transistors per square inch were. Moore's comments were about integrated circuit "complexity" at minimum component cost, which, if you are talking about transistors, has remained reasonably accurate. If you are talking about MHz per dollar, then you're going to find this is not accurate at all.

    Long story short, if you had a 2 GHz machine in early 2003 and you're wondering why you aren't on an 8 GHz machine now, it's because MHz ratings have NOTHING to do with Moore's Law. Which is why I suggest referring to the Wiki entry [wikipedia.org] on it.

    Also important is Kryder's Law [wikipedia.org] for HD storage capacity. Within a decade or two we may be able to store all creative works ever created on one drive.

    Case in point: Hard drives increase a thousand-fold in storage space every 10.5 years. In 1996 I purchased a Compaq computer with a 1 gig drive. That was an insane amount of space at the time, but now, 10 years later, it looks like I may be able to purchase my first TB drive soon.
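
    The "thousand-fold every 10.5 years" figure implies a doubling time of roughly a year; a quick Python check:

      import math

      # Kryder's-law style check: 1000x growth every 10.5 years.
      years_per_1000x = 10.5
      doubling_time_months = years_per_1000x * 12 / math.log2(1000)
      print(f"doubling every ~{doubling_time_months:.1f} months")  # ~12.6 months

      # Sanity check against the 1 GB (1996) -> 1 TB (2006) anecdote:
      print(round(2 ** (120 / doubling_time_months)), "GB after 10 years, starting from 1 GB")
      # ~720 GB - the right ballpark for "my first TB drive soon"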
    • "Also important is Kryder's Law for HD storage capacity."

      I followed your link. Now I understand why I never heard about this law... That graph is anything but a straight line. When you have a cluster of points, a gap, and another cluster of points, the easiest thing to fit through them is a straight line. In this case, you have two clusters separated by a gap, and even then it doesn't look straight.

      But thanks. Now I know that HD sizes don't follow an exponential law.

  • by ZachPruckowski ( 918562 ) <zachary.pruckowski@gmail.com> on Monday February 20, 2006 @11:45AM (#14761192)
    Now the problem here is that software seems to be getting less efficient. Even with faster processors, checking your email, web browsing and word processing now takes a lot more RAM than it used to. If software was getting more efficient, or at least holding to the same level, we'd be a lot farther ahead now.
  • Why small? (Score:2, Informative)

    by briglass ( 608949 )
    While this question will undoubtedly reveal my limited understanding of computer engineering, I will ask it anyway... Why is the industry obsessed with getting smaller chips? There's plenty of room on my desktop for a hefty five-inch or even ten-inch diameter chip if it meant greater processing power and/or speed. Is the reason that they shoot for smaller chips that by making the chip smaller and smaller, it can run more calculations per second just by virtue of the speed of the electrons through the circ
    • Quick answer: Once the size of a chip gets to a particular size, it's more economical to split the functionality among several chips. This is because defects exist in the silicon crystals of the semiconductor wafer. As the size of a chip increases, there is an exponentially increasing chance that the chip will have one of these defects in it, ruining the chip. With small chip sizes, most of the chips on a wafer are good. With large chip sizes most of the chips on a wafer are bad. About 1cm^2 seems to be the
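
      A standard first-order way to put numbers on this is a Poisson yield model; a quick Python sketch (the defect density is an illustrative assumption, not any real fab's number):

        import math

        # Poisson yield model: fraction of good dies ~= exp(-defect_density * die_area).
        defects_per_cm2 = 0.5

        for die_area_cm2 in (0.5, 1.0, 2.0, 4.0):
            good_fraction = math.exp(-defects_per_cm2 * die_area_cm2)
            print(f"{die_area_cm2} cm^2 die -> {good_fraction:.0%} yield")
        # 0.5 cm^2 -> 78%, 1 cm^2 -> 61%, 2 cm^2 -> 37%, 4 cm^2 -> 14%:
        # yield falls off exponentially as the die grows.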
    • There are a number of real reasons you don't want to go this direction as a chip maker:

      1) The chance of a chip killing flaw is proportional to the area of the chip.
      2) The cost of manufacturing each chip is proportional to the area of the chip.
      3) The time to transmit a signal across a chip is proportional to the length of the chip. We can build multi-chip and multi-core setups, but they are slower due to between CPU communication overheads.
      4) Clock distribution is a challenge over a large chip (this may pos
    • Re:Why small? (Score:5, Interesting)

      by necro81 ( 917438 ) on Monday February 20, 2006 @01:24PM (#14761937) Journal
      There are several reasons why the industry is focused on smaller. I do not work for a semiconductor manufacturer, so some of my information may be a little off.

      1) Defects and Yield. Most processors are manufactured out of silicon wafers 300 mm in diameter. The wafer is very pure silicon (before they start doping it), and the crystal structure is one of the most perfect and regular that humankind has ever been able to produce (at least on a large scale). The industry doesn't do this merely to be perfectionist - it costs a LOT of money and infrastructure to do it - but simply because defects in the crystal structure and silicon purity result in non-functional chips. The statistics and probabilities behind how many defects get scattered on a wafer, and how many potentially useful chips those defects knock out, have been heavily studied by the industry. The yield that one gets from a single wafer that has many chips on it is a function of defect density and chip size (and other things). A larger chip naturally has a greater chance of having a defect than a smaller chip. There isn't much more that the industry can do to reduce the number of defects on a wafer. In order to increase yield, one of the things the industry banks on is decreasing the chip size. The yield for, say, op-amps (which are very tiny chips) is much higher than for full-blown processors.

      2) Signal Distance. The upper limit of speed for an electronic signal in a chip is the speed of light. That's really fast, but not infinite. In fact, compared to the clock speed of the chip itself, the speed of light becomes significant. The speed of light in a vacuum is 3 * 10^8 m/s. In one nanosecond, light travels 30 cm. For a 4 GHz processor, light can travel only 7.5 cm between clock cycles. In truth, the electronic signals in the chip travel slower than that. So, the distance between various parts of the chip becomes significant. For a chip as large as several inches, it can take quite a long time, many clock cycles, for bits to make it from one end to the other. Wasted clock cycles = reduced performance. So, in order to continue increasing performance, the industry has worked very hard to keep the size of the processor chip very small, so that it takes very little time for signals to travel across it. (A quick check of these numbers is sketched at the end of this comment.)

      3) Power. It would take a while to explain the physical reasons behind it (see a VLSI or semiconductor textbook for a full analysis), but the operating voltage of a transistor goes down as its physical size goes down. It used to be that 5 V was the working voltage of most all transistors. Then it moved to 3.3 V. Nowadays, the core voltage of most processors is around 1 V. As the operating voltage has decreased, so too has the power dissipation per transistor. The decreasing feature size of transistors and photolithographic techniques is largely to thank for this. The reason that processors now dissipate such a large amount of heat is that, even though the per-transistor power has decreased, the number of transistors in the chip has increased more rapidly. If one tried to make a P4 chip using 350 nm techniques (which used to be the standard feature size less than a decade ago), the chip probably would dissipate many hundreds of watts.

      4) Speed. One would again have to check out a VLSI textbook for a full explanation, but (physically) smaller transistors can switch states faster than large ones. While clock speed is far from the be-all, end-all measure of processor performance, it is generally true that faster transistors result in faster performance (hence the whole notion of overclocking). Using the same "P4 made using 350 nm technology" example, it would be impossible to run such a chip at anything close to 4 GHz. In fact, I doubt you'd be able to get it to run at even 1 GHz - the transistors would simply be too slow. I don't recall exactly when 350 nm was the standard technology used by the industry, but I imagine that you'd find it coincided roughly with the times when chip speeds were mea
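
      A quick Python check of the distance numbers in point 2, plus the dynamic-power relation behind point 3 (the capacitance and voltage values are illustrative assumptions):

        # Point 2: how far light travels per clock cycle.
        c = 3e8  # speed of light in vacuum, m/s
        for clock_hz in (1e9, 4e9):
            print(f"{clock_hz / 1e9:.0f} GHz -> {c / clock_hz * 100:.1f} cm per cycle")
        # 1 GHz -> 30.0 cm, 4 GHz -> 7.5 cm; on-chip signals travel slower still.

        # Point 3: dynamic switching power scales as C * V^2 * f, so dropping the
        # supply from 5 V to 1.2 V alone cuts switching power by a large factor.
        def dynamic_power(c_farads, volts, freq_hz):
            return c_farads * volts ** 2 * freq_hz

        print(round(dynamic_power(1e-9, 5.0, 1e9) / dynamic_power(1e-9, 1.2, 1e9), 1))  # ~17x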
  • Actually the more interesting thing about Moore's law (in terms of total processing power) is that it holds way further back than most people think. Mechanical calculators' total numbers and performance (like Charles Babbage's difference engine) were also in accordance with Moore's law, and the two curves fit together quite nicely with the advent of the "many women" approach to computing and electronic computers. Even clock-making reflects Moore's law over the last hundreds of years - in terms of unit numbers
  • by Kaldaien ( 676190 ) on Monday February 20, 2006 @07:52PM (#14764248)
    Moore's law does not specify the density or even number of transistors on an integrated circuit, as many mistakenly assume; it merely states that integrated circuits double in complexity vs. cost to manufacture every 18 months. In fact, new manufacturing techniques alone, which lower the cost to manufacture can satisfy the law.

    Moore's law will probably continue after quantum well transistors are implemented and miniaturized. The Cell architecture and the push for multi-core processors also lend themselves well to Moore's law. I would wager that designing 4-8 core CPUs, multi-core CPUs with shared caches, and the new AMD chips that integrate the memory controller rather than using a Northbridge easily satisfies Moore's law.
