Intel

Andy Grove Says End Of Moore's Law At Hand

Jack William Bell writes "Intel chief Andy Grove says Moore's Law has reached its limit. Pointing to leakage currents in modern chips, he said: "Current is becoming a major factor and a limiter on how complex we can build chips." He said the company's engineers "just can't get rid of" power leakage. But, of course, this only applies to semiconductor chips; there is no guarantee that some other technology will not take over and continue the march of smaller, cheaper and faster processors. I remember people saying stuff like this years ago before MOSFET." Update: 12/11 22:01 GMT by T : Correction: the text above originally mangled Andy Grove's name as "Andy Moore."
  • Andy Moore? (Score:5, Informative)

    by ikewillis ( 586793 ) on Wednesday December 11, 2002 @02:45PM (#4864029) Homepage
    Shouldn't that be Andy Grove and Gordon Moore?
  • Moore's Law (Score:2, Interesting)

    by Anonymous Coward
    I wonder if there's a similar law representing toxicity to the environment of semiconductor manufacturing techniques.
    • by Twirlip of the Mists ( 615030 ) <twirlipofthemists@yahoo.com> on Wednesday December 11, 2002 @02:54PM (#4864154)
      Oooh, so Mother Nature needs a favor?! Well maybe she should have thought of that when she was besetting us with droughts and floods and poison monkeys! Nature started the fight for survival, and now she wants to quit because she's losing. Well I say, "Hard cheese."
    • I don't know... why don't you use the **COMPUTER** you're on to do some research, with software developed on other **COMPUTERS**, to find out why the demand for Si-based devices is so high..

      Kinda like someone driving to an environmental protest in a Yukon..

    • Moore Laws..? (Score:3, Insightful)

      by ackthpt ( 218170 )
      I wonder if there's a similar law representing toxicity to the environment of semiconductor manufacturing techniques.

      Regarding the natural world environment, you're correct, as I've seen some harsh criticism of the volume and toxicity of the waste byproducts of semiconductor manufacturing. It's not as simple as just adding a little sand and some magic and voila! It's probably not reported much because the wonders of innovation and heated competition make for sexier news writing.

      Something not mentioned much, but observed by more than a few grumbling parties, is the ever-increasing size of code. My first encounter with this was upgrading from RSTS/E 7.? to 8.0, which was darn exciting back in the day, yet the size of the kernel would have been about 50% larger if we had activated all the features *I* wanted (and since I was the admin, lemme tellya, it was darn painful to trim off a few features I lusted after to squeeze it into our memory and performance target). These days, it's often the OS. Ever notice how Windows installs got to needing more space than your entire first hard disk? The common response seems to be: just throw more memory at it. Yet I think there's a Moore-like law with versions of Windows, i.e. every 2 years a new version comes out with twice as much code.

      With the physical limitations of current components nearing the top of the "rate of declining return" curve, poor performance of the software will eventually catch up with users' expectations. Thus, leaner, faster code could become a market direction.

      "** NEW: Office-O-Lux, With 50% less redundant code! ***"

  • by dirvish ( 574948 ) <(dirvish) (at) (foundnews.com)> on Wednesday December 11, 2002 @02:46PM (#4864048) Homepage Journal
    But, of course, this only applies to semiconductor chips; there is no guarantee that some other technology will not take over and continue the march of smaller, cheaper and faster processors.

    I think they will just move away from silicon. Perhaps we have reached the limits of silicon, but there is lots of research being done by academia and chip manufacturers on other materials.
    • Yeah, materials science is going to get even bigger in the next couple of years. I should not have changed from Chemical to Electrical engineering ;).

      I don't think semiconductors are going anywhere any time soon, because there is no viable technology that I am aware of to replace them. When we see an alternative form of processing born, we can start the countdown to the end of semiconductors.

      I did a paper a while back on optical-based systems using optically controlled polarization rotators, filters, and the like to do binary and trinary logic, but the loss and size of such devices are huge.

      • Re:Other materials (Score:3, Informative)

        by mmol_6453 ( 231450 )
        I'm looking forward to semiconductors based on carbon crystals. (read, "diamond.") Germanium, Silicon and Carbon all have the same number of valence electrons (4), which is what makes them good semiconductors.

        Interesting to note, though, that while a germanium PN junction only has a voltage drop of 0.3V, silicon has a drop of 0.7V. Anyone know what the voltage drop would be for a carbon junction?

        Also, one of the main reasons they switched from germanium to silicon was silicon's greater endurance to physical stress. I'm pretty sure diamond will be still stronger, despite the doping.

        Maybe, just maybe, they'll be able to use channels in the diamond crystal as optic conductors. Considering crystalline Si is opaque, that would be a huge advantage. Wouldn't it be great if your clock signal was represented as a flash of light through the entire die? (Have to worry about reflection off the sides, though. Hmm.)

        Anybody else have thoughts or knowledge?
    • Re:Other materials (Score:4, Interesting)

      by IPFreely ( 47576 ) <mark@mwiley.org> on Wednesday December 11, 2002 @04:32PM (#4865197) Homepage Journal
      IIRC, Moore's law says computing power compared to cost will double every so-and-so. This doesn't have anything to do with the specific technology used to generate that power.

      If chip design is at its limit for reduction, then other factors can still come into play, parallelization and multiprocessing coming to mind. Multiprocessing hasn't reached any type of limit. As chipsets improve and CPUs play better together, overall computing power can continue to increase. (Yeah, all you geeks go on and tell me how multiprocessing isn't really doubling and is not as optimized, yadda yadda.)

      The point is, CPU reduction is not the only path to processing power. It has just been the easiest so far. Watch for other paths to be optimized and utilized as this option peters out.
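      A quick back-of-the-envelope on the "isn't really doubling" caveat: Amdahl's law caps overall speedup by the serial fraction of the work, no matter how many CPUs you add. A minimal sketch, assuming a purely illustrative 90%-parallel workload:

```c
/* Amdahl's law: why adding CPUs doesn't simply double throughput.
 * The 90% parallel fraction is an assumed, illustrative figure. */
#include <stdio.h>

static double speedup(double parallel_fraction, int n_cpus)
{
    double serial = 1.0 - parallel_fraction;
    return 1.0 / (serial + parallel_fraction / n_cpus);
}

int main(void)
{
    int n;
    for (n = 1; n <= 16; n *= 2)
        printf("%2d CPUs: %.2fx speedup\n", n, speedup(0.90, n));
    /* With 90% parallel work, 16 CPUs deliver only ~6.4x, not 16x. */
    return 0;
}
```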

  • Hmm.... (Score:4, Interesting)

    by craenor ( 623901 ) on Wednesday December 11, 2002 @02:47PM (#4864060) Homepage
    I'm curious what kind of results the experimentation in superconductivity and semiconductors will yield. They sound kind of mutually exclusive. But we may yet see Moore's Law revived and revised...

    Course, that's probably 15 years away...
    • Re:Hmm.... (Score:3, Insightful)

      by panZ ( 67763 )
      How the heck is this +4 Interesting?? It's time to hunt some mods... it looks like a 16-year-old first poster trying to use the word "superconductor" in a sentence; it doesn't contribute to the discussion in any way.

      Yes, we are all curious to see what the future holds for superconductors and semi-conductors (DUH). Yes, the two have nothing to do with each other in the context of this article. Superconductors can be used for transmission of signal, definitely important to computing, but not creation of logic; fundamentally, you need something that passes information in one direction under certain conditions but doesn't pass information when the same outside conditions are applied in the opposite direction (e.g. semi-conductive materials).
      Moore's Law doesn't need to be revived yet. It still holds true. Will it fail eventually? Absolutely. But if Grove could pull his head out of his ass and see the wood for the trees, he'd realize that it isn't going to happen soon. People stopped laughing at Quantum, Bio and other computing theories a long time ago. If you step back and look at the big picture, you'll see Moore's Law happily marching along and geeks like us making it happen. Grove is just shouting "Look at me! I'm talking about theory of computing but saying nothing!"

    • Easy Answer (Score:3, Funny)

      by Genady ( 27988 )
      Just enclose the processor in a static warp field and adjust the speed of light in your new proto-universe. Sheesh, come on people.
    • Re:Hmm.... (Score:3, Insightful)

      I'm curious what kind of results the experimentation in superconductivity and semiconductors will yield. They sound kind of mutually exclusive. But we may yet see Moore's Law revived and revised...

      Superconduction is overkill at this stage of the game.

      More efficient cooling technologies (liquid cooling, Peltiers, etc.) could keep Moore's Law alive an extra 5 years. The primary difficulty today is back-inductance. All the current in those tiny wires creates magnetic fields that resist the original current flow (this is why chips get so hot). As we all know, the cooler the chip, the faster it can run. (This is because there's less back-inductance at lower temperatures, superconduction being the optimal case.)

      Anyhow, once current fab processes reach the wall, cooling technologies will probably have several good years of improvement that will directly enhance chip performance. That gives us a little more time to research new approaches (optical computing is probably the next step).

  • by 216pi ( 461752 ) on Wednesday December 11, 2002 @02:48PM (#4864064) Homepage
    ...hearing this news the first time in 1989, and I read it the second time in 1994.

    So, we'll see. I wonder if it now starts applying to graphics cards.
  • 15% ! (Score:5, Funny)

    by nogoodmonkey ( 614350 ) on Wednesday December 11, 2002 @02:48PM (#4864065)
    The industry is used to power leakage rates of up to fifteen per cent, but chips constructed of increasing numbers of transistors can suffer power leakage of up to 40 per cent, said Grove.

    No wonder my laptop only gets about an hour of runtime on its battery. :-)
  • Well... (Score:2, Redundant)

    haven't people been saying that for quite a while now?
    • Actually, I think what people generally refer to when they pull the Moore's Law card is the quantum effects you get as you make transistors smaller (e.g. 1000 electrons behaves like ten groups of 100 electrons, and 100 behaves pretty much like 5 groups of 20, but when you get down to single digits the behavior is totally different).

      As transistors get smaller, fewer electrons are used to trigger them, but when the number gets low enough that the quantum behavior of the electron is a factor, the current model cannot be extended any more..

      I could be wrong, just my 2 cents..
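      A rough sketch of that scaling intuition, modeling only the classical shot-noise part of the story (relative fluctuation goes as 1/sqrt(N)); the electron counts are made-up round numbers, and the genuinely quantum effects the parent mentions arrive on top of this:

```c
/* Shot-noise sketch: the relative spread of a charge packet grows
 * as the electron count shrinks. Illustrative counts, not real ones. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    long counts[] = { 100000, 1000, 100, 10 };
    int i;
    for (i = 0; i < 4; i++)
        printf("%6ld electrons: ~%.0f%% fluctuation\n",
               counts[i], 100.0 / sqrt((double)counts[i]));
    /* At 10 electrons the packet varies by ~32% shot to shot, so a
     * '1' and a '0' blur together well before single electrons. */
    return 0;
}
```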

  • Great! (Score:5, Funny)

    by Lagged2Death ( 31596 ) on Wednesday December 11, 2002 @02:48PM (#4864080)

    So, this means that anything that possibly can go wrong no longer will! Hey, I'm all for that!

    What? Moore's Law? Oh. Nevermind.

  • by Marx_Mrvelous ( 532372 ) on Wednesday December 11, 2002 @02:49PM (#4864086) Homepage
    Seeing as he is a big part of a major CPU firm, Intel, is he being short-sighted (which I doubt) or is he trying to brace the market for a slowdown in CPU clock speed?

    It might help the company if expectations for new CPUs aren't higher than what they can produce.

    Personally, my vote goes for optical CPUs as the wave of the future. Larger than current CPUs might not be a problem if they don't put off much heat.
    • by Rogerborg ( 306625 ) on Wednesday December 11, 2002 @02:55PM (#4864169) Homepage

      Sounds likely. AMD have been saying - and demonstrating - for years that clock speed isn't the whole story.

      Also, we're just not finding compelling applications to drive upgrade cycles in the home and office. We have a few years until we reach movie quality real time rendering, and after that, what do we need more speed for? If AMD and Intel are gambling on the mass market wanting to perform ever faster ripping of movies and audio, they'd better stop supporting Palladium, hadn't they?

      • I'm so tired of hearing people say what do we need more speed for. That's so damned ignorant and yet you people keep repeating it like a bunch of parrots. As we develop more CPU power we develop ways to use it to do things that we couldn't do before. When new classes of CPU come out everyone thinks they're so fast. I remember the first time I used a 68020-based machine compared to my Amiga 500 (with 68000) and I was sitting there saying "holy shit this thing is fast" - now we literally have wristwatches with more CPU power on the market, not just those cracked out linux watches from IBM.

        Also, Palladium is only significant when you come to Palladium-protected content. It will have no effect whatsoever on your ability to rip DVDs, because any Palladium-protected DVD wouldn't be DVD compliant and wouldn't work in your DVD player. The next public video format is YEARS away, and by that time Palladium will likely be only an unhappy memory, though it may be supplanted by some other hardware DRM scheme.

        Think about this: A boating video game which uses computational fluid dynamics to determine how the water should behave. Or how about a racing game so realistic that you model chassis flex and installing subframe connectors actually makes a perceptible difference in handling?

        Also, we are nowhere near movie-quality real-time rendering. We need an increase in the number of polygons of several orders of magnitude to really get there, or a revolution in the methods of rendering curved surfaces. There are practically no items in my apartment that don't require either a ton of polygons or a ton of curved surfaces to actually reach the point of photorealism. In addition, to actually reach the point of reality, some surfaces will have to be rendered with some actual depth. In other words, you will have to render not just the surface of someone's skin, but a certain depth of it, to get a 100% realistic representation. Each wrinkle and crease must be rendered at a very high resolution to have them all move properly... Do you see where I'm going with this?

        There will always be a demand for more processing power. As we get more of it on the desktop, operations which formerly required supercomputers (or at least mainframes) will become commonplace. Anyone remember when Visio 2000 came out using M$ SQL Server for a backend? Once upon a time you needed a "server class" system just to run an RDBMS; now we use them to store miscellaneous data. (IIRC MySQL was a standard part of Linux life before Visio Enterprise 2000 came out, but I'm trying to put it in the corporate perspective.)

      • There will always be a need for more speed, and for more space. Scientists will never be able to simulate the universe, since the computer would need to be greater than that which it represents. Certainly some parts, but not all of it down to every detail. If computers were capable of movie-quality real-time rendering, there would still be room for improving AI tremendously, along with complex physics engines, and much more. We will always find more ways to burn clock cycles for more realism. If you are stuck for ways to burn your CPU, then just ask me. I'll give you a hint on something new you can add to your game to improve realism and run slower.

        As for disk space, similar argument applies. The more space we have the more we will fill. We will have increasingly better quality films, higher framerates, etc, up until the point where we are recording details from many different angles with miniature cameras, and keeping the data nicely formatted and referenced for database use. Our needs will scale with the technology. We are always hoping for just a little more, and after that we see the next stone to hop to.

        So that I'm not completely critical, the home user will find little reason to upgrade, as indeed they already do. But I'd say this has always been the case. Average Joe likes to get the fastest PC and the best DVD player, but he only wants to upgrade every so many years. Whereas scientists, gamers, hobbyists, etc, like to update regularly to take advantage of new advances that they can use immediately. So I'd say the cycle will continue much the same.
    • by Usquebaugh ( 230216 ) on Wednesday December 11, 2002 @03:05PM (#4864294)
      Optical CPUs are still only research projects, and nobody is sure these things are going to work as well as silicon. I talked with somebody at Livermore regarding feasibility, and his take was never. 30+ years of chip evolution is not going to be beaten by a few research projects. The bar is set too high for optical to come in.

      I'm more hopeful that we might get away from the whole stupid clock idea and go asynchronous. This area seems to be opening up more and more. It's been around forever, but nobody could find a reason to go to the extra expense.

      If Moore's Law fails, then I guess SMP will become mainstream. I mean, it's either that or software engineers write programs that are efficient. I expect to see an aerobatic display by flying pigs before I see an efficient program.
      • why couldn't it?

        one physicist brought down 200 years of Physics research.
      • There's no reason why some development in some other area couldn't suddenly make optical CPUs fantastically inexpensive and easy to make.

        Of course I do agree that an asynchronous architecture makes more sense in some ways but I should think it would increase the complexity of programming for the system.

        In the short term I think solutions like Intel's hyperthreading (only not half-assed) are the answer. I think AMD is in a unique position where it could implement an honest-to-goodness 2-way SMP on a single chip because of the way Clawhammer is laid out, which is to say that it uses HyperTransport. As we know, hyperthreading provides only small performance enhancements compared to actual SMP.

      • I'm more hopeful that we might get away from the whole stupid clock idea and go asynchronous.
        Clocks aren't so bad, they make a lot of things very simple. Asynchronous is getting easier, and there are lots of people working on it, but the end result isn't fantastically better -- you get average performance per-stage instead of worst-case performance per-stage. For most modern processors, that's not much of a difference; the stages are typically pretty well balanced. Stages that would be a very burdensome critical path are just split up.

        Of course, there are always simpler operations that can get done a bit faster -- but as wire delay gets worse and transistors switch faster, routing information is becoming much more critical than computational delay. Calculation is pretty cheap; forwarding is expensive.
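        A toy model of that average-case-versus-worst-case trade; the stage delays below are invented numbers, not measurements from any real pipeline:

```c
/* Clocked vs. asynchronous pipeline, toy model: a clocked design pays
 * the critical stage's worst-case delay every cycle; an asynchronous
 * one pays each operation's actual delay. All numbers are made up. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n_ops = 100000;
    const double worst_case_ns = 1.0;   /* assumed stage bound */
    double sync_ns = n_ops * worst_case_ns;
    double async_ns = 0.0;
    long i;

    srand(42);
    for (i = 0; i < n_ops; i++)         /* actual delay: 0.6..1.0 ns */
        async_ns += 0.6 + 0.4 * ((double)rand() / RAND_MAX);

    printf("async/sync throughput ratio: %.2fx\n", sync_ns / async_ns);
    /* Delays clustered near the worst case (well-balanced stages)
     * push this ratio toward 1, which is the parent's point. */
    return 0;
}
```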

    • My theory is that they are going to cut back on research, because 4-6 GHz chips are going to be fast enough for most things for most non-specialized people.

      So Joe Sixpack won't be motivated to cough up the dough for the upgrade.

      No money, no research, no new speed barriers broken.

      Specialized markets (CGI movie production? weather modelling?) will require lots more horsepower. But most corporate offices won't need it. In these cost-conscious times, that means they won't get it.

      So the market for the new high-end processors will be much smaller. This will probably lead to a stratification of the CPU market. Like the difference between Celeron/Duron and P4/Athlon, but with a much bigger difference.

      I read somewhere, maybe on Slashdot, that the next threshold of CPU speeds will be driven by the rise of accurate, real-time natural-language voice-recognition software (and, along with it, language-to-language translation). That kind of processing requires lots of cycles, but has broad, not specialized, applications.

      The exception, and possibly the hole, to this theory is games. But DOOM III looks pretty damn impressive. What hardware does it require?

      Just idle speculation...
  • by jfroot ( 455025 ) <darmok@tanagra.ca> on Wednesday December 11, 2002 @02:49PM (#4864087) Homepage
    As the submitter alluded to, this has been said so many times before that I simply don't believe it. I remember reading the same thing about 100 MHz being the fastest we could build. Technology will find a way as long as people are willing to buy it. And people will be willing to buy it because we all need to run Quake 4, 5, 6, etc.
    • Re:sure sure... (Score:4, Interesting)

      by cheezedawg ( 413482 ) on Wednesday December 11, 2002 @02:58PM (#4864203) Journal
      A couple of things:

      - Grove said basically the same thing you said- if better insulators or other technologies aren't developed, Moore's Law could become "redundant" in 10 years.

      - That said, there are ways to increase chip performance other than increasing transistor density according to Moore's Law. Grove cites a few of them in that article (more efficient transistors, multiple cores, etc). So you will still be able to play the latest Quake in 10 years.
    • Re:sure sure... (Score:5, Interesting)

      by grungebox ( 578982 ) on Wednesday December 11, 2002 @02:59PM (#4864213) Homepage
      Well...it's a little bit harder to manage this time around. As transistors get smaller, if I remember correctly, one of the main reasons for current leakage is quantum tunneling between the source and drain of a given transistor as the channel length decreases (I think). Also, you get leakage through electrons/holes tunneling through the gate of the MOSFET as the insulating material decreases in width. You can't really outmaneuver quantum mechanics.

      Of course, I think something else will pop up (like the aforementioned optoelectronic switch, perhaps), since companies are resourceful folks. Academia is good about researching ways to reduce current leakage, and my prof says high-K dielectric insulators are a good way to reduce leakage through the gate. Whatever...something will come up.
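      For reference, the textbook device-physics picture behind both points (standard results, nothing specific to Intel's processes): direct tunneling current through the gate insulator falls off exponentially with the insulator thickness $t_{ox}$, roughly

$$ J \propto \exp\!\left(-\frac{2\,t_{ox}\sqrt{2m^{*}\phi_{b}}}{\hbar}\right), $$

      where $\phi_{b}$ is the barrier height and $m^{*}$ the carrier's effective mass, while the gate capacitance you need scales as $C = \kappa\varepsilon_{0}A/t_{ox}$. A high-$\kappa$ dielectric buys the same $C$ at a physically thicker $t_{ox}$, which is exactly why it cuts gate leakage.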

      My point is that the situation now is a lot more physically complex than that of, say, 1989 or something, where the limitation was "we can't go past 100 MHz because we haven't thought of a way to do it!" Now it's more "we can't go past [whatever]Ghz because of goddamn physics!"

      By the way, anyone else think Gordon Moore gets a little too much credit by having a "law" named after him? I mean, sheesh...all he did was draw a freakin' best-fit curve on a plot of easily-found data. And on top of that, Moore's Law isn't a law at all...it's a statistic.

      • Re:sure sure... (Score:3, Interesting)

        by Tiroth ( 95112 )
        You haven't convinced me that the situation circa 2003 is any different than that c. 1989. As then, physics is placing limits on the performance of current design processors.

        I think it is exceedingly likely that there will be advances in materials science and manufacturing that will prolong the validity of Moore's Law. It continues to be feasible to decrease core voltages, and newer heat-removal technologies and better dielectrics are showing promise. Even if each avenue provides only a linear reduction in dissipation, or a linear increase in our ability to deal with it, the end result is that the synergy allows us to eke out a few more years of exponential growth.

        Lather, rinse, repeat.
  • "The End" (Score:4, Insightful)

    by FosterSJC ( 466265 ) on Wednesday December 11, 2002 @02:49PM (#4864088)
    The end of Moore's Law is heralded on Slashdot every 2 months or so; it comes at the hands of new materials (copper, etc.), new layering techniques, the ever-popular quantum computing, etc. Frankly, it doesn't seem to me to be that useful a benchmark anymore. The article says it will come sooner, but I foresee that in 7 to 10 years the physical production, leakage stoppage and general quality of the chips will be so perfected that Moore's Law will no longer be applicable to silicon chips. But, by then, new sorts of chips will be available to pick up the slack. So let us say farewell to silicon, and enjoy it while it lasts. It is like the fossil fuels problem really, except the industry is slightly more willing to advance, having set up years in advance a healthy pace to keep.
  • Great! (Score:5, Funny)

    by Anonymous Coward on Wednesday December 11, 2002 @02:49PM (#4864090)
    Now I can just buy a really fast computer and know that I'll never need to upgrade again!
  • by sisukapalli1 ( 471175 ) on Wednesday December 11, 2002 @02:49PM (#4864091)
    I hope this means back to actually finding ways of optimizing code, and not the standard "We can throw money at it", or "Next year computers will be twice as fast".

    However, maybe better processor architectures and clusters will keep the march going.

    Either way, I believe some progress would be made.

    S
    • by Anonymous Coward
      Perhaps we could write code to optimize code, then run that code through the code optimizer?
      • Perhaps we could write code to optimize code, then run that code through the code optimizer?

        I believe the SOP at M$ is to take the result of the above and then run it through the optimizer again. Usually this results in a 5-7% speedup.

        According to most sources, the plan for Longhorn is to run it through the optimizer one more time at the end. They think that this could net another 2-3%. We'll see.
    • Code optimization is actually the least of your worries. Most of the latency in modern desktops, for example, comes from memory access, not algorithmic slow-downs.

      Try structuring the data better, and you will go far.
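      A minimal illustration of "structure the data better": the two loops below do identical arithmetic with identical big-O, but one walks memory sequentially while the other strides across it, so the cache, not the algorithm, sets the wall-clock time. The 4096x4096 size is arbitrary.

```c
/* Cache-friendly vs. cache-hostile traversal of the same matrix. */
#include <stdio.h>
#include <time.h>

#define N 4096
static int m[N][N];                     /* zero-initialized, ~64 MB */

int main(void)
{
    long sum = 0;
    int i, j;
    clock_t t0;

    t0 = clock();
    for (i = 0; i < N; i++)             /* row-major: sequential */
        for (j = 0; j < N; j++)
            sum += m[i][j];
    printf("row-major:    %.3fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (j = 0; j < N; j++)             /* column-major: strided */
        for (i = 0; i < N; i++)
            sum += m[i][j];
    printf("column-major: %.3fs (sum=%ld)\n",
           (double)(clock() - t0) / CLOCKS_PER_SEC, sum);
    return 0;
}
```

      (With optimizations off, the strided walk typically runs several times slower, despite doing exactly the same additions.)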
    • by Jeppe Salvesen ( 101622 ) on Wednesday December 11, 2002 @03:59PM (#4864845)
      I guess algorithm analysis will at some point become more mainstream again. I suppose application profiling will also become more popular.

      Interestingly, the available memory will continue to grow, so we might end up structuring our data structures so that access time will be minimal. That is - our data structures will continue to change focus from compactness to raw speed. And big O analysis is part of that picture.

      I think we'll see some interesting things happen with fiber technology, though. When those envisioned optimal silicon chips become commonplace and thus really cheap, all appliances might run on them, and thus make it feasible to distribute your processing between your computer, your fridge and your iron. We'll just interconnect everything - perhaps a new fibre connector in our electricity plugs.
  • Newton? (Score:5, Funny)

    by pa3gvr ( 548273 ) on Wednesday December 11, 2002 @02:49PM (#4864096) Homepage
    As long as Newton's law stays in effect I am not too worried.

    BTW, do most of the users really need fast machines? I can do all my work without any problems on my 333 MHz PII

    CU :-) Sjaak
    • Re:Newton? (Score:3, Funny)

      by stratjakt ( 596332 )
      >> BTW do most of the users really need fast machines?

      Yes. Better, faster, cheaper.

      >> can do all my work without any problems on my 333Mhz PII

      And you could probably ride a horse to work, too. So what?

    • Re:Newton? (Score:3, Insightful)

      We don't. We just need less bloatware.
    • Re:Newton? (Score:3, Informative)

      Actually, Newton's laws were proven wrong a long time ago. They fail to estimate the orbit of Mercury correctly, as when you get close to a body of large mass the inverse-square approximation does not work so well. Einsteinian mechanics, which model gravity as a deflected curve, predict the orbit of Mercury bang on.
      So sorry, Newton's laws are already old.
  • Shucks... (Score:5, Funny)

    by swordboy ( 472941 ) on Wednesday December 11, 2002 @02:50PM (#4864100) Journal
    I was waiting for the commemorative Pentium XT running at 4.77GHz.
  • by vasqzr ( 619165 ) <vasqzr@noSpaM.netscape.net> on Wednesday December 11, 2002 @02:50PM (#4864104)

    Intel stock goes down like 50% ...

  • by jazman_777 ( 44742 ) on Wednesday December 11, 2002 @02:51PM (#4864123) Homepage
    If it's the end, it wasn't a law to start with, then, was it?
  • I wanted to live to see 1-atom-wide transistors

  • by Headius ( 5562 ) on Wednesday December 11, 2002 @02:51PM (#4864126) Homepage Journal
    I've always had issues with calling Moore's Law a "Law". Nobody has conclusively proven it. It should instead be called "Moore's Hypothesis" or "Moore's Theorem" if you're more optimistic...
  • Well, possibly... (Score:2, Informative)

    by Jay Addison ( 631128 )
    People have been predicting the end of Moore's law for ages - it seems to come up every couple of years at least. But, technology always seems to beat the critics (the poster mentions MOSFETs).

    Just recently I attended a seminar by a Cambridge lecturer discussing the performance benefits of quantum computing - searching an unsorted list in roughly root(n) steps instead of n, which seems silly - but that's just quantum stuff for you - who knows, maybe it'll be the next jump to break against Moore's Law. Does still look like it's a while off though.
  • Arrogant Intel (Score:2, Insightful)

    by Bendebecker ( 633126 )
    Just because their engineers can't solve the problem, the problem must be unsolvable.
    • Re:Arrogant Intel (Score:2, Interesting)

      by cheezedawg ( 413482 )
      Hmmm - I didn't see anywhere in the article where Grove said it is "unsolvable". Let's read what the article actually said:

      He said the company's engineers "just can't get rid of" power leakage.

      Sounds to me like he is just saying Intel hasn't solved it yet (but neither has anybody else).

  • Thank Goodness! (Score:3, Insightful)

    by dokebi ( 624663 ) on Wednesday December 11, 2002 @02:53PM (#4864143)
    Moore's Law is finally coming to an end. Seriously, the continuous and rapid advance of processing power is the one thing that's holding back affordable universal and pervasive computing in schools. These cash-strapped schools cannot afford to replace textbooks every two years, let alone computers that cost hundreds more. Things are better now because relatively useful computers can be had very cheaply, compared to just a few years ago, but scrapping Moore's Law altogether is even better. Steve Wozniak also agrees [wired.com]
  • Measured by what? (Score:2, Insightful)

    by narq ( 464639 )
    While Intel's batch of in-design processors may not keep up, and the engineers' current take on things seems to be dim, I would think a longer period would have to go by before it could be determined whether Moore's Law will hold. New designs have caused great jumps in the past that have kept the overall change of things in line with Moore's Law.
  • Well maybe... (Score:5, Insightful)

    by Chicane-UK ( 455253 ) <chicane-uk@@@ntlworld...com> on Wednesday December 11, 2002 @02:54PM (#4864156) Homepage
    ..if Intel and AMD hadn't got locked into that stupid GHz battle and instead concentrated on optimizing their CPU design (rather than just ramping up the speed silly amounts), then there might still have been a few more years left before it became such a problem.

    Maybe that's the way forward? Optimisations and improvements on the chips instead of raw clock speed....?
    • Re:Well maybe... (Score:5, Insightful)

      by dillon_rinker ( 17944 ) on Wednesday December 11, 2002 @03:04PM (#4864291) Homepage
      If there'd been no competition, you're absolutely correct that we'd have had better CPU designs, and overall performance would likely have been orders of magnitude below what it is now.

      So, speed and feature size are as good as they're going to get, and they were easy to do. Now we can work on the hard stuff with the benefit of all the processor power we've got sitting around unused.

      Don't optimize the hard stuff until you've optimized the easy stuff.
  • by MattW ( 97290 ) <matt@ender.com> on Wednesday December 11, 2002 @02:54PM (#4864159) Homepage
    The number of stories posted on Slashdot about the end of Moore's Law will double every 18 months.
  • Every so often someone prominent proclaims, "The end of Moore's Law is near!" People listen, because this person is usually someone people listen to. And then he's proven wrong.

    It may be true that the current chip technology has reached its end, no more progress possible. But believing that's "the end" is shortsighted. There has always been yet another way to see the law complied with. I do not doubt we will again this time. Be it optical, asynchronous logic, new materials, or whatever, it will probably happen.

    It's not time to call Moore's law dead just yet.
  • Someone or other is ALWAYS saying that we are about to hit the end of Moore's so-called "Law".

    Then again, they said it would be impossible to make semiconductors using geometries of less than 1 micron; they said that 8x was the fastest a CD-ROM could ever hope to read; they said that 14,400 baud was the fastest the telephone system could handle; and so on.

    They were all wrong, just as Mr Grove most likely will be.

    Still, I suppose if you prophesy doom often enough, you will eventually be right!
  • What if he can't get a 1.21 THz Pentium9 to surf the web, chat on AIM, and have his kids type school reports on? How can people possibly learn, communicate, or work? Oh, the humanity!

    In Soviet Russia, Moore's Law ends YOU!
  • by stratjakt ( 596332 ) on Wednesday December 11, 2002 @03:01PM (#4864244) Journal
    or am I wrong?

    So we're running out of ways to pack more and more transistors into a device. There's still a ton of room to improve the layout of those transistors; the world is full of whines about the x86 architecture.

    This doesn't mean 'computers are as good as they're going to get', it just means the fabrication plants are as good as they're going to get.
  • ...but did anyone catch the last paragraph?

    Finally, an American CEO who understands the problems of shifting operations overseas.

    We are definitely mortgaging the future of our children for today's short-term buck. Far too many businesses are willing to sell their souls to the people that could one day go to war with the US.

  • by ekrout ( 139379 ) on Wednesday December 11, 2002 @03:01PM (#4864256) Journal
    How many times do we have to hear people put their foot in their mouth? I would have thought Intel would've known better!

    But what ... is it good for?
    - Engineer at the Advanced Computing Systems Division of IBM, 1968, commenting on the microchip.

    I think there is a world market for maybe five computers.
    - Thomas Watson, chairman of IBM, 1943.

    What can be more palpably absurd than the prospect held out of locomotives traveling twice as fast as stagecoaches?
    - The Quarterly Review, England (March 1825)

    The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it. . . . Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient.
    - Dr. Alfred Velpeau (1839) French surgeon

    Men might as well project a voyage to the Moon as attempt to employ steam navigation against the stormy North Atlantic Ocean.
    - Dr. Dionysus Lardner (1838) Professor of Natural Philosophy and Astronomy, University College, London

    The foolish idea of shooting at the moon is an example of the absurd length to which vicious specialization will carry scientists working in thought-tight compartments.
    - A.W. Bickerton (1926) Professor of Physics and Chemistry, Canterbury College, New Zealand

    [W]hen the Paris Exhibition closes electric light will close with it and no more be heard of.
    - Erasmus Wilson (1878) Professor at Oxford University

    Well informed people know it is impossible to transmit the voice over wires and that were it possible to do so, the thing would be of no practical value.
    - Editorial in the Boston Post (1865)

    That the automobile has practically reached the limit of its development is suggested by the fact that during the past year no improvements of a radical nature have been introduced.
    - Scientific American, Jan. 2, 1909

    Heavier-than-air flying machines are impossible.
    - Lord Kelvin, ca. 1895, British mathematician and physicist

    Radio has no future
    - Lord Kelvin, ca. 1897.

    While theoretically and technically television may be feasible, commercially and financially I consider it an impossibility, a development of which we need waste little time dreaming.
    - Lee DeForest, 1926 (American radio pioneer)

    There is not the slightest indication that [nuclear energy] will ever be obtainable. It would mean that the atom would have to be shattered at will.
    - Albert Einstein, 1932.

    Where a calculator on the ENIAC is equipped with 19,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps only weigh 1.5 tons.
    - Popular Mechanics, March 1949.
    (Try the laptop version!)

    There is no need for any individual to have a computer in their home.
    - Ken Olson, 1977, President, Digital Equipment Corp.

    I have traveled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won't last out the year.
    - The editor in charge of business books for Prentice Hall, 1957.

    [Quotes from this page [athenet.net].]
    • by Waffle Iron ( 339739 ) on Wednesday December 11, 2002 @03:42PM (#4864660)
      OTOH, you could probably dig up thousands of quotes made in the 1960s that optimistically predict continual improvements in the speed and cost of airplanes. Most airliners will be supersonic, etc.

      From 1903 up until that point, aircraft design was on a curve almost as impressive as Moore's Law. In the 1960s, the rate of improvement hit a wall, and there have only been small incremental improvements since then. (And much of that has been achieved by "cheating": glomming onto Moore's Law by cramming electronics into the aircraft.)

      Electronics technology is bound to hit a similar limit of economically feasible improvements sooner or later.

  • Intel announced plans to "double the number" we put in front of the letters "G, H, and Z" every 18 months.

    "It will be a boon to our company," said Grove. "Consumers like more G,H, and z's and investors like more money!"
  • Current process technology may be running into limits. Bulk CMOS, SOI, copper interconnects, strained silicon - these are all improvements but may be at the end of the road. In any event, there are newer things on the horizon like GaAs and various magicks with industrial diamond. So while CMOS may be doomed (and really, I think it is, but the old girl's still got a few years left in her), the semiconductor industry isn't going to shrug its collective shoulders and say "That's all folks! We had a fun time of it, we're going to have to stop now."
  • by bartman ( 9863 ) on Wednesday December 11, 2002 @03:04PM (#4864282) Homepage Journal
    ... stating that Moore's Law is questioned once every 16 months.
  • by Drakonian ( 518722 ) on Wednesday December 11, 2002 @03:08PM (#4864334) Homepage
    If you restrict it to silicon-based ICs as we know them today, this may be right. Intel is the expert on this after all, and I'm willing to take their word.

    However, if you define Moore's Law as computational capacity doubling every 18 months, then it is very unlikely to end. If you project back to well before integrated circuits, or the law itself, computational capacity has been growing at this same exponential rate for many decades - even back to the earliest mechanical "computers". There will be something to replace the current paradigm; the paradigm has already changed numerous times without throwing off the exponential curve.

    For a fascinating look at this phenomenon and what it holds for the future, I'd recommend The Age of Spiritual Machines: When Computers Exceed Human Intelligence [amazon.com] by Ray Kurzweil.

  • Moore's Law (Score:5, Insightful)

    by avandesande ( 143899 ) on Wednesday December 11, 2002 @03:10PM (#4864353) Journal
    Is an economic law, not a physical one. Lack of demand for high-powered processors is going to slow the progression in processor speeds.
  • by Animats ( 122034 ) on Wednesday December 11, 2002 @03:17PM (#4864421) Homepage
    "Grove suggested that Moore Law regarding the doubling of transistor densities every couple of years will be redundant by the end of the decade." Not this year, eight years out.

    That's about right. It's a bit more pessimistic than the SIA roadmap, but it's close. Grove was just stating, for a general audience, what's accepted in the semiconductor industry. Optical lithography on flat silicon comes to the end of its run within a decade. Around that point, atoms are too big, and there aren't enough electrons in each gate.

    There's been a question of whether the limits of fabrication or the limits of device physics would be reached first. Grove apparently thinks the device problem dominates, since he's talking about leakage current. As density goes up, voltage has to go down, and current goes up. A Pentium 4 draws upwards of 30 amps at 1.2 volts. We're headed for hundreds of amps. It's hard to escape resistive losses with currents like that.
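    To put rough numbers on that: 30 amps at 1.2 volts is already about 36 W of supply power, and if currents climb to 200 amps, then even an assumed one-milliohm power-distribution resistance (an illustrative figure, not a measured one) dissipates

$$ P = I^{2}R = (200\ \text{A})^{2} \times 0.001\ \Omega = 40\ \text{W} $$

    in the wiring alone, before a single transistor does useful work.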

    There are various other technologies that could lead to higher densities. But none of them are as cheap on a per-gate basis.

  • Mo(o)re or less? (Score:3, Interesting)

    by photonic ( 584757 ) on Wednesday December 11, 2002 @03:21PM (#4864471)
    I attended a talk some 1.5 years ago by a guy from Philips NatLab (home of the CD), which was called "Mo(o)re or less?". Although the talk was extremely boring and I forgot the final conclusions, I do remember some potential showstoppers he listed:

    - Of course the ultimate limit of a 1-atom transistor; can't remember the date this would occur.
    - Limited speed of signals across the chip: if the clock frequency gets much larger, a signal would require several buffer stages to reach the other side (a worked number follows this list).
    - Capacitance of wires gets more important: the interconnects don't scale at the same pace as the transistors, and their finite capacitance limits clock speeds.

    Some non-technical reasons:
    - Increasing costs of new fabrication processes: each new increment is more expensive.
    - Limited manpower to design circuits with more and more transistors. This probably means that a larger area of the chips will consist of 'dumb' circuits like cache.
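    To put a rough number on the signal-speed item above (my arithmetic, not the speaker's): even at the vacuum speed of light, the distance a signal can cover in one clock period is

$$ d = \frac{c}{f} = \frac{3\times10^{8}\ \text{m/s}}{10\ \text{GHz}} = 3\ \text{cm}, $$

    and real on-chip wires are far slower than $c$ once RC delay is counted, so at multi-GHz clocks a corner-to-corner trip across a ~1 cm die already costs more than one cycle; hence the buffer stages.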
  • It seems to me.. (Score:3, Insightful)

    by xchino ( 591175 ) on Wednesday December 11, 2002 @03:40PM (#4864640)
    Moore's Law hasn't reached any limits, we have. If this is a barrier we need to overcome, we will overcome it. We could be thousands of years ahead of our time in our technology if that was our priority as a race, or even as individual nations. If we *needed* faster, smaller processors, the government would pour money into R&D, more brilliant minds could be gathered to work cooperatively, and the results would be results :)

    Seriously, we've risen above much greater challenges than this..

    It sorta sounds like Intel is about ready to quit trying to innovate. Perhaps it is time for AMD to take the lead..
  • Apropos links (Score:3, Informative)

    by auferstehung ( 150494 ) <tod.und.auferstehung bei gmail.com> on Wednesday December 11, 2002 @03:52PM (#4864755)
    Richard Feynman's address [zyvex.com] to the American Physical Society is a good intro to the physical limitations of miniaturization as it applies to Moore's Law. Also interesting is the Law from the horse's mouth, found on this [intel.com] Intel page.
  • by GroundBounce ( 20126 ) on Wednesday December 11, 2002 @03:52PM (#4864760)
    The power is largely dissipated as heat. [emphasis added]

    Duh! Funny, I have never seen any (properly connected) microprocessor chip generating much in the way of light, sound, or X-rays. I suppose a teensy weensy amount might go off as RF emissions, but not from the DC leakage current.
  • by theCat ( 36907 ) on Wednesday December 11, 2002 @03:56PM (#4864812) Journal
    Recall, AMD just said they are done trying to up clock speeds all the time. Now Intel is outing themselves, too. The fact that these companies are not saying things like "we need to go to other materials to get higher clock speeds" is because 1) it costs huge $$$ to research and develop new materials, 2) it costs serious $$$ to change fabs to use new materials, and 3) NOBODY (no, not even you) wants to continue to pay for increased clocks when there is almost zero benefit in real applications.

    Moore's Law is not dead. What is dead is the need for Moore's Law. I am not alone in noticing that, after 20 years of regular performance increases, things are now pretty good on the desktop, and excellent in the server room. Real changes now need to be in software and services. Further, high-performance computing is going the route of multiple cores per CPU, multiple CPUs per box, and clusters of boxes. The latter is probably the biggest innovation since Ethernet. So, who needs Moore's Law?

    Intel and AMD know *all* this. They want out of the clock race, and yesterday. They want to get into the next level of things, which is defining services and uses for their existing products. They are seeing the end of the glamour years of the CPU and the rise of the era of information appliances, which *must* be portable. Users *will* be far more sensitive to battery life and perceptions of performance (latency and ease of use) and far less sensitive to theoretical performance measures.

    Flame me if you like, but the geek appeal of personal computers is disappearing. Sure there will be people who fiddle with CPUs as a hobby, just as they did 30 years ago when the Apple computer was born to serve a small group of hobbyists. But is that the mainstream? Is that going to support Intel and AMD in their race? Are those companies going to promote a revolution in fab technology, to the tune of half a trillion dollars in investment and technology between them, just to support geeky hobbyists? They could, but they won't, because that is not the future. It is the past.

    The future will still be interesting, mind you, but the challenge has changed. A phone that fits in your rear molar and runs off chemical reactions with your own saliva looks far more lucrative to these companies than a CPU that runs at 100 GHz and consumes as much power as an apartment complex.
  • Threshold Voltage (Score:5, Insightful)

    by Erich ( 151 ) on Wednesday December 11, 2002 @03:57PM (#4864823) Homepage Journal
    One of the problems with "leaky" parts is that the threshold voltages are kept very low. This makes the transistors switch much faster, but makes them leak current quite a bit.

    You can fairly easily raise the threshold voltage (for a process). It makes the chip slower, but leaks less current (and therefore usually uses less power). This is one of the key elements of "Low Power" processes like CL013LP.
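    The textbook relation behind that trade-off: below threshold, leakage falls off exponentially in the threshold voltage,

$$ I_{leak} \propto \exp\!\left(\frac{V_{GS}-V_{th}}{n\,V_{T}}\right), \qquad V_{T} = kT/q \approx 26\ \text{mV at room temperature}, $$

    so with a typical subthreshold swing of 80-100 mV per decade, raising $V_{th}$ by 100 mV buys roughly a tenfold leakage cut, paid for in switching speed.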

    For more information, the Britney Spears' Guide to Semiconductor Physics [britneyspears.ac] is sure to help.

    Interestingly, using leaky transistors that switch faster has been a trick used for a very long time. One of the reasons the Cray computers took so much cooling was that they didn't use MOSFETs; their whole process was based on PNP and NPN junction transistors. For those who don't know much about transistors, FETs (or Field Effect Transistors) make a little capacitor that, when you charge it up (or don't charge it up, depending), lets current flow through on the other side. It takes a while to charge up the capacitor (time constant proportional to Resistance times Capacitance, remember!), but once it's charged there isn't any current (except the leakage current) that flows through.

    At least, that's what I recall from my classes. I didn't do so well in the device physics and components classes.

  • Um...no? (Score:3, Insightful)

    by sielwolf ( 246764 ) on Wednesday December 11, 2002 @04:27PM (#4865133) Homepage Journal
    I thought the root of Moore's Law wasn't the technology involved but the drive for improvement in computation, so that the chips may not improve beyond a certain point, but then making a massively parallel system on a 2" x 2" card would still fall under Moore's Law. It is hardware independent.

    I'd never put a limitation on this, since somebody's going to come up with an idea to eke out more clocks.
  • by panurge ( 573432 ) on Wednesday December 11, 2002 @04:42PM (#4865295)
    First, Moore's Law is about transistor density, not clock speed. If it runs out by the end of the decade, that's still an increase of around 32X - and unless we suddenly have a need to become amateur weather forecasters, it's difficult to see any obvious applications. [cue enormous list from /. readers]
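    (Checking that figure: at Moore's cadence of a doubling every 18 months, the roughly 7.5 years to "end of the decade" give $2^{7.5\times12/18} = 2^{5} = 32$, so the 32X holds up.)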
    We've now reached the stage where handheld devices have the same sort of processing power and memory as respectable desktops of a few years back, and I find it interesting that the sudden big hype is the tablet PC, which is relatively low speed but has good battery life. That could be the direction things are going, and if so it is hardly surprising Andy Grove is worried about leaking electrons, what with Transmeta, Via and Motorola/IBM having lower-power designs.

    A case in point about technology demonstrators. Someone mentioned aircraft. OK, how much faster have cars got since, say, 1904 when (I think) RR first appeared? Not an awful lot, actually. They are vastly more reliable, waterproof, use less fuel, handle better, are safer, and enormously cheaper in real terms BUT they go about the same speed from A to B and carry about as many people. And they are still made of steel and aluminum, basically the same stuff available in 1904.

    This is far from a perfect analogy because, of course, the function of the computer keeps getting reinvented: it is applied to more and more jobs as it gets cheaper, more powerful, and more reliable. But it does point out that the end of Moore's law is not the end of research and development.

  • by foxtrot ( 14140 ) on Wednesday December 11, 2002 @04:57PM (#4865445)
    Colloquially we speak of Moore's Law and we mean "Chips get twice as fast every 18 months."

    This is not what Gordon Moore said. Moore's statement was based on transistor density. Indeed, perhaps we may not be able to cram transistors together as much in the not too distant future.

    Does this mean that chips won't continue to get twice as fast every 18 months? It would surprise me if processors slowed down their rate of speed growth much this decade. As people begin playing with digital video on the desktop, as people write games that can actually push enough information to a GeForce4 FX to make it worth spending money on, people are still going to want faster and faster machines. And while AMD still exists as a competitor to Intel, even those people who don't really need a 7 GHz machine are going to find that that's what's available.

    So while Moore's law, as it was stated, may be nearing its end, Moore's law, as it is usually spoken will probably stick around for a good while longer.
  • by Fnkmaster ( 89084 ) on Wednesday December 11, 2002 @04:58PM (#4865448)
    Several posters have pointed out that in the longer term this may lead to a resurgence of interest in algorithmic efficiency, parallel algorithm development to take advantage of available parallelism (clustering, SMP, etc.). Certainly there is merit to these arguments, and I do think interest in these topics will increase greatly over the next few years, at least for problems where they are necessary (i.e. where computational power is a limiting reagent, which isn't really the case in most business software).


    Honestly, I think a bigger trend will be to take advantage of formalisms that let developers develop more reliable and stable software. Now, I know and you know that things like functional programming have been out there for years, and haven't succeeded because, for one thing, they were too slow and therefore wasted too many processor cycles. This is obviously much less of a problem now - Java "wastes" lots of processor cycles, but for a lot of software needs it saves so many human "thinking" cycles that it pays off in spades for businesses that need business or enterprise software to Do Stuff for the back-end sides of industry.


    So what big problem(s) are left in the software world? Well, people still bitch about how fucking unreliable most software is. In particular, core, critical system areas, like the interface between hardware and software - as more hardware is out there, and more drivers are developed, and backwards compatibility is an issue, hardware interactions have not become substantially more reliable. And frankly, a lot of applications themselves have become substantially less reliable - the big problem is that adding features and changing GUIs seems to break too many things and introduce too many potential problems (look at Outlook XP vs. Outlook 2000 - fixed some security holes, made a prettier GUI, and made the damn thing crash all the time).


    Look at a lot of the academic work being done in computer science, especially in programming language design, operating system design, parallel algorithms and parallel languages. Sometimes researchers head off down dead-end paths, but sometimes they have it right, and it just takes a while for industry to see what they need this stuff for. That being said, it'll always be cheaper to teach people "Programming in Java 101" in India and then hire 1000 of them to hack away at code, admittedly usually for the most uninteresting and repetitive types of development work (at least, this will hold until economic parity in the third world becomes a reality).

  • I wonder... (Score:3, Interesting)

    by waltc ( 546961 ) on Wednesday December 11, 2002 @05:02PM (#4865486)
    ...what this means in relation to Intel's .09 micron work with Prescott (slated for late next year)? Could be nothing, could be indicative of Intel hitting some stone walls in .09 micron development (which I always knew would be a tough row to hoe for complex CPUs).

    Read one post earlier in which the poster thought AMD was abdicating a "clock speed" race. Obviously, this sentiment, among so many like it, comes from Hector Ruiz's speech last week in which he said that AMD wasn't going to do "technology for technology's sake." I wish Hector had made himself a bit clearer...;)

    What I think he meant was that unlike Intel with Itanium, AMD was not going to design brand-new technologies with no practical worth simply for the sake of performance (because Itanium has no software it's very nearly useless--except for doing PR benchmarks for Intel.) That's why AMD chose to do x86-64--because it is technology for practicality's sake. That's my take on that statement.

    Also, AMD has been out of the "clock race" ever since they designed the K7. The race AMD wants to win, and has been winning, is the "performance race" which doesn't depend on raw MHz. Any P4 will be much slower than any K7, when clocked at the same MHz speed. That's why AMD's been using performance ratings--because they are much better measures of performance than mere MHz speeds could ever be between competing cpus with differing architectures.
  • by Ungrounded Lightning ( 62228 ) on Wednesday December 11, 2002 @05:40PM (#4865886) Journal
    Eventually you will reach a limit on the size of the individual switches. The one the article gripes about appears to be the sloppy wave function of the electrons letting them tunnel across the junction. But matter is lumpy (quantized), and eventually you'll hit a just-a-few-atoms wall.

    But there's more that can be done - in terms of geometry and organization.

    Current chips are a single two-dimensional array of components (or sometimes a small number of layers). But build your gates and interconnects in 3-D and you can go farther on two fronts:

    - Speeding up the individual functions a bit further. (The more complex, the more improvement).

    - Combining a LARGE number of parallel elements into a small space (so they can talk to each other quickly).

    Back in the '70s I had a rap I'd do called "preposterous scale integration". Basic idea:

    - Use diamond for the semiconducting material (because it conducts heat VERY well).

    - Grow a LARGE sheet of it, writing the domain doping and interconnects with ion beams as you go.

    - TEST the components as you go:
    - Negative power lead is a slow (low acceleration voltage) electron beam.
    - Positive power lead is a fast (high acceleration voltage) electron beam, causing secondary emission of more electrons than are in the beam.
    - Test injection probes are smaller versions of the power leads.
    - Test probe is a very slow electron beam, where the electrons turn around at the surface, and a positively-charged region will suck 'em to the chip.
    (These are all variants of electron microscope imaging hacks that were in use as far back as the 70s.)

    - If a component fails, turn up the current, vaporize it, and deposit it again. Repeat until you have a good one.

    - When you're done with the layer, don't stop. Deposit another layer, and another, ... Keep this up until you are done. Laying out your gates for minimum signal run length means you end up with a cube, or something close to it.

    - Apply power to two opposite faces of the cube. Use bus bars the size of the cube face - at least near the contact point - to minimize IR drop. Use a good conductor, like copper or silver.

    - You need a LOT of cooling. So circulate cooling liquid in the bus bars. (Copper and silver are also good heat conductors, and water is a terrific heat carrier.)

    - The other four faces are for I/O. Use semiconductor lasers, photodiodes, and fiber optics light-pipes. You can COVER the faces with fibers. Put your drive electronics and SerDeses in the layer just under the pipes - or dope the index of refraction of the diamond to make a light-pipe into the depths and distribute them throughout the volume.

    - Diamond is stable up to very high temperatures, but you need to protect it from air when it gets hot (or it will burn). So put it in a bottle with an inert gas just in case. The limiting temperature structurally is about where it starts going over into graphite, so you can let it get up to a dull red glow (if your I/O is at some bluer color and that temperature doesn't create too much thermal noise).

    - How big can you get? Square-cube law limits your I/O-to-computation ratio, since the I/O is on four faces that go with the square of the linear dimension, the computation goes (approximately) with the volume, or the cube of the dimension. The cooling-to-gate ratio suffers a similar square-cube issue (plus a linear penalty for power losses from the internal distribution busses). You also have an interconnect penalty - as you get bigger you have to give a higher fraction of your volume to power and signal lines (or signal repeaters), but this actually improves the square-cube problems. Finally, construction time is about proportional to number of computational elements. So let's pull a number out of nowhere and say two meters on a side.

    Of course the punch line is what the device would look like:
    - A six-foot cube of diamond.
    - Glowing cherry red.
    - In a glass bottle of inert gas.
    - Supported by water-cooled silver bus bars.
    - And connected to everything else by an enormous number of glass fiber light-pipes.

    In other words, the kind of thing you'd expect to be the ship's brain in a late-model Skylark spacecraft, from one of E. E. "Doc" Smith's golden-age science fiction novels. B-)

    ====

    This rap was always entertainment rather than a serious proposal, and is no doubt buggy. For instance: I hear doping diamond is a bit problematic. And these days I'd suggest doing chip-under-construction powering and testing using physical contacts and JTAG fullscan or a variant of the CrossCheck array, rather than (or to supplement) the electron beams.

    But I hope the point is made that, for parallelizable tasks at least, we still have a LONG way to go with improved geometry before we finally hit the wall.
  • by CMU_Nort ( 73700 ) on Wednesday December 11, 2002 @06:22PM (#4866319) Homepage
    It says that either processor speed (or density) will double every 18 months OR (and it's a big or) the price will halve in 18 months.

    So logically we could continue on with the same speed processor and just have them get progressively cheaper. But hmm, I wonder whose profit margins this would affect? What he's setting us up for is that Intel will refuse to lower their prices. They'll continue to make the chips cheaper and cheaper but they won't sell them for any less.

    I actually look forward to an end in ever increasing clock rates, because then we can all get back to programming skillfully and making tight efficient code.
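    (Worked out, on the price-halving reading: hold chip speed constant and halve the price every 18 months, and after six years the same part should sell for $(1/2)^{72/18} = 1/16$ of today's price. That squeeze on margins is exactly what the parent expects Intel to refuse.)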
