Andy Grove Says End Of Moore's Law At Hand 520
Jack William Bell writes "Intel chief Andy Grove says Moore's Law has reached its limit. Pointing to current leakage in modern chips: 'Current is becoming a major factor and a limiter on how complex we can build chips,' said Grove. He said the company's engineers 'just can't get rid of' power leakage. But, of course, this only applies to semiconductor chips; there is no guarantee that some other technology will not take over and continue the march of smaller, cheaper, and faster processors. I remember people saying stuff like this years ago, before the MOSFET." Update: 12/11 22:01 GMT by T : Correction: the text above originally mangled Andy Grove's name as "Andy Moore."
"The End" (Score:4, Insightful)
So, back to Don Knuth's Books? (Score:5, Insightful)
However, maybe better processor architectures and clusters will keep the march going.
Either way, I believe some progress will be made.
The End of Moore's Law (Score:4, Insightful)
Arrogant Intel (Score:2, Insightful)
Thank Godness! (Score:3, Insightful)
Measured by what? (Score:2, Insightful)
Well maybe... (Score:5, Insightful)
Maybe that's the way forward? Optimisations and improvements to the chips instead of raw clock speed?
Re:Newton? (Score:3, Insightful)
Re:I guess it isn't a Law then (Score:2, Insightful)
Re:Well maybe... (Score:5, Insightful)
So, speed and feature size are as good as they're going to get, and they were easy to do. Now we can work on the hard stuff with the benefit of all the processor power we've got sitting around unused.
Don't optimize the hard stuff until you've optimized the easy stuff.
Depends how you define Moore's Law (Score:4, Insightful)
However, if you define Moore's Law as computational capacity doubling every 18 months, then it is very unlikely to end. If you project back to well before integrated circuits, or the law itself, computational capacity has been growing at this same exponential rate for many decades - even back to the earliest mechanically based "computers". There will be something to replace the current paradigm; the paradigm has already changed numerous times without throwing off the exponential curve.
For a fascinating look at this phenomenon and what it holds for the future, I'd recommend The Age of Spiritual Machines: When Computers Exceed Human Intelligence [amazon.com] by Ray Kurzweil.
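As a back-of-the-envelope check on that exponential claim, doubling every 18 months compounds quickly. A minimal sketch (the 10-year timespan is my own illustration, not from the comment):

```python
# Compound growth under "capacity doubles every 18 months".
# The 10-year projection window is an illustrative assumption.

def capacity_multiplier(months: float, doubling_period: float = 18.0) -> float:
    """Return the factor by which capacity grows after `months` months."""
    return 2.0 ** (months / doubling_period)

decade = capacity_multiplier(120)  # 10 years = 120 months
print(f"Growth over a decade: {decade:.0f}x")  # roughly 100x
```

Two doublings in three years, roughly a hundredfold per decade: that is the curve the comment says has held across paradigm shifts.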
Re:Well maybe... (Score:2, Insightful)
However, your last comment is still valid. If they can't squeeze any more out of the fab processes, then they'll have to work more on design optimization. But that would hold true whether or not they had engaged in a "stupid GHz battle."
Okay, not wasting any more time here...
Moore's Law (Score:5, Insightful)
Re:Newton? (Score:2, Insightful)
If all you do is email and run excel, then don't get a top of the line PC.
I have a very small lawn (and a son). But I don't go around moaning that they should stop making better lawnmowers.
But then, if I were the type to be jealous because someone else has a better lawnmower (or faster computer) than me, I'd probably get all upset and jump up and down that I can't afford to constantly have the latest model.
This is consistent with the SIA roadmap (Score:5, Insightful)
That's about right. It's a bit more pessimistic than the SIA roadmap, but it's close. Grove was just stating, for a general audience, what's accepted in the semiconductor industry. Optical lithography on flat silicon comes to the end of its run within a decade. Around that point, atoms are too big, and there aren't enough electrons in each gate.
There's been a question of whether the limits of fabrication or the limits of device physics would be reached first. Grove apparently thinks the device problem dominates, since he's talking about leakage current. As density goes up, voltage has to go down, and current goes up. A Pentium 4 draws upwards of 30 amps at 1.2 volts. We're headed for hundreds of amps. It's hard to escape resistive losses with currents like that.
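The arithmetic behind that worry is simple: at fixed power, halving the voltage doubles the current, and resistive loss grows with the square of the current. A rough sketch using the comment's 1.2 V / 30 A Pentium 4 figure; the 36 W total, the future-chip numbers, and the parasitic resistance are invented for illustration:

```python
# Why falling supply voltages drive current (and resistive loss) up.
# P = V * I, and loss in the power-delivery path is I^2 * R.
# The 36 W figure reproduces the comment's 30 A at 1.2 V; the 100 W / 0.7 V
# future chip and the 1 milliohm resistance are illustrative assumptions.

def supply_current(power_w: float, volts: float) -> float:
    """Current drawn at a given power and supply voltage."""
    return power_w / volts

def resistive_loss(current_a: float, resistance_ohm: float) -> float:
    """Power dissipated in a parasitic resistance: I^2 * R."""
    return current_a ** 2 * resistance_ohm

p4 = supply_current(36.0, 1.2)       # -> 30 A, matching the comment
future = supply_current(100.0, 0.7)  # ~143 A at a lower supply voltage
print(p4, future)
print(resistive_loss(p4, 0.001), resistive_loss(future, 0.001))
```

The quadratic term is the punchline: going from 30 A to ~143 A multiplies the loss in the same parasitic resistance by more than twenty.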
There are various other technologies that could lead to higher densities. But none of them are as cheap on a per-gate basis.
Re:Hmm.... (Score:3, Insightful)
Yes, we are all curious to see what the future holds for superconductors and semiconductors (DUH). Yes, the two have nothing to do with each other in the context of this article. Superconductors can be used for transmission of signals, definitely important to computing, but not for creation of logic; fundamentally, you need something that passes information in one direction under certain conditions but doesn't pass information when the same outside conditions are applied in the opposite direction (e.g. semiconductive materials).
Moore's Law doesn't need to be revived yet. It still holds true. Will it fail eventually? Absolutely. But if Grove could pull his head out of his ass and see the wood for the trees, he'd realize that it isn't going to happen soon. People stopped laughing at quantum, bio, and other computing theories a long time ago. If you step back and look at the big picture, you'll see Moore's Law happily marching along and geeks like us making it happen. Grove is just shouting "Look at me! I'm talking about the theory of computing but saying nothing!"
It seems to me.. (Score:3, Insightful)
Seriously, we've risen above much greater challenges than this.
It sorta sounds like Intel is about ready to quit trying to innovate; perhaps this is the time for AMD to take the lead.
Re:Thank Godness! (Score:2, Insightful)
I don't understand the implications herein. First off, if you already have computers sufficient for the tasks at hand, why should the release of more powerful computers compel anyone, including schools, to upgrade them? Moore's Law is not a legal dictum that states you must buy a better, faster computer when it becomes available.
Secondly, Moore's Law actually enables schools to upgrade their equipment at bargain-basement prices if they remain comfortably behind the power curve and purchase boxes based on older CPUs dirt cheap.
Today, in general, I think the home computer has gotten far more powerful than most people really need. Seriously, a 3GHz Pentium 4 for browsing the web and sending email? I'm a software developer and a lover of technology, and like many others before me I drool over newer, faster computers when they arrive. But in five years I still haven't found a compelling reason to upgrade my 300MHz PII. I can browse the web, compile modest Java and C++ programs in reasonable time, watch MPEG video clips, edit photos with Photoshop, etc. What, exactly, will kids in a school environment need to do that exceeds the capabilities of this "ancient" PC of mine? (Which, by the way, you can buy today for roughly the cost of an American History textbook.)
Well he *would* say that now (Score:3, Insightful)
Moore's Law is not dead. What is dead is the need for Moore's Law. I am not alone in noticing that, after 20 years of regular performance increases, things are now pretty good on the desktop, and excellent in the server room. Real changes now need to be in software and services. Further, high-performance computing is going the route of multiple cores per CPU, multiple CPUs per box, and clusters of boxes. The latter is probably the biggest innovation since Ethernet. So, who needs Moore's Law?
Intel and AMD know *all* this. They want out of the clock race, and yesterday. They want to get into the next level of things, which is defining services and uses for their existing products. They are seeing the end of the glamour years of the CPU and the rise of the era of information appliances, which *must* be portable. Users *will* be far more sensitive to battery life and perceptions of performance (latency and ease of use) and far less sensitive to theoretical performance measures.
Flame me if you like, but the geek appeal of personal computers is disappearing. Sure there will be people who fiddle with CPUs as a hobby, just as they did 30 years ago when the Apple computer was born to serve a small group of hobbyists. But is that the mainstream? Is that going to support Intel and AMD in their race? Are those companies going to promote a revolution in fab technology, to the tune of half a trillion dollars in investment and technology between them, just to support geeky hobbyists? They could, but they won't, because that is not the future. It is the past.
The future will still be interesting, mind you, but the challenge has changed. A phone that fits in your rear molar and runs off chemical reactions with your own saliva looks far more lucrative to these companies than a CPU that runs at 100GHz and consumes as much power as an apartment complex.
Threshold Voltage (Score:5, Insightful)
You can fairly easily raise the threshold voltage (for a process). It makes the chip slower, but leaks less current (and therefore usually uses less power). This is one of the key elements of "Low Power" processes like CL013LP.
For more information, the Britney Spears' Guide to Semiconductor Physics [britneyspears.ac] is sure to help.
Interestingly, using leaky transistors that switch faster is a trick that has been used for a very long time. One of the reasons the Cray computers took so much cooling was that they didn't use MOSFETs; their whole process was based on PNP and NPN junction transistors. For those who don't know much about transistors, FETs (Field Effect Transistors) form a little capacitor that, when you charge it up (or don't charge it up, depending), lets current flow through on the other side. It takes a while to charge up the capacitor (time constant proportional to resistance times capacitance, remember!), but once it's charged there isn't any current (except the leakage current) that flows through.
At least, that's what I recall from my classes. I didn't do so well in the device physics and components classes.
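One hedged way to see that speed/leakage trade-off is the textbook subthreshold model, in which leakage current falls off exponentially as the threshold voltage rises. The constants below are illustrative assumptions, not values for any real process such as CL013LP:

```python
import math

# Subthreshold leakage falls roughly exponentially with threshold voltage:
#   I_leak ~ I0 * exp(-Vth / (n * V_T))
# where V_T = kT/q (~26 mV at room temperature) and n is a process-dependent
# slope factor. I0, n, and the Vth values are illustrative assumptions.

V_T = 0.026   # thermal voltage at ~300 K, in volts
N = 1.5       # subthreshold slope factor (assumed)
I0 = 1e-6     # reference current in amps (assumed)

def leakage(vth_volts: float) -> float:
    """Illustrative subthreshold leakage current at threshold voltage vth."""
    return I0 * math.exp(-vth_volts / (N * V_T))

ratio = leakage(0.25) / leakage(0.35)
print(f"Raising Vth by 100 mV cuts leakage ~{ratio:.0f}x")
```

The exponential is why a modest threshold bump buys a big leakage reduction at the cost of slower switching, which is exactly the "Low Power" process trade described above.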
Re:Short sighted, or just playing it safe? (Score:3, Insightful)
So Joe Sixpack won't be motivated to cough up the dough for the upgrade.
No money, no research, no new speed barriers broken.
Specialized markets (CGI movie production? weather modelling?) will require lots more horsepower. But most corporate offices won't need it. In these cost-conscious times, that means they won't get it.
So the market for the new high-end processors will be much smaller. This will probably lead to a stratification of the CPU market. Like the difference between Celeron/Duron and P4/Athlon, but with a much bigger difference.
I read somewhere, maybe on Slashdot, that the next threshold of CPU speeds will be driven by the rise of accurate, real-time natural-language voice-recognition software (and, along with it, language-to-language translation). That kind of processing requires lots of cycles, but has broad, not specialized, applications.
The exception, and possibly the hole in this theory, is games. But DOOM III looks pretty damn impressive. What hardware does it require?
Just idle speculation...
Um...no? (Score:3, Insightful)
I'd never put a limitation on this, since somebody's going to come up with an idea to eke out more clocks.
Re:So, back to Don Knuth's Books? (Score:1, Insightful)
However, at the risk of unleashing Slashdot Hell, I have to say that putting more emphasis into code optimization != progress. Let me explain why: Progress involves more than just code optimization. It involves more than just speed, just power, or just usefulness. It involves these factors, and more. If everyone started making programs that were extremely fast and extremely powerful, but extremely useless, that's not progress. That's why the philosophy of "We can throw money at it", or "Next year computers will be twice as fast" exists -- because it works.
That isn't to say that optimization is useless; not at all. Just that it isn't the end-all solution.
Density is not everything (Score:4, Insightful)
We've now reached the stage where handheld devices have the same sort of processing power and memory as respectable desktops of a few years back, and I find it interesting that the sudden big hype is the tablet PC, which is relatively low speed but has good battery life. That could be the direction things are going, and if so it is hardly surprising Andy Grove is worried about leaking electrons, what with Transmeta, Via, and Motorola/IBM having lower-power designs.
A case in point about technology demonstrators. Someone mentioned aircraft. OK, how much faster have cars got since, say, 1904 when (I think) RR first appeared? Not an awful lot, actually. They are vastly more reliable, waterproof, use less fuel, handle better, are safer, and enormously cheaper in real terms BUT they go about the same speed from A to B and carry about as many people. And they are still made of steel and aluminum, basically the same stuff available in 1904.
This is far from a perfect analogy because, of course, the function of the computer keeps getting reinvented: it is applied to more and more jobs as it gets cheaper, more powerful, and more reliable. But it does point out that the end of Moore's law is not the end of research and development.
A quick note on Moore's Law (Score:3, Insightful)
This is not what Gordon Moore said. Moore's statement was based on transistor density. Indeed, perhaps we may not be able to cram transistors together as much in the not too distant future.
Does this mean that chips won't continue to get twice as fast every 18 months? It would surprise me if processors slowed down their rate of speed growth much this decade. As people begin playing with digital video on the desktop, as people write games that can actually push enough information to a GeForce4 FX to make it worth spending money on, people are still going to want faster and faster machines. And while AMD still exists as a competitor to Intel, even those people who don't really need a 7 GHz machine are going to find that that's what's available.
So while Moore's law, as it was stated, may be nearing its end, Moore's law, as it is usually spoken, will probably stick around for a good while longer.
Moore Laws..? (Score:3, Insightful)
Regarding the natural world environment, you're correct; I've seen some harsh criticism of the volume and toxicity of the waste byproducts of semiconductor manufacturing. It's not as simple as just adding a little sand and some magic, voila! It's probably not reported much because the wonders of innovation and heated competition make for sexier news writing.
Something not mentioned much, but observed by more than a few grumbling parties, is the ever-increasing size of code. My first encounter with this was upgrading from RSTS/E 7.? to 8.0, which was darn exciting back in the day, yet the kernel would have been about 50% larger if we had activated all the features *I* wanted (and since I was the admin, lemme tellya, it was darn painful to trim off a few features I lusted after to squeeze it into our memory and performance targets). These days, it's often the OS; ever notice how Windows installs got to needing more space than your entire first hard disk? The common response seems to be: just throw more memory at it. Yet I think there's a Moore-like law with versions of Windows, i.e. every 2 years a new version comes out with twice as much code.
With the physical limitations of current components nearing the top of the "rate of diminishing returns" curve, poor performance of the software will eventually catch up with users' expectations. Thus, leaner, faster code could become a market direction.
"** NEW: Office-O-Lux, With 50% less redundant code! ***"
Re:Short sighted, or just playing it safe? (Score:3, Insightful)
Also palladium is only significant when you come to palladium-protected content. It will have no effect whatsoever on your ability to rip DVDs because any palladium-protected DVD wouldn't be DVD compliant and wouldn't work in your DVD player. The next public video format is YEARS away and by that time palladium will likely only be an unhappy memory, though it may be supplanted by some other hardware DRM scheme.
Think about this: A boating video game which uses computational fluid dynamics to determine how the water should behave. Or how about a racing game so realistic that you model chassis flex and installing subframe connectors actually makes a perceptible difference in handling?
Also, we are nowhere near movie-quality real-time rendering. We need an increase of several orders of magnitude in the number of polygons to really get there, or a revolution in the methods of rendering curved surfaces. There are practically no items in my apartment that don't require either a ton of polygons or a ton of curved surfaces to reach the point of photorealism. And to actually reach the point of realism, some surfaces will have to be rendered with some actual depth. In other words, you will have to render not just the surface of someone's skin, but a certain depth of it, to get a 100% realistic representation. Each wrinkle and crease must be rendered at a very high resolution to have them all move properly... Do you see where I'm going with this?
There will always be a demand for more processing power. As we get more of it on the desktop, operations which formerly required supercomputers (or at least mainframes) will become commonplace. Anyone remember when Visio 2000 came out using M$ SQL server for a backend? Once upon a time you needed a "server class" system just to run a RDBMS, now we use them to store miscellaneous data. (IIRC mysql was a standard part of linux life before visio enterprise 2000 came out but I'm trying to put it in the corporate perspective.)
Don't forget what Moore's law really says (Score:3, Insightful)
So logically we could continue on with the same speed processor and just have them get progressively cheaper. But hmm, I wonder whose profit margins this would affect? What he's setting us up for is that Intel will refuse to lower their prices. They'll continue to make the chips cheaper and cheaper but they won't sell them for any less.
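The pricing worry above is simple compound arithmetic: if manufacturing cost per chip halves with each process generation while the sticker price is held flat, the margin widens generation by generation. All numbers below are invented for illustration:

```python
# If manufacturing cost halves each process generation but the selling
# price stays fixed, the margin widens. All figures are invented
# illustrations, not real Intel pricing.

def unit_cost(initial_cost: float, generations: int) -> float:
    """Manufacturing cost after `generations` halvings."""
    return initial_cost / (2 ** generations)

PRICE = 300.0       # assumed flat selling price, dollars
START_COST = 150.0  # assumed initial manufacturing cost, dollars

for gen in range(4):
    cost = unit_cost(START_COST, gen)
    print(f"gen {gen}: cost ${cost:.2f}, margin ${PRICE - cost:.2f}")
```

Under these made-up numbers the margin climbs from $150 toward the full $300 as the cost term vanishes, which is the scenario the comment says Grove is setting up.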
I actually look forward to an end in ever increasing clock rates, because then we can all get back to programming skillfully and making tight efficient code.
Re:Short sighted, or just playing it safe? (Score:3, Insightful)
As for disk space, similar argument applies. The more space we have the more we will fill. We will have increasingly better quality films, higher framerates, etc, up until the point where we are recording details from many different angles with miniature cameras, and keeping the data nicely formatted and referenced for database use. Our needs will scale with the technology. We are always hoping for just a little more, and after that we see the next stone to hop to.
So that I'm not completely critical, the home user will find little reason to upgrade, as indeed they already do. But I'd say this has always been the case. Average Joe likes to get the fastest PC and the best DVD player, but he only wants to upgrade every so many years. Whereas scientists, gamers, hobbyists, etc, like to update regularly to take advantage of new advances that they can use immediately. So I'd say the cycle will continue much the same.
Re:Hmm.... (Score:3, Insightful)
Superconduction is overkill at this stage of the game.
More efficient cooling technologies (liquid cooling, Peltiers, etc.) could keep Moore's Law alive an extra 5 years. The primary difficulty today is resistive loss: all the current in those tiny wires dissipates power in the wires' resistance (this is why chips get so hot). As we all know, the cooler the chip, the faster it can run (resistivity falls at lower temperatures, with superconduction being the limiting case).
Anyhow, once current fab processes reach the wall, cooling technologies will probably have several good years of improvement that will directly enhance chip performance. That gives us a little more time to research new approaches (optical computing is probably the next step).