Andy Grove Says End Of Moore's Law At Hand 520
Jack William Bell writes "Intel chief Andy Grove says Moore's Law has reached its limit. Pointing to current leaks in modern chips, he says -- "Current is becoming a major factor and a limiter on how complex we can build chips," said Grove. He said the company's engineers "just can't get rid of" power leakage. -- But, of course, this only applies to semiconductor chips; there is no guarantee that some other technology will not take over and continue the march of smaller, cheaper, and faster processors. I remember people saying stuff like this years ago, before MOSFET." Update: 12/11 22:01 GMT by T : Correction: the text above originally mangled Andy Grove's name as "Andy Moore."
Andy Moore? (Score:5, Informative)
Re:Andy Moore? (Score:3, Funny)
Yes... -- was Re:Andy Moore? (Score:4, Informative)
Re:Yes... -- was Re:Andy Moore? (Score:5, Funny)
Well, don't worry. I'm sure one of the crack Slashdot editors will just go in and fix... oh yeah.
Re:Yes... -- was Re:Andy Moore? (Score:3, Funny)
Usually when folks say "crack", they mean "elite", not as a description of what they must be smoking...
Re:Yes... -- was Re:Andy Moore? (Score:3, Funny)
Re:Andy Moore? (Score:5, Informative)
Re:Andy Moore? (Score:3, Funny)
No No No No No! It's Gordon Moore's dorky brother
Andy Moore [cmu.edu]
Moore's Law (Score:2, Interesting)
Re:Moore's Law (Score:4, Funny)
Re:Moore's Law (Score:2)
Kinda like someone driving to an environmental protest in a yukon..
Moore Laws..? (Score:3, Insightful)
Regarding the natural world environment, you're correct, as I've seen some harsh criticism of the volume and toxicity of the waste byproducts of semiconductor manufacturing. It's not as simple as just adding a little sand and some magic and voila! It's probably not reported much because the wonders of innovation and heated competition make for sexier news writing.
Something not mentioned much, but observed by more than a few grumbling parties, is the ever-increasing size of code. My first encounter with this was upgrading from RSTS/E 7.? to 8.0, which was darn exciting back in the day, yet the size of the kernel would have been about 50% larger if we had activated all the features *I* wanted (and since I was the admin, lemme tellya, it was darn painful to trim off a few features I lusted after to squeeze it into our memory and performance target). These days, it's often the OS; ever notice how Windows installs got to needing more space than your entire first hard disk? The common response seems to be, just throw more memory at it. Yet I think there's a Moore-like law with versions of Windows, i.e. every 2 years a new version comes out with twice as much code.
With the physical limitations of current components nearing the top of the "rate of declining return" curve, poor performance of the software will eventually catch up with users' expectations. Thus, leaner, faster code could become a market direction.
"** NEW: Office-O-Lux, With 50% less redundant code! ***"
Other materials (Score:3)
I think they will just move away from silicon. Perhaps we have reached the limits of silicon, but there is a lot of research being done by academia and chip manufacturers on other materials.
Re:Other materials (Score:2)
I don't think semiconductors are going anywhere any time soon, because there is no viable technology that I am aware of to replace them. When we see an alternative form of processing born, we can start the countdown to the end of semiconductors.
I did a paper a while back on optical systems using optically controlled polarization rotators, filters, and the like to do binary and trinary logic, but the loss and size of such devices are huge.
Re:Other materials (Score:3, Informative)
Interesting to note, though, that while a germanium PN junction only has a voltage drop of 0.3V, silicon has a drop of 0.7V. Anyone know what the voltage drop would be for a carbon junction?
Also, one of the main reasons they switched from germanium to silicon was silicon's greater endurance to physical stress. I'm pretty sure diamond will be still stronger, despite the doping.
Maybe, just maybe, they'll be able to use channels in the diamond crystal as optic conductors. Considering crystalline Si is opaque, that would be a huge advantage. Wouldn't it be great if your clock signal was represented as a flash of light through the entire die? (Have to worry about reflection off the sides, though. Hmm.)
Anybody else have thoughts or knowledge?
Re:Other materials (Score:4, Interesting)
If chip design is at its limit for reduction, then other factors can still come into play, parallelization and multiprocessing coming to mind. Multiprocessing hasn't reached any type of limit. As chipsets improve, and CPUs play better together, overall computing power can continue to increase. (Yeah, all you geeks go on and tell me how multiprocessing isn't really doubling and is not as optimized, yadda yadda.)
The point is, CPU reduction is not the only path to processing power. It has just been the easiest so far. Watch for other paths to be optimized and utilized as this option peters out.
Hmm.... (Score:4, Interesting)
Course, that's probably 15 years away...
Re:Hmm.... (Score:3, Insightful)
Yes, we are all curious to see what the future holds for superconductors and semi-conductors (DUH). Yes, the two have nothing to do with each other in the context of this article. Superconductors can be used for transmission of signal, definitely important to computing, but not for creation of logic; fundamentally, you need something that passes information in one direction under certain conditions but doesn't pass information when the same outside conditions are applied in the opposite direction (e.g. semi-conductive materials).
Moore's Law doesn't need to be revived yet. It still holds true. Will it fail eventually? Absolutely. But if Grove could pull his head out of his ass and see the wood for the trees, he'd realize that it isn't going to happen soon. People stopped laughing at quantum, bio, and other computing theories a long time ago. If you step back and look at the big picture, you'll see Moore's law happily marching along and geeks like us making it happen. Grove is just shouting "look at me! I'm talking about theory of computing but saying nothing!"
Easy Answer (Score:3, Funny)
Re:Hmm.... (Score:3, Insightful)
Superconduction is overkill at this stage of the game.
More efficient cooling technologies (liquid cooling, Peltiers, etc.) could keep Moore's Law alive an extra 5 years. The primary difficulty today is back-inductance. All the current in those tiny wires creates magnetic fields that resist the original current flow (this is why chips get so hot). As we all know, the cooler the chip, the faster it can run. (This is because there's less back-inductance at lower temperatures, superconduction being the optimal case.)
Anyhow, once current fab processes reach the wall, cooling technologies will probably have several good years of improvement that will directly enhance chip performance. That gives us a little more time to research new approaches (optical computing is probably the next step).
Re:Hmm.... (Score:5, Funny)
I remember... (Score:4, Funny)
So, we'll see. I wonder if it now starts applying to graphics cards.
Re:I remember... (Score:4, Funny)
*ducks*
15% ! (Score:5, Funny)
No wonder my laptop only gets about an hour of runtime on its battery.
Re:15% ! (Score:2)
Re:15% ! (Score:2)
Well... (Score:2, Redundant)
Re:Well... (Score:2)
As transistors get smaller, fewer electrons are used to trigger them, but when the number gets low enough that the quantum behavior of the electron is a factor, the current model cannot be extended any more.
I could be wrong, just my 2 cents..
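The parent's point about countable electrons can be made concrete with a quick estimate. A minimal sketch, assuming a simple parallel-plate gate model; all device dimensions here are made-up illustrative values, not any real vendor's process:

```python
# Back-of-envelope estimate of how many electrons sit on a MOSFET gate,
# using a parallel-plate capacitor approximation (Q = C*V, N = Q/e).
# All device dimensions below are illustrative assumptions.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
K_SIO2 = 3.9       # relative permittivity of SiO2
Q_E = 1.602e-19    # elementary charge, C

def gate_electrons(length_nm, width_nm, oxide_nm, vdd):
    """Electrons stored on the gate, parallel-plate approximation."""
    area = (length_nm * 1e-9) * (width_nm * 1e-9)       # gate area, m^2
    cap = K_SIO2 * EPS0 * area / (oxide_nm * 1e-9)      # capacitance, F
    return cap * vdd / Q_E                              # electron count

# A 1990s-scale device vs an aggressively scaled one (made-up numbers):
big = gate_electrons(length_nm=350, width_nm=350, oxide_nm=7.0, vdd=3.3)
small = gate_electrons(length_nm=35, width_nm=35, oxide_nm=1.2, vdd=1.0)
print(f"older device: ~{big:.0f} electrons, scaled device: ~{small:.0f}")
```

With these assumed numbers the older device holds on the order of ten thousand electrons and the scaled one only a few hundred, which is where statistical and quantum effects start to matter.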
Great! (Score:5, Funny)
So, this means that anything that possibly can go wrong no longer will! Hey, I'm all for that!
What? Moore's Law? Oh. Nevermind.
Re:Great! -Murphy (Score:3, Funny)
"If Murphy's Law can go wrong, it will."
Short sighted, or just playing it safe? (Score:5, Interesting)
It might help the company if expectations for new CPUs aren't higher than what they can produce.
Personally, my vote goes for optical CPUs as the wave of the future. Being larger than current CPUs might not be a problem if they don't put off much heat.
Re:Short sighted, or just playing it safe? (Score:4, Interesting)
Sounds likely. AMD have been saying - and demonstrating - for years that clock speed isn't the whole story.
Also, we're just not finding compelling applications to drive upgrade cycles in the home and office. We have a few years until we reach movie quality real time rendering, and after that, what do we need more speed for? If AMD and Intel are gambling on the mass market wanting to perform ever faster ripping of movies and audio, they'd better stop supporting Palladium, hadn't they?
Re:Short sighted, or just playing it safe? (Score:3, Insightful)
Also palladium is only significant when you come to palladium-protected content. It will have no effect whatsoever on your ability to rip DVDs because any palladium-protected DVD wouldn't be DVD compliant and wouldn't work in your DVD player. The next public video format is YEARS away and by that time palladium will likely only be an unhappy memory, though it may be supplanted by some other hardware DRM scheme.
Think about this: A boating video game which uses computational fluid dynamics to determine how the water should behave. Or how about a racing game so realistic that you model chassis flex and installing subframe connectors actually makes a perceptible difference in handling?
Also, we are nowhere near movie-quality real-time rendering. We need an increase in the number of polygons of several orders of magnitude to really get there, or a revolution in the methods of rendering curved surfaces. There are practically no items in my apartment that don't require either a ton of polygons or a ton of curved surfaces to actually reach the point of photorealism. In addition, to actually reach the point of reality, some surfaces will have to be rendered with some actual depth. In other words, you will have to render not just the surface of someone's skin, but a certain depth of it, to get a 100% realistic representation. Each wrinkle and crease must be rendered at a very high resolution to have them all move properly... Do you see where I'm going with this?
There will always be a demand for more processing power. As we get more of it on the desktop, operations which formerly required supercomputers (or at least mainframes) will become commonplace. Anyone remember when Visio 2000 came out using M$ SQL server for a backend? Once upon a time you needed a "server class" system just to run a RDBMS, now we use them to store miscellaneous data. (IIRC mysql was a standard part of linux life before visio enterprise 2000 came out but I'm trying to put it in the corporate perspective.)
Re:Short sighted, or just playing it safe? (Score:3, Insightful)
As for disk space, similar argument applies. The more space we have the more we will fill. We will have increasingly better quality films, higher framerates, etc, up until the point where we are recording details from many different angles with miniature cameras, and keeping the data nicely formatted and referenced for database use. Our needs will scale with the technology. We are always hoping for just a little more, and after that we see the next stone to hop to.
So that I'm not completely critical, the home user will find little reason to upgrade, as indeed they already do. But I'd say this has always been the case. Average Joe likes to get the fastest PC and the best DVD player, but he only wants to upgrade every so many years. Whereas scientists, gamers, hobbyists, etc, like to update regularly to take advantage of new advances that they can use immediately. So I'd say the cycle will continue much the same.
Re:Short sighted, or just playing it safe? (Score:5, Interesting)
I'm more hopeful that we might get away from the whole stupid clock idea and go asynchronous. This area seems to be opening up more and more. It's been around forever, but nobody could find a reason to go to the extra expense.
If Moore's law fails, then I guess SMP will become mainstream. I mean, it's either that or software engineers write programs that are efficient. I expect to see an aerobatic display by flying pigs before I see an efficient program.
Re:Short sighted, or just playing it safe? (Score:3, Interesting)
one physicist brought down 200 years of Physics research.
Re:Short sighted, or just playing it safe? (Score:3, Interesting)
Of course I do agree that an asynchronous architecture makes more sense in some ways but I should think it would increase the complexity of programming for the system.
In the short term I think solutions like intel's hyperthreading (only not half assed) are the answer. I think AMD is in a unique position where it could implement an honest to goodness 2-way SMP on a single chip because of the way clawhammer is laid out, which is to say that it uses hypertransport. As we know, hyperthreading provides only small performance enhancements compared to actual SMP.
Re:Short sighted, or just playing it safe? (Score:3, Interesting)
Of course, there are always simpler operations that can get done a bit faster -- but as wire delay gets worse and transistors switch faster, routing information is becoming much more critical than computational delay. Calculation is pretty cheap; forwarding is expensive.
Re:Short sighted, or just playing it safe? (Score:3, Insightful)
So joe sixpack won't be motivated to cough up the dough for the upgrade.
No money, no research, no new speed barriers broken.
Specialized markets (CGI movie production? weather modelling?) will require lots more horsepower. But most corporate offices won't need it. In these cost-conscious times, that means they won't get it.
So the market for the new high-end processors will be much smaller. This will probably lead to a stratification of the CPU market. Like the difference between Celeron/Duron and P4/Athlon, but with a much bigger difference.
I read somewhere, maybe on Slashdot, that the next threshold of CPU speeds will be driven by the rise of accurate, real-time natural-language voice-recognition software (and, along with it, language-to-language translation). That kind of processing requires lots of cycles, but has broad, not specialized, applications.
The exception, and possibly the hole, to this theory is games. But DOOM III looks pretty damn impressive. What hardware does it require?
Just idle speculation...
sure sure... (Score:4, Funny)
Re:sure sure... (Score:4, Interesting)
- Grove said basically the same thing you said- if better insulators or other technologies aren't developed, Moore's Law could become "redundant" in 10 years.
- That said, there are ways to increase chip performance other than increasing transistor density according to Moore's law. Grove cites a few of them in that article (more efficient transistors, multiple cores, etc.). So you will still be able to play the latest Quake in 10 years.
Re:sure sure... (Score:5, Interesting)
Of course, I think something else will pop up (like the aforementioned optoelectronic switch, perhaps), since companies are resourceful folks. Academia is good about researching ways to reduce current leakage, and my prof says high-K dielectric insulators are a good way to reduce leakage through the gate. Whatever...something will come up.
My point is that the situation now is a lot more physically complex than that of, say, 1989 or something, where the limitation was "we can't go past 100 MHz because we haven't thought of a way to do it!" Now it's more "we can't go past [whatever] GHz because of goddamn physics!"
By the way, anyone else think Gordon Moore gets a little too much credit by having a "law" named after him? I mean, sheesh... all he did was draw a freakin' best-fit curve on a plot of easily found data. And on top of that, Moore's Law isn't a law at all... it's a statistic.
Re:sure sure... (Score:3, Interesting)
I think it is exceedingly likely that there will be advances in materials science and manufacturing that will prolong the validity of Moore's Law. It continues to be feasible to decrease core voltages, and newer heat-removal technologies and better dielectrics are showing promise. Even if each avenue provides only a linear reduction in dissipation, or a linear increase in our ability to deal with it, the end result is that the synergy allows us to eke out a few more years of exponential growth.
Lather, rinse, repeat.
"The End" (Score:4, Insightful)
Re:"The End" (Score:5, Funny)
Great! (Score:5, Funny)
So, back to Don Knuth's Books? (Score:5, Insightful)
However, maybe better processor architectures and clusters will keep the march going.
Either way, I believe some progress would be made.
S
Re:So, back to Don Knuth's Books? (Score:2, Funny)
Why not do like M$? (Score:2)
I believe the SOP at M$ is to take the result of the above and then run it through the optimizer again. Usually this results in a 5-7% speedup.
According to most sources, the plan for Longhorn is to run it through the optimizer one more time at the end. They think that this could net another 2-3%. We'll see.
Re:So, back to Don Knuth's Books? (Score:3, Interesting)
Try structuring the data better, and you will go far.
Yep, O(log n) will be king again (Score:4, Interesting)
Interestingly, the available memory will continue to grow, so we might end up structuring our data structures so that access time will be minimal. That is - our data structures will continue to change focus from compactness to raw speed. And big O analysis is part of that picture.
I think we'll see some interesting things happen with fiber technology, though. When those envisioned optimal silicon chips become commonplace and thus really cheap, all appliances might run on them, and thus make it feasible to distribute your processing between your computer, your fridge, and your iron. We'll just interconnect everything - perhaps a new fiber connector in our electricity plugs.
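The grandparent's point about big-O analysis becoming king again is easy to illustrate: once clocks stop climbing, picking the right structure for your data buys far more than any hardware upgrade. A minimal sketch counting comparisons for a linear scan versus a binary search over sorted data:

```python
# Why data-structure choice matters once clocks stop climbing:
# counting comparisons for O(n) linear search vs O(log n) binary search.

def linear_search(sorted_list, target):
    """O(n): scan from the front, counting comparisons."""
    comparisons = 0
    for i, value in enumerate(sorted_list):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

def binary_search(sorted_list, target):
    """O(log n): halve the search range each step."""
    lo, hi, comparisons = 0, len(sorted_list), 0
    while lo < hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(sorted_list) and sorted_list[lo] == target:
        return lo, comparisons
    return -1, comparisons

data = list(range(1_000_000))
_, n_lin = linear_search(data, 999_999)
_, n_bin = binary_search(data, 999_999)
print(f"worst case on a million items: linear {n_lin}, binary {n_bin}")
```

On a million sorted items the linear scan's worst case is a million comparisons while the binary search needs about twenty, and that gap only widens as memory keeps growing.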
Newton? (Score:5, Funny)
BTW, do most users really need fast machines? I can do all my work without any problems on my 333 MHz PII
CU
Re:Newton? (Score:3, Funny)
Yes. Better, faster, cheaper.
>> can do all my work without any problems on my 333Mhz PII
And you could probably ride a horse to work, too. So what?
Re:Newton? (Score:3, Insightful)
Re:Newton? (Score:3, Informative)
So sorry, Newton's laws are already old
Shucks... (Score:5, Funny)
Re:A bit overestimated (Score:3, Informative)
Anyway, taking your comparison and using a benchmark of the time (the Norton System Info benchmark [tripod.com]):
80286-16 got a 9.9 (i.e. 9.9x as fast as the XT)
80386-20 got a 17.5
More importantly the cache configurations that came with the 80386-25 raised the score to a 26.7
Adjusting for the increase in MHz:
26.7 * 16 / 25 = 17.1, which is close to double.
I'll stand by my statement.
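For anyone following along, the clock normalization in the post above works out like this (the scores are the Norton SI figures quoted in the thread; the assumption that score scales linearly with clock is the poster's, not a measured fact):

```python
# Clock-normalizing the Norton SI scores quoted above: scale the
# 80386-25 result down to 16 MHz and compare per-clock gain vs the 286.

score_286_16 = 9.9    # 80286 at 16 MHz (9.9x an XT)
score_386_25 = 26.7   # 80386-25 with cache

score_386_at_16 = score_386_25 * 16 / 25   # assume score scales with clock
per_clock_gain = score_386_at_16 / score_286_16

print(f"386 score normalized to 16 MHz: {score_386_at_16:.1f}")
print(f"per-clock gain over the 286: {per_clock_gain:.2f}x")
```

The normalized score comes out around 17.1, roughly 1.7x the 286's per-clock showing, which is the "close to double" claim.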
On a related note ... (Score:3, Funny)
Intel stock goes down like 50%
The End of Moore's Law (Score:4, Insightful)
Breaking the law, breaking the law... (Score:4, Funny)
I expect the Feds to start handing out stiff penalties to processor manufacturers who fail to meet the law's demands.
DAMN IT! (Score:2, Funny)
I guess it isn't a Law then (Score:4, Interesting)
Re:I guess it isn't a Law then (Score:2, Insightful)
Re:I guess it isn't a Law then (Score:3, Interesting)
"Postulate" I can agree to, but "axiom"? As in something obviously and nesessecarily true??
Myself, I would coin it "Moore's projection"
Well, possibly... (Score:2, Informative)
Just recently I attended a seminar by a Cambridge lecturer discussing the performance benefits of quantum computing - a roughly sqrt(n) maximum search relationship for unsorted lists, which seems silly - but that's just quantum stuff for you. Who knows, maybe it'll be the next jump to break against Moore's law. It does still look like it's a while off, though.
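The speedup the lecturer was presumably describing is Grover's algorithm, which needs on the order of sqrt(N) oracle queries versus about N/2 probes for a classical scan. A minimal arithmetic sketch (no quantum simulation, just the standard query-count formulas):

```python
# Query counts for searching an unsorted list of N items:
# classical expected probes (~N/2) vs Grover's algorithm (~(pi/4)*sqrt(N)).
import math

def classical_queries(n):
    """Expected probes for a uniformly placed target."""
    return n / 2

def grover_queries(n):
    """Optimal Grover iteration count, ~(pi/4)*sqrt(n)."""
    return (math.pi / 4) * math.sqrt(n)

for n in (1_000, 1_000_000):
    print(f"N={n}: classical ~{classical_queries(n):.0f} queries, "
          f"Grover ~{grover_queries(n):.0f}")
```

For a million items that's roughly 500,000 classical probes against under 800 Grover iterations, which is why people stopped laughing.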
Arrogant Intel (Score:2, Insightful)
Re:Arrogant Intel (Score:2, Interesting)
He said the company's engineers "just can't get rid of" power leakage.
Sounds to me like he is just saying Intel hasn't solved it yet (but neither has anybody else).
Thank Goodness! (Score:3, Insightful)
Measured by what? (Score:2, Insightful)
Well maybe... (Score:5, Insightful)
Maybe that's the way forward? Optimisations and improvements on the chips instead of raw clock speed...?
Re:Well maybe... (Score:5, Insightful)
So, speed and feature size are as good as they're going to get, and they were easy to do. Now we can work on the hard stuff with the benefit of all the processor power we've got sitting around unused.
Don't optimize the hard stuff until you've optimized the easy stuff.
Moore's Law Applies to Stories Like This (Score:4, Funny)
Chicken little (Score:2)
It may be true that current chip technology has reached its end, with no more progress possible. But believing that's "the end" is shortsighted. There has always been yet another way to see the law complied with, and I do not doubt there will be again this time. Be it optical, asynchronous logic, new materials, or whatever, it will probably happen.
It's not time to call Moore's law dead just yet.
Again? (Score:2)
Then again, they said it would be impossible to make semiconductors using geometries of less than 1 micron; they said that 8x was the fastest a CD-ROM could ever hope to read; they said that 14,400 baud was the fastest the telephone system could handle; and so on.
They were all wrong, just as Mr Grove most likely will be.
Still, I suppose if you prophesy doom often enough, you will eventually be right!
What will Joe Sixpack do? (Score:2)
In Soviet Russia, Moore's Law ends YOU!
Moore's law is all about transistor density (Score:4, Interesting)
So we're running out of ways to pack more and more transistors into a device. There's still a ton of room to improve the layout of those transistors; the world is full of complaints about the x86 architecture.
This doesn't mean 'computers are as good as they're going to get'; it just means the fabrication plants are as good as they're going to get.
Off Topic (Slightly)... (Score:2)
Finally an American CEO that understands the problems of shifting operations overseas.
We are definitely mortgaging the future of our children for today's short-term buck. Far too many businesses are willing to sell their souls to people who could one day go to war with the US.
[ More Quotes Like This ] (Score:5, Interesting)
But what... is it good for?
- Engineer at the Advanced Computing Systems Division of IBM, 1968, commenting on the microchip.
I think there is a world market for maybe five computers.
- Thomas Watson, chairman of IBM, 1943.
What can be more palpably absurd than the prospect held out of locomotives traveling twice as fast as stagecoaches?
- The Quarterly Review, England (March 1825)
The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it. . . . Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient.
- Dr. Alfred Velpeau (1839) French surgeon
Men might as well project a voyage to the Moon as attempt to employ steam navigation against the stormy North Atlantic Ocean.
- Dr. Dionysus Lardner (1838) Professor of Natural Philosophy and Astronomy, University College, London
The foolish idea of shooting at the moon is an example of the absurd length to which vicious specialization will carry scientists working in thought-tight compartments.
- A.W. Bickerton (1926) Professor of Physics and Chemistry, Canterbury College, New Zealand
[W]hen the Paris Exhibition closes electric light will close with it and no more be heard of.
- Erasmus Wilson (1878) Professor at Oxford University
Well informed people know it is impossible to transmit the voice over wires and that were it possible to do so, the thing would be of no practical value.
- Editorial in the Boston Post (1865)
That the automobile has practically reached the limit of its development is suggested by the fact that during the past year no improvements of a radical nature have been introduced.
- Scientific American, Jan. 2, 1909
Heavier-than-air flying machines are impossible.
- Lord Kelvin, ca. 1895, British mathematician and physicist
Radio has no future
- Lord Kelvin, ca. 1897.
While theoretically and technically television may be feasible, commercially and financially I consider it an impossibility, a development of which we need waste little time dreaming.
- Lee DeForest, 1926 (American radio pioneer)
There is not the slightest indication that [nuclear energy] will ever be obtainable. It would mean that the atom would have to be shattered at will.
- Albert Einstein, 1932.
Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps only weigh 1.5 tons.
- Popular Mechanics, March 1949.
(Try the laptop version!)
There is no need for any individual to have a computer in their home.
- Ken Olson, 1977, President, Digital Equipment Corp.
I have traveled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won't last out the year.
- The editor in charge of business books for Prentice Hall, 1957.
[Quotes from this page [athenet.net].]
Re:[ More Quotes Like This ] (Score:5, Interesting)
From 1903 up until that point, aircraft design was on a curve almost as impressive as Moore's law. In the 1960s, the rate of improvement hit a wall, and there have only been small incremental improvements since then. (And much of that has been achieved by "cheating": glomming onto Moore's law by cramming electronics into the aircraft.)
Electronics technology is bound to hit a similar limit of economically feasible improvements sooner or later.
Re:[ More Quotes Like This ] (Score:4, Informative)
Gates is supposed to have said, "640K should be enough for anyone." The remark became the industry's equivalent of "Let them eat cake" because it seemed to combine lordly condescension with a lack of interest in operational details. After all, today's ordinary home computers have one hundred times as much memory as the industry's leader was calling "enough."
It appears that it was Marie Thérèse, not Marie Antoinette, who greeted news that the people lacked bread with qu'ils mangent de la brioche. (The phrase was cited in Rousseau's Confessions, published when Marie Antoinette was thirteen years old and still living in Austria.) And it now appears that Bill Gates never said anything about getting along with 640K. One Sunday afternoon I asked a friend in Seattle who knows Gates whether the quote was accurate or apocryphal. Late that night, to my amazement, I found a long e-mail from Gates in my inbox, laying out painstakingly the reasons why he had always believed the opposite of what the notorious quote implied. His main point was that the 640K limit in early PCs was imposed by the design of processing chips, not Gates's software, and he'd been pushing to raise the limit as hard and as often as he could. Yet despite Gates's convincing denial, the quote is unlikely to die. It's too convenient an expression of the computer industry's sense that no one can be sure what will happen next.
In unrelated news... (Score:2)
"It will be a boon to our company," said Grove. "Consumers like more G,H, and z's and investors like more money!"
Process technology (Score:2)
There should be a law... (Score:3, Redundant)
Depends how you define Moore's Law (Score:4, Insightful)
However, if you define Moore's law as computational capacity doubling every 18 months, then it is very unlikely to end. If you project back to well before integrated circuits, or the law itself, computational capacity has been growing at this same exponential rate for many decades - even back to the earliest mechanical "computers". There will be something to replace the current paradigm; the paradigm has already changed numerous times without throwing off the exponential curve.
For a fascinating look at this phenomenon and what it holds for the future, I'd recommend The Age of Spiritual Machines: When Computers Exceed Human Intelligence [amazon.com] by Ray Kurzweil.
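The doubling claim above is easy to tabulate. A minimal sketch of "capacity doubles every 18 months" projected over a few spans:

```python
# Projecting "computational capacity doubles every 18 months":
# growth multiplier after a given number of years.

def capacity_multiple(years, doubling_months=18):
    """Growth factor after `years` at one doubling per `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for years in (3, 10, 20):
    print(f"after {years} years: ~{capacity_multiple(years):.0f}x")
```

Three years buys 4x, a decade roughly 100x, and two decades about 10,000x, which is why the curve survives paradigm shifts that each contribute only part of the run.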
Moore's Law (Score:5, Insightful)
This is consistent with the SIA roadmap (Score:5, Insightful)
That's about right. It's a bit more pessimistic than the SIA roadmap, but it's close. Grove was just stating, for a general audience, what's accepted in the semiconductor industry. Optical lithography on flat silicon comes to the end of its run within a decade. Around that point, atoms are too big, and there aren't enough electrons in each gate.
There's been a question of whether the limits of fabrication or the limits of device physics would be reached first. Grove apparently thinks the device problem dominates, since he's talking about leakage current. As density goes up, voltage has to go down, and current goes up. A Pentium 4 draws upwards of 30 amps at 1.2 volts. We're headed for hundreds of amps. It's hard to escape resistive losses with currents like that.
There are various other technologies that could lead to higher densities. But none of them are as cheap on a per-gate basis.
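The arithmetic behind the "hundreds of amps" worry above: at fixed power, current scales inversely with supply voltage, and resistive loss in the delivery path scales with the square of the current. The 30 A at 1.2 V figure comes from the parent post; the 1 milliohm path resistance below is a made-up illustrative value:

```python
# P = V * I for the supply current, P_loss = I^2 * R for resistive loss
# in the power-delivery path. Watch what happens as Vdd drops.

def supply_current(power_w, vdd):
    """Current drawn at a given power and supply voltage (I = P/V)."""
    return power_w / vdd

def resistive_loss(current_a, r_ohm):
    """Power dissipated in the supply path (P = I^2 * R)."""
    return current_a ** 2 * r_ohm

chip_power = 36.0   # ~30 A at 1.2 V, per the parent post
R_PATH = 0.001      # assumed 1 milliohm supply path (illustrative)

for vdd in (1.2, 0.6, 0.3):
    i = supply_current(chip_power, vdd)
    print(f"Vdd={vdd} V -> {i:.0f} A, "
          f"{resistive_loss(i, R_PATH):.1f} W lost in the supply path")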
Mo(o)re or less? (Score:3, Interesting)
-Of course, the ultimate limit of a 1-atom transistor; I can't remember the date this would occur
-Limited speed of signals across the chip: if the clock frequency gets much higher, a signal would require several buffer stages to reach the other side.
-Capacitance of the wires gets more important: the interconnects don't scale at the same pace as the transistors, and their finite capacitance limits clock speeds.
Some non-technical reasons:
-Increasing costs of new fabrication processes: each new increment is more expensive.
-Limited manpower to design circuits with more and more transistors. This probably means that a larger area of the chips will consist of 'dumb' circuits like cache.
It seems to me.. (Score:3, Insightful)
Seriously, we've risen above much greater challenges than this..
It sorta sounds like Intel is about ready to quit trying to innovate; perhaps it's time for AMD to take the lead.
Apropos links (Score:3, Informative)
This has to be a classic quote (Score:3, Funny)
Duh! Funny, I have never seen any (properly connected) microprocessor chip generating much in the way of light, sound, or X-rays. I suppose a teensy weensy amount might go off as RF emissions, but not from the DC leakage current.
Well he *would* say that now (Score:3, Insightful)
Moore's Law is not dead. What is dead is the need for Moore's Law. I am not alone in noticing that, after 20 years of regular performance increases, things are now pretty good on the desktop, and excellent in the server room. Real changes now need to be in software and services. Further, high-performance computing is going the route of multiple cores per CPU, multiple CPUs per box, and clusters of boxes. The latter is probably the biggest innovation since Ethernet. So, who needs Moore's Law?
Intel and AMD know *all* this. They want out of the clock race, and yesterday. They want to get into the next level of things, which is defining services and uses for their existing products. They are seeing the end of the glamour years of the CPU and the rise of the era of information appliances, which *must* be portable. Users *will* be far more sensitive to battery life and perceptions of performance (latency and ease of use) and far less sensitive to theoretical performance measures.
Flame me if you like, but the geek appeal of personal computers is disappearing. Sure there will be people who fiddle with CPUs as a hobby, just as they did 30 years ago when the Apple computer was born to serve a small group of hobbyists. But is that the mainstream? Is that going to support Intel and AMD in their race? Are those companies going to promote a revolution in fab technology, to the tune of half a trillion dollars in investment and technology between them, just to support geeky hobbyists? They could, but they won't, because that is not the future. It is the past.
The future will still be interesting, mind you, but the challenge has changed. A phone that fits in your rear molar and runs off chemical reactions with your own saliva looks far more lucrative to these companies than a CPU that runs at 100 GHz and consumes as much power as an apartment complex.
Threshold Voltage (Score:5, Insightful)
You can fairly easily raise the threshold voltage (for a process). It makes the chip slower, but leaks less current (and therefore usually uses less power). This is one of the key elements of "Low Power" processes like CL013LP.
For more information, the Britney Spears' Guide to Semiconductor Physics [britneyspears.ac] is sure to help.
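The exponential relationship the parent post leans on can be sketched in a few lines: subthreshold leakage drops by roughly one decade for every "subthreshold swing" worth of added threshold voltage. The 10^(-Vt/S) form is textbook device physics, but the specific numbers below (the 1 uA reference current, the 85 mV/decade swing) are illustrative assumptions, not figures for any real process like CL013LP.

```python
# Sketch of how subthreshold leakage falls as threshold voltage rises.
# All numeric values are assumed for illustration only.

def leakage_current(vt_volts, i0_amps=1e-6, swing_v_per_decade=0.085):
    """Leakage drops one decade for every 'swing' volts of threshold voltage."""
    return i0_amps * 10 ** (-vt_volts / swing_v_per_decade)

low_vt = leakage_current(0.20)   # fast, leaky transistor
high_vt = leakage_current(0.40)  # slower, far less leaky

# Raising Vt by 0.2 V cuts leakage by 0.2/0.085 ~ 2.35 decades (~225x).
print(f"low-Vt leakage:   {low_vt:.2e} A")
print(f"high-Vt leakage:  {high_vt:.2e} A")
print(f"reduction factor: {low_vt / high_vt:.0f}x")
```

This is the trade the post describes: the same 0.2 V that buys you two-plus decades of leakage reduction also raises the gate overdrive needed to switch, which is why the low-power process is slower.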
Interestingly, using leaky transistors that switch faster has been a trick in use for a very long time. One of the reasons the Cray computers took so much cooling was that they didn't use MOSFETs; their whole process was based on PNP and NPN junction transistors. For those who don't know much about transistors: a FET (Field Effect Transistor) is built around a little capacitor. Charge it up (or don't, depending on the type) and current is allowed to flow on the other side. It takes a while to charge up the capacitor (time constant proportional to Resistance times Capacitance, remember!), but once it's charged there isn't any current that flows through (except the leakage current).
At least, that's what I recall from my classes. I didn't do so well in the device physics and components classes.
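The RC charging the post half-remembers works out like this: the gate voltage approaches its final value as 1 - e^(-t/RC), so it's about 63% charged after one time constant. The resistance and capacitance values below are made-up round numbers for a sketch, not parameters of any real process.

```python
# Back-of-the-envelope RC gate delay: the gate is a small capacitor
# charged through a driving resistance, reaching ~63% of its final
# voltage after one time constant tau = R*C. R and C are assumed values.
import math

def charge_fraction(t_seconds, r_ohms, c_farads):
    """Fraction of final gate voltage reached after t seconds: 1 - e^(-t/RC)."""
    tau = r_ohms * c_farads
    return 1.0 - math.exp(-t_seconds / tau)

r = 1e3    # 1 kOhm driving resistance (assumed)
c = 1e-15  # 1 fF gate capacitance (assumed)
tau = r * c  # 1 picosecond time constant

print(f"after 1*tau: {charge_fraction(tau, r, c):.1%} charged")
print(f"after 3*tau: {charge_fraction(3 * tau, r, c):.1%} charged")
```

Lowering R (a stronger driver) or C (a smaller gate) shrinks tau, which is exactly why shrinking transistors has historically made them faster.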
Um...no? (Score:3, Insightful)
I'd never put a limitation on this, since somebody's going to come up with an idea to eke out more clocks.
Density is not everything (Score:4, Insightful)
We've now reached the stage where handheld devices have the same sort of processing power and memory of respectable desktops of a few years back, and I find it interesting that the sudden big hype is the tablet PC, which is relatively low speed but has good battery life. That could be the direction things are going, and if so it is hardly surprising Andy Grove is worried about leaking electrons, what with Transmeta, Via and Motorola/IBM having lower power designs.
A case in point about technology demonstrators. Someone mentioned aircraft. OK, how much faster have cars got since, say, 1904 when (I think) RR first appeared? Not an awful lot, actually. They are vastly more reliable, waterproof, use less fuel, handle better, are safer, and enormously cheaper in real terms BUT they go about the same speed from A to B and carry about as many people. And they are still made of steel and aluminum, basically the same stuff available in 1904.
This is far from a perfect analogy because, of course, the function of the computer keeps getting reinvented: it is applied to more and more jobs as it gets cheaper, more powerful, and more reliable. But it does point out that the end of Moore's law is not the end of research and development.
A quick note on Moore's Law (Score:3, Insightful)
This is not what Gordon Moore said. Moore's statement was based on transistor density. Indeed, perhaps we may not be able to cram transistors together as much in the not too distant future.
Does this mean that chips won't continue to get twice as fast every 18 months? It would surprise me if processors slowed down their rate of speed growth much this decade. As people begin playing with digital video on the desktop, as people write games that can actually push enough information to a GeForce4 FX to make it worth spending money on, people are still going to want faster and faster machines. And while AMD still exists as a competitor to Intel, even those people who don't really need a 7 GHz machine are going to find that that's what's available.
So while Moore's law, as it was stated, may be nearing its end, Moore's law, as it is usually spoken will probably stick around for a good while longer.
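The popular "twice as fast every 18 months" reading the post refers to is just compound doubling, which is easy to put in concrete terms. The 3 GHz starting point below is a hypothetical, chosen only to show the arithmetic.

```python
# Compound growth implied by the popular reading of Moore's Law.
# The 3 GHz base clock is an assumed example value.

def projected_speed(base_ghz, years, doubling_months=18):
    """Clock speed after 'years' of doubling every 'doubling_months'."""
    return base_ghz * 2 ** (years * 12 / doubling_months)

# Ten years of 18-month doublings: 2^(120/18) ~ 102x.
print(f"{projected_speed(3.0, 10):.0f} GHz")  # ~ 305 GHz
```

Which is the point: even a modest slowdown in the doubling period compounds into an enormous difference a decade out.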
What may be coming... (Score:4, Interesting)
Honestly, I think a bigger trend will be to take advantage of formalisms that let developers build more reliable and stable software. Now, I know and you know that things like functional programming have been out there for years, and haven't succeeded in part because they were too slow and therefore wasted too many processor cycles. This is obviously much less of a problem now - Java "wastes" lots of processor cycles, but for a lot of software needs, saves so many human "thinking" cycles that it pays off in spades for businesses that need business or enterprise software to Do Stuff for the back-end sides of industry.
So what big problem(s) are left in the software world? Well, people still bitch about how fucking unreliable most software is. In particular, core, critical system areas, like the interface between hardware and software - as more hardware is out there, and more drivers are developed, and backwards compatibility is an issue, hardware interactions have not become substantially more reliable. And frankly a lot of applications themselves, have become substantially less reliable - the big problem is that adding features and changing GUIs seems to break too many things and introduce too many potential problems (look at Outlook XP vs. Outlook 2000 - fixed some security holes, made a prettier GUI, and made the damn thing crash all the time).
Look at a lot of the academic work being done in computer science, especially in programming language design, operating system design, parallel algorithms and parallel languages. Sometimes researchers head off down dead-end paths, but sometimes they have it right, and it just takes a while for industry to see what they need this stuff for. That being said, it'll always be cheaper to teach people "Programming in Java 101" in India and then hire 1000 of them to hack away at code, admittedly usually for the most uninteresting and repetitive types of development work (at least, this will hold until economic parity in the third world becomes a reality).
I wonder... (Score:3, Interesting)
I read a post earlier in which the poster thought AMD was abdicating a "clock speed" race. Obviously, this sentiment, among so many like it, comes from Hector Ruiz's speech last week in which he said that AMD wasn't going to do "technology for technology's sake." I wish Hector had made himself a bit clearer...;)
What I think he meant was that unlike Intel with Itanium, AMD was not going to design brand-new technologies with no practical worth simply for the sake of performance (because Itanium has no software it's very nearly useless--except for doing PR benchmarks for Intel.) That's why AMD chose to do x86-64--because it is technology for practicality's sake. That's my take on that statement.
Also, AMD has been out of the "clock race" ever since they designed the K7. The race AMD wants to win, and has been winning, is the "performance race," which doesn't depend on raw MHz. Any P4 will be much slower than any K7 when clocked at the same MHz. That's why AMD's been using performance ratings--because they are much better measures of performance than mere MHz could ever be between competing CPUs with differing architectures.
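The "performance race" argument reduces to simple arithmetic: delivered throughput is roughly instructions-per-clock times clock rate, so a lower-MHz chip with better IPC can match or beat a higher-clocked one. The IPC figures below are invented for illustration, not measured values for any real K7 or P4.

```python
# Performance ~ IPC * clock rate. IPC and MHz values here are
# hypothetical, chosen only to show how a clock deficit can be
# offset by doing more work per cycle.

def relative_performance(ipc, mhz):
    """Rough throughput in instructions per microsecond."""
    return ipc * mhz

k7 = relative_performance(ipc=1.2, mhz=1800)  # assumed IPC
p4 = relative_performance(ipc=0.9, mhz=2400)  # assumed IPC

print(k7, p4)  # both ~ 2160: same throughput despite the 600 MHz clock gap
```

This is the idea behind performance ratings: the number that matters to the user is the product, not the MHz factor alone.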
Even once the switches reach the physical limit... (Score:3, Interesting)
But there's more that can be done - in terms of geometry and organization.
Current chips are a single two-dimensional array of components (or sometimes a small number of layers). But build your gates and interconnects in 3-D and you can go farther on two fronts:
- Speeding up the individual functions a bit further. (The more complex, the more improvement).
- Combining a LARGE number of parallel elements into a small space (so they can talk to each other quickly).
Back in the '70s I had a rap I'd do called "preposterous scale integration". Basic idea:
- Use diamond for the semiconducting material (because it conducts heat VERY well).
- Grow a LARGE sheet of it, writing the domain doping and interconnects with ion beams as you go.
- TEST the components as you go:
- Negative power lead is a slow (low acceleration voltage) electron beam.
- Positive power lead is a fast (high acceleration voltage) electron beam, causing secondary emission of more electrons than are in the beam.
- Test injection probes are smaller versions of the power leads.
- Test probe is a very slow electron beam, where the electrons turn around at the surface, and a positively-charged region will suck 'em to the chip.
(These are all variants of electron microscope imaging hacks that were in use as far back as the 70s.)
- If a component fails, turn up the current, vaporize it, and deposit it again. Repeat until you have a good one.
- When you're done with the layer, don't stop. Deposit another layer, and another.
- Apply power to two opposite faces of the cube. Use bus bars the size of the cube face - at least near the contact point - to minimize IR drop. Use a good conductor, like copper or silver.
- You need a LOT of cooling. So circulate cooling liquid in the bus bars. (Copper and silver are also good heat conductors, and water is a terrific heat carrier.)
- The other four faces are for I/O. Use semiconductor lasers, photodiodes, and fiber optics light-pipes. You can COVER the faces with fibers. Put your drive electronics and SerDeses in the layer just under the pipes - or dope the index of refraction of the diamond to make a light-pipe into the depths and distribute them throughout the volume.
- Diamond is stable up to very high temperatures, but you need to protect it from air when it gets hot (or it will burn). So put it in a bottle with an inert gas just in case. The limiting temperature structurally is about where it starts going over into graphite, so you can let it get up to a dull red glow (if your I/O is at some bluer color and that temperature doesn't create too much thermal noise).
- How big can you get? Square-cube law limits your I/O-to-computation ratio, since the I/O is on four faces that go with the square of the linear dimension, the computation goes (approximately) with the volume, or the cube of the dimension. The cooling-to-gate ratio suffers a similar square-cube issue (plus a linear penalty for power losses from the internal distribution busses). You also have an interconnect penalty - as you get bigger you have to give a higher fraction of your volume to power and signal lines (or signal repeaters), but this actually improves the square-cube problems. Finally, construction time is about proportional to number of computational elements. So let's pull a number out of nowhere and say two meters on a side.
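The square-cube tension described above is easy to make concrete: gates fill the volume (edge cubed) while I/O fibers cover four faces (edge squared), so I/O per gate falls off linearly as the cube grows. The densities below are arbitrary units, purely to show the trend.

```python
# Square-cube scaling for the diamond cube: compute goes with edge^3,
# I/O (fibers on four of the six faces) with edge^2. Unit densities
# are arbitrary placeholders.

def io_per_compute(edge_m, gates_per_m3=1.0, fibers_per_m2=1.0):
    """I/O capacity available per unit of compute; falls off as 1/edge."""
    compute = gates_per_m3 * edge_m ** 3
    io = fibers_per_m2 * 4 * edge_m ** 2  # four faces carry fibers
    return io / compute

for edge in (0.5, 1.0, 2.0):
    print(f"{edge} m cube: I/O per unit of compute = {io_per_compute(edge):.2f}")
```

Doubling the edge doubles the compute available per fiber of I/O you can attach, which is why the post has to hand-wave a stopping point ("two meters on a side") rather than derive one.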
Of course the punch line is what the device would look like:
- A six-foot cube of diamond.
- Glowing cherry red.
- In a glass bottle of inert gas.
- Supported by water-cooled silver bus bars.
- And connected to everything else by an enormous number of glass fiber light-pipes.
In other words, the kind of thing you'd expect to be the ship's brain in a late-model Skylark spacecraft, from one of E. E. "Doc" Smith's golden-age science fiction novels. B-)
====
This rap was always entertainment rather than a serious proposal, and is no doubt buggy. For instance: I hear doping diamond is a bit problematic. And these days I'd suggest doing chip-under-construction powering and testing using physical contacts and JTAG full-scan or a variant of the CrossCheck array, rather than (or to supplement) the electron beams.
But I hope the point is made that, for parallelizable tasks at least, we still have a LONG way to go with improved geometry before we finally hit the wall.
Don't forget what Moore's law really says (Score:3, Insightful)
So logically we could continue on with the same speed processor and just have them get progressively cheaper. But hmm, I wonder whose profit margins this would affect? What he's setting us up for is that Intel will refuse to lower their prices. They'll continue to make the chips cheaper and cheaper but they won't sell them for any less.
I actually look forward to an end in ever increasing clock rates, because then we can all get back to programming skillfully and making tight efficient code.
Re:What about silicon-on-insulator (Score:3, Informative)
IBM has been using partially-depleted SOI which actually increases leakage current and therefore increases standby power.
Fully-depleted SOI should have lower leakage current due to better control over the transistor channel. While Intel doesn't call it SOI, they announced their "terahertz transistor" sometime last year which is actually a fully-depleted SOI device.
Another way to reduce leakage power would be to use a dual-gate structure when building the transistor. There is a decent amount of research going on in this field. Dual-gate devices would offer large decreases in leakage current.