Intel

45 Years Later, Does Moore's Law Still Hold True?

Velcroman1 writes "Intel has packed just shy of a billion transistors into the 216 square millimeters of silicon that compose its latest chip, each one far, far thinner than a sliver of human hair. But this mind-blowing feat of engineering doesn't really surprise us, right? After all, that's just Moore's Law in action, isn't it? In 1965, an article in 'Electronics' magazine by Gordon Moore, the future co-founder of chip juggernaut Intel, predicted that computer processing power would double roughly every 18 months. Or maybe he said 12 months. Or was it 24 months? Actually, nowhere in the article did Moore spell out that famous declaration, nor does the word 'law' appear in the article at all. Yet the idea has proved remarkably resilient over time, entering the zeitgeist and lodging like a stubborn computer virus you just can't eradicate. But does it hold true? Strangely, that seems to depend more than anything on whom you ask. 'Yes, it still matters, and yes we're still tracking it,' said Mark Bohr, Intel senior fellow and director of process architecture and integration. 'Semiconductor chips haven't actually tracked the progress predicted by Moore's law for many years,' said Tom Halfhill, the well-respected chip analyst with industry bible the Microprocessor Report."
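As a rough back-of-the-envelope illustration of why the exact doubling period matters, here is a small Python sketch (not part of the original submission; the starting figure of about 64 components per chip in 1965 is an assumed round number in the spirit of Moore's original data, and the roughly one-billion-transistor chip is taken from the summary above):

    # Project transistor counts from 1965 to 2011 under different doubling periods.
    # The 1965 starting count (~64 components per chip) is an illustrative assumption.
    def projected_transistors(start_count, years, doubling_period_months):
        doublings = (years * 12) / doubling_period_months
        return start_count * 2 ** doublings

    for months in (12, 18, 24):
        count = projected_transistors(64, 2011 - 1965, months)
        print(f"Doubling every {months} months: ~{count:.2e} transistors by 2011")

A 24-month doubling lands within a factor of two of the billion-transistor chip described above; an 18-month doubling overshoots it by a factor of about a hundred, and a 12-month doubling by millions. Which version of the 'law' you start from largely determines whether you think it still holds.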
  • by kenrblan ( 1388237 ) on Tuesday January 04, 2011 @04:13PM (#34757516)
    Well said. I was about to post the same question. The progress definitely matters, but the prediction is really not much more than an engineering goal at this point. That goal is secondary to the goal of remaining the market leader. Without intending to start a flame war, I wish the programming side of computing were as interested in making things smaller and faster in code. Sure, there are plenty of academically oriented people working on it, but in practice it seems that most large software vendors lean on the crutch of improved hardware rather than writing tight, well-optimized code. Examples include Adobe, Microsoft, et al.
  • by vux984 ( 928602 ) on Tuesday January 04, 2011 @04:54PM (#34757972)

    > It was easier to measure then, because performance was directly related to clock rate.

    It was easier to measure then because real-world performance was actually doubling, and that was apparent in most benchmarks.

    > Now that clock speed has stopped going up, performance depends on parallel processing.

    Performance isn't doubling anymore. Cores are increasing, pipelines are being reworked, cache is increasing, but PERFORMANCE isn't doubling.

    > Then there's a catch: parallel processing depends on the software.

    It depends on the task itself being parallelizable in the first place, and many, many tasks aren't.

    > Luckily, the most demanding tasks in computing are those that can be parallelized.

    Unfortunately, it's the aggregate of a pile of small, independent, undemanding tasks that drags modern PCs to a crawl. And these aren't even bottlenecking the CPU itself... to be honest, I don't know what the bottleneck is in some cases. I'll open up the task manager... CPU utilization is comfortably low on all cores, the hard drive light is idle so it shouldn't be waiting on I/O... and the progress bar just sits there... literally 20-30 seconds later things start happening again. WHAT THE HELL? What are the possible bottlenecks that cause this?
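The point about parallelizability in the comment above can be made concrete with Amdahl's law, which the commenter doesn't name but which is the standard way to bound the benefit of extra cores: if only a fraction p of a task can run in parallel, no number of cores can speed it up by more than 1/(1 - p). A minimal sketch in Python:

    # Amdahl's law: overall speedup when a fraction `parallel_fraction` of the
    # work can be spread across `cores` and the remainder stays serial.
    def amdahl_speedup(parallel_fraction, cores):
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / cores)

    # A half-parallelizable task barely benefits from more cores; a 95%
    # parallelizable one keeps scaling for a while before flattening out.
    for p in (0.50, 0.95):
        for cores in (2, 4, 8, 64):
            print(f"p={p:.2f}, {cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x")

Even with 64 cores, the half-parallel task tops out just under 2x, which is one reason doubling transistor counts no longer translates into doubled perceived performance.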
