Intel Scraps Plan For 4 GHz P4 Chip
bizpile writes "It was reported earlier that Intel would be delaying the release of their 4 GHz Pentium 4 chips, but it now appears that they will be cancelling them altogether. The announcement came Thursday, and Intel says they are going to rely on approaches besides faster clock speeds to improve the performance of their chips. Engineers are working on adding additional cores to a single chip and improving the efficiency of how the chips interact with the rest of the system. Intel spokesman Chuck Mulloy said, "Those are the sort of things where you get more capability out of a processor by designing specific silicon solutions as opposed to just keep turning the clock faster." In the meantime, Intel is planning on releasing a 3.8 GHz chip with 2MB of cache."
At last! Intel realizes that.... (Score:5, Insightful)
Re:At last! Intel realizes that.... (Score:4, Insightful)
Re:At last! Intel realizes that.... (Score:5, Funny)
"Processor numbers will be categorized in 3-digit numerical sequences such as 7xx, 5xx, or 3xx."
I'll bet dollars to donuts that the ad guy who came up with the new naming system owns a BMW.
-B
Re:At last! Intel realizes that.... (Score:3, Funny)
"I'll bet dollars to donuts that the ad guy who came up with the new naming system owns a BMW."
Actually I think it's a silver Carrera (I may be mistaken, as I don't work in the processor group)
Re:At last! Intel realizes that.... (Score:3, Funny)
Unfortunately the model number will be the same as the price in dollars...
Re:At last! Intel realizes that.... (Score:3, Funny)
Re:At last! Intel realizes that.... (Score:4, Informative)
Either that, or he owns an Opteron server and AMD already took all the even numbers... [amd.com]
BZZT! (Score:5, Insightful)
Re:BZZT! (Score:3, Informative)
SPARC and Alpha processors were the same way, to some extent. Basically, Intel racked 1 category
Re:At last! Intel realizes that.... (Score:5, Insightful)
Re:At last! Intel realizes that.... (Score:2)
Re:At last! Intel realizes that.... (Score:3, Insightful)
(A premature one, too, surely; multi-core and Pentium-M-based desktop kit isn't due for ages is it? And won't multi-core chips have to be developed from P-M tech anyway? I can't see *two* Prescotts on one die being easily coolable...)
Bunging more cache on the chip is a
Yeah...and their PR department finally conceeded.. (Score:5, Insightful)
OR (Score:3, Insightful)
They'd be FORCED to use a numbering scheme because any conspicuous lowering of the MHz would cause Joe Shmoe to say "What the hell?"
Re:Yeah...and their PR department finally conceede (Score:5, Interesting)
The Slashdot crowd is quick to attack Intel because they're the big guys, but the NetBurst architecture is an extremely powerful and (gasp!) good architecture. The engineers designed it for maximum pipelinability (over 30 stages now), and this is not really a bad thing. Pipelining a processor is a good thing in general. Its main benefit is that it allows a processor to run at a higher clock speed. That is what pipelining was created for: to break the work into smaller slices so more can happen in parallel. This works best when each stage takes approximately equal time, and I have enough faith in the Intel engineers that no single stage was much longer than the next longest stage.
Back to the point, though: the pipeline does have downsides. A processor with 20 stages will lose roughly twice as many cycles on a branch misprediction (and more on a cache miss, though that number varies further) compared to a 10-stage processor. However, if by using 20 stages we cut the cycle length by even 50%, the additional stages were worthwhile. Cache misses are not a common event and branch prediction is in the 95+% range now, so the added stalls are not as large as you'd think.
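The tradeoff described above can be sketched numerically. All the figures here (branch frequency, misprediction rate, and the assumption that doubling the depth halves the cycle time) are illustrative, not Intel's real numbers:

```python
# Toy model of pipeline depth vs. branch misprediction cost.
# Assumptions (illustrative only): the misprediction penalty equals the
# pipeline depth, and doubling the depth halves the cycle time.

def avg_time_per_instruction(cycle_time_ns, depth, branch_frac=0.2,
                             mispredict_rate=0.05):
    """Average time per instruction for a simple in-order pipeline.

    Ideal CPI is 1; each mispredicted branch flushes the pipe and
    costs roughly `depth` extra cycles.
    """
    penalty_cycles = branch_frac * mispredict_rate * depth
    return cycle_time_ns * (1.0 + penalty_cycles)

shallow = avg_time_per_instruction(cycle_time_ns=1.0, depth=10)  # 1.1 ns/insn
deep = avg_time_per_instruction(cycle_time_ns=0.5, depth=20)     # 0.6 ns/insn
# The deeper pipe pays twice the flush cycles per misprediction but still
# comes out ahead, because each cycle is half as long.
```

Under these assumptions the 20-stage design wins despite the larger flush cost, which is the poster's point: deeper pipelines were a rational bet until leakage stopped the clock scaling.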
What the Pentium 4 did was take this to a larger scale. Unfortunately, the engineers designing the processor did not anticipate the massive leakage currents seen at the speeds Intel is using. From a computer architect's standpoint, they built on past assumptions, and more stages in a pipe generally help out, so that's what they did. While the end result is not as impressive as they were hoping, it is not a poor product.
Now, what has the NetBurst architecture offered consumers? One of its main offerings is an SMT processor (Hyper-Threading in marketing speak). SMT is more than mere marketing hype. It was not an afterthought thrown onto the P4 due to less-than-stellar performance, as people have hinted. SMT was originally designed for the Alpha EV8 chip that was scrapped. Intel, however, bought the Alpha design team and used the SMT technology (albeit to a lesser extent than some would hope for) in the NetBurst architecture.
What else has NetBurst added? The trace cache is a wonderful feature as well. This removes the x86 decode logic from the runtime pipeline for most instructions.
So where can Intel go from here? My hope isn't so much in the multi-core logic that some talk about. While multi-core is interesting, I personally would rather see a wider P4 core (more execution units) with their SMT implementation extended to allow more concurrent threads of execution. A 4- or 8-way SMT processor could show some real results.
And for those of you who are going to question what I'm saying: no, I don't work for Intel. And no, my desktop processor is not an Intel processor either (I run an Athlon 1600 for my workstation). However, in my lab I am working on algorithms designed specifically around SMT processors (as well as cache-aware/prefetching-enabled applications). Intel's processors happen to enable quite a bit of optimization if done properly.
While I never agreed with Intel playing the MHz game, or their ridiculous prices, I would not say that the engineers were completely against the super-pipelining of the NetBurst architecture. While they may have questioned the reasons behind it, the real-world performance gain due to it does exist.
Philip Garcia
Re:At last! Intel realizes that.... (Score:2, Insightful)
MHz does not always = sales.
By some accounts AMD and VIA have up to 40% of the global processor market now.
Re:At last! Intel realizes that.... (Score:4, Insightful)
MHz does not always = performance!
Yes, but only when they have a hard time increasing the clock speed do they "realize" it. It's no coincidence they didn't say this in the days of the 2 GHz Pentiums, but are saying it now... Always spend the minimum effort on improving the architecture when you can just crank up the clock speed and tell your customers it's the best thing to do.
But I guess they've held off on this announcement (it was actually true since the day Intel designed their first microprocessor) because they dreaded the day when they'd have to start explaining why higher clock speeds aren't really everything.
Re:At last! Intel realizes that.... (Score:2, Interesting)
Re:At last! Intel realizes that.... (Score:3, Interesting)
I know it's not a multicore device, but this [orionmulti.com] is an example of what's possible. 36 Gflops @ 220 Watts. (24 Gigs RAM, 1TB storage, $10,000) I want one.
Re:At last! Intel realizes that.... (Score:3, Insightful)
In any event, multi-core will continue to fall short of expected performance, since the software can't handle it.
It has always been this way: never go to dual CPUs until you've maxed out the single one.
Even then, you will find that going multi-CPU is the opposite of the way software companies want to go. They want to pay their programmers less, but if they go multi-core, they will have to pay more for people who can write code to take advantage of it.
Honestly, c
Re:At last! Intel realizes that.... (Score:2)
Whee (Score:2, Insightful)
Re:Whee (Score:2)
Re:Whee (Score:2, Interesting)
On the other hand, the Athlon XP range is End-Of-Lined.
Let's talk Athlon64 and Opteron instead.
Intel will have to put the memory controller onto the CPU sooner or later. If they want to go "Not Invented Here", it's going to cost them $$$ BIG $$$ BUCKS $$$ in cache.
Re:Whee (Score:2)
For starters, if a new RAM type comes out you can't upgrade your board. No siree bob! New CPU for you, buddy! Want to run your CPU in dual-channel mode? Well, you've gotta throw out all your Socket 754 gear and get ready to pay for it all over again for Socket 939 kit.
Secondly, it takes away transistors from the CPU that could be devoted to more cache. Let's face it. No matter how much latency you save, it's a
AMD (Score:5, Insightful)
Hell, Intel has spent DECADES convincing the public that MHz is king, and now they are (once again) following AMD's lead.
HA!
-Charles
That is irrelevant (Score:5, Insightful)
It will be difficult for them to put as much momentum behind another simple metric that the public will understand and by whose measure they will be able to remain the clear leader. They need to come up with another marketing story that pushes yet another metric, again closely tied to their process superiority. I don't know what this is, but I'm sure they have a new story that we will see when they do their multi-core HT rollout.
AMD did not exactly "win" simply because they gave up the MHz war so soon. Yes, they were the first, but they didn't have much of a choice since they knew they could not scale to 65nm process geometry like Intel could. They had to alter their architecture earlier. Intel did not, and it worked in their favor for more years.
It is obvious from the past that Intel's marketing story will never resemble AMD's. They are not "following AMDs lead" unless by that you mean they were able to scale clock speed for a longer time than AMD was.
Re:That is irrelevant (Score:2)
Because everybody needs a 1024 core chip to run 50 threads.
Re:AMD (Score:4, Interesting)
Re:AMD (Score:2)
K8, followed by Intel with P6
AMD64, followed by Intel with EM64T
Yipes! (Score:2)
Seriously though, this seems like just what geeks have been saying for about a decade now -- clock speed isn't the be-all-and-end-all of CPU wars. Looks like Intel is agreeing with us!
--
Free gMail invites! [slashdot.org] (with references from the folks who're already there)
Re:Yipes! (Score:5, Funny)
Sometimes less is Moore.
Re:Yipes! (Score:3, Insightful)
Re:Yipes! (Score:4, Informative)
More precisely: a doubling of silicon's capability, or doing the same at half the size (die space), IIRC.
-nB
Re:Yipes! (Score:3, Informative)
bits (Score:2, Funny)
Re:bits (Score:2, Funny)
Since when is compiling Gentoo not considered a game?
Re:bits (Score:2, Funny)
Re:bits (Score:2)
Re:bits (Score:2)
Good idea! Then we can embed these chips into machines sold to SCO and reprogram their exec's DNA into "weasel"!
Finally (Score:3, Interesting)
It's news, just not big news (Score:5, Insightful)
Re:It's news, just not big news (Score:3, Insightful)
Cache is a big deal; think of the Duron/Athlon difference. The front-side bus is big: 266 and 533 MHz FSB yield different performance metrics. And of course hard drive speeds haven't changed much since ATA100 in the Pentium 2 days. They've changed, but not as much. That's anothe
Re:It's news, just not big news (Score:2)
http://msdn.microsoft.com/Longhorn/Support/lhdevfaq/
It's still too early to determine the final hardware requirements. You can run the developer preview on a typical machine from the past two years, although it's a better exper
Re:Consumers aren't logical (Score:5, Insightful)
What you're not getting here is that it is INTEL that has been behind the clock speed myth. They have spent untold millions (billions??) teaching people that the speed of a computer is best measured by the clock speed of its CPU. For the last decade, that and "Intel Inside" have been their ENTIRE marketing message. The consumers believe that clockspeed matters because Intel is the one that told them so.
Now, for a long time, this has worked really well for them. They pretty much destroyed Cyrix this way, and AMD has been struggling for many years. Cyrix came up with their PR-ratings to try to be competitive, but their chips weren't very good and didn't deliver on their promise, and they sank into obscurity. AMD did the exact same thing with their + ratings, but they were so conservative about them at first that people accepted them. (this gave them some weasel room later, as they have gotten very nearly deceptive with the ratings on some of their CPU lines, particularly the Sempron.) They had to do this because Intel had taught everyone that it was megahertz that counted: AMD couldn't deliver that, just performance. Basically, they got lucky. Had consumers not accepted those ratings as accurate, AMD would probably be gone now. Apple was in the same boat, as well. With a less rabid fan base, they'd be gone too.
Around the time of Rambus, the marketers took over Intel. They realized that the megahertz message was working fabulously well, and apparently decreed that all future engineering efforts in the Pentium line would be oriented around cranking up the clock speed. The engineers delivered what they were told to: a chip that could be scaled a very long way, via a hyperpipelined approach. I believe their first P4 was clocked somewhere around 1.4 GHz, and it was HORRIBLY slow because of the pipelining; a 1 GHz P3 absolutely destroyed it. In other words, the P4 was a big step BACKWARDS from the P3 in nearly every way.
But then they started to crank the megahertz, expecting to leap way out in front of AMD and, once again, dominate everything. (Never mind that it wasn't until the P4 hit about 2.4 GHz and got an 800 MHz bus that it actually started to get good.) RAM speeds in particular had to do a lot of catching up. A hyperpipelined approach suffers terribly from a mispredicted branch: the CPU stalls completely until the pipeline can be refilled, which kills performance. You need the fastest possible RAM to refill the pipeline as quickly as possible. (And this, btw, is why AMD isn't as desperately dependent on fast memory; its pipeline is about half as long as the P4's, and thus it doesn't choke as badly if it guesses wrong about a branch.) [And thanks to Ars Technica for the knowledge to write this last paragraph.]
So all of a sudden, over the last year or so, Intel suddenly ran into a brick wall. Their entire chip design culture is clockspeed, not performance, and abruptly they can't crank clockspeed anymore. This is a BIG DEAL, because they're going to have to tear apart and rework EVERYTHING internally. This blunder is going to cost them billions, and if AMD keeps executing as well as they have recently, they could lose a great deal of marketshare. They are already losing mindshare, since AMD got to specify the instruction set for 64-bit X86.
Intel is in TROUBLE. The focus of their entire company, their raison d'etre, no longer exists. They forgot they were actually about performance. Many of their existing projects will have to be scrapped, and they'll have to reorient most of the company in very short order, while still maintaining morale.
If anything can save them, it's the Pentium-M, which is an extraordinary piece of technology out of their Israeli branch. In many respects, the M is the direction Intel should have gone five years ago.
Can they make up for this vast blunder? It's a good question, but I wouldn't count them out just yet. If the engineers
Re:Consumers aren't logical (Score:3, Informative)
Re:Consumers aren't logical (Score:5, Interesting)
My primary focus at that time was on servers; for pretty much any application you could name, a P3 just spanked a P4 for a long time. Intel even shipped a few 1.4 GHz P3s with double-sized cache, but then stopped when folks realized that this chip significantly outperformed much "faster" P4s. Yes, there were some desktop apps that really benefited from the P4, like video encoding, but as a general-purpose chip, the P4 was inferior for a long time. The double-cache, high-clock-speed P3, which was an EXCELLENT solution for many problems, interfered with the marketing message, and was killed.
Every prior generation of chip was a substantial step forward, particularly up to the Pentium. Every chip through the Pentium II roughly doubled the performance of the fastest chip of the previous generation. The P3 was a significant improvement, but more like a 50% bump. The P4, on the other hand, was a step BACKWARDS; the fastest P4s were slower than the fastest P3s when it shipped, and remained so for quite some time. It wasn't until the front-side bus got to 533 MHz and the clock speed got to about 2.2 GHz that the P4 finally, truly started to win on raw speed... and on value (price/performance), it took longer still. And I'm totally ignoring heat and power, which can be big issues in some circumstances.
It's no mistake that the Pentium M is so darn fast for its clockspeed; it is, essentially, the old P3 architecture with a number of enhancements for low power usage. And it is electrically compatible with the P4. All a motherboard would have to do, in order to support it as a desktop CPU, is provide a different socket. I have no idea why you can't buy desktop boards for the Pentium M, it would be trivial to do. I assume it is, once again, interference with the marketing message.
Had Intel not focused so much on clock speed to the exclusion of all else, they could just start selling Pentium-Ms instead: they're ideally suited for multi-core. But they didn't, and now they have two very large problems at once, both technical and marketing. They have to revamp their engineering approach and re-educate their customers simultaneously, undoing 10+ years of momentum in both areas, without destroying their existing business. Not easy.
The implications of this are exciting (Score:2, Interesting)
Isn't this a full quarter in advance of what we expected? Won't this put their release in the same window as AMDs multi-core release?
Re:The implications of this are exciting (Score:2)
Re:The implications of this are exciting (Score:2)
Bound to happen sooner or later (Score:4, Interesting)
Electrons on copper travel 3 cm per nanosecond. At 4 GHz, each clock cycle the electrons can only travel a theoretical maximum of 0.75 cm. I don't even think that covers the diameter of a single core these days.
You can't turn up the clock much faster than it's already going without getting into nanotechnology. The only viable solution is to optimize chip efficiency through other means, and add more cores to the chip working in parallel.
Re:Bound to happen sooner or later (Score:2, Informative)
Re:Bound to happen sooner or later (Score:5, Informative)
Re:Bound to happen sooner or later (Score:2, Informative)
Re:Bound to happen sooner or later (Score:2)
I think the parent was talking about regular conducting materials above room temperature.
Re:Bound to happen sooner or later (Score:2)
A better argument is that the core is designed in a manner that signals don't have to cross it in a single clock cycle.
Still, we are getting close to the limit of c. Maybe we can double in speed once more, but maybe not...
Re:Bound to happen sooner or later (Score:2)
Re:Bound to happen sooner or later (Score:3, Informative)
Re:Bound to happen sooner or later (Score:5, Informative)
Charge carriers propagate at about the speed of molasses. Go read this website, it is great:
http://amasci.com/miscon/eleca.html#light
Here's an excerpt --
THE "ELECTRICITY" INSIDE OF WIRES MOVES AT THE SPEED OF LIGHT? Wrong.
In metals, electric current is a flow of electrons. Many books claim that these electrons flow at the speed of light. This is incorrect. Electrons actually flow quite slowly, at speeds on the order of centimeters per minute. And in AC circuits the electrons don't really flow at all, instead they sit in place and vibrate. It's the energy in the circuit which flows fast, not the electrons. Metals are always full of movable electrons, and when the electrons at one point in the circuit are pumped, electrons in the entire loop of the circuit are forced to flow, and energy spreads almost instantly throughout the entire circuit. This happens even though the electrons move very slowly.
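The "centimeters per minute" figure in the excerpt is easy to check with the standard drift-velocity relation v = I / (n q A); the 1 A current and 1 mm² wire cross-section below are illustrative choices, not values from the excerpt:

```python
# Electron drift velocity in a copper wire: v = I / (n * q * A).

I = 1.0            # current, amperes (illustrative)
n = 8.5e28         # free-electron density of copper, per m^3
q = 1.602e-19      # elementary charge, coulombs
A = 1.0e-6         # wire cross-section, m^2 (1 mm^2, illustrative)

v = I / (n * q * A)             # drift velocity, m/s (~7e-5 m/s)
cm_per_minute = v * 100 * 60    # ~0.4 cm/min
# So individual electrons really do crawl along at under a centimeter per
# minute, even though the signal energy propagates near light speed.
```

With these numbers the drift velocity comes out well under 1 cm per minute, consistent with the "quite slowly" claim above.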
Re:Bound to happen sooner or later (Score:3, Insightful)
You also have to remember that the whole thing is probably clocked at least partially asynchronously. The on-die L2 cache doesn't need to operate at 4 GHz.
And modern semiconductor manufacturing *is* nanotechnology.
But, your ending conclusion still holds. Without changes in our understanding of physics and/or sub
This is cool, IMHO (Score:3, Insightful)
It just kills ppl when they see my Pentium Pro box keeping up with XP on a P4, for desktop stuff.
Re:This is cool, IMHO (Score:2, Funny)
>It just kills ppl when they see my Pentium Pro box keeping up
Do they die of laughter?
Re:This is cool, IMHO (Score:2)
Big news (Score:3, Insightful)
They've been doing this for a long time; basically all this says is that they're attempting to change the focus of their marketing from clock speed to other measures. I predict that consumers won't like it, and they'll go back to cranking up the marketing clock speeds ASAP.
Yawwn... (Score:2, Flamebait)
Running 2.8 GHz overclocked (air-cooled) to 3.4 GHz here...
About stinkin' time. (Score:2)
About stinkin' time. How much of the typical users' workload is CPU-bound? Let's work on some of the other parts, like through-put to RAM and to disks.
CPU - bound applications (Score:2)
I tried to run Seti@home on a 2.8 GHz P4. Wonderful speed, but that damn fan noise (quiet - Ramp Up - REALLY LOUD - Ramp Down - quiet) bugged me, so I got rid of it.
It's about time. (Score:4, Interesting)
Same RAM, same disk, same video, but a new motherboard.
I *feel* like I'm getting more than a 28% speed boost from it, so it's clearly not just the clock speed that's doing it. Making a chip run faster never was the right idea, and I'm glad to see that they're walking away from that.
Now, if we can just get a core like the Pentium M, but for desktops, then maybe we'll see some real competition.
Re:It's about time. (Score:2)
Really want your peecee to fly? Add Dual Channel DDR RAM matched to your FSB speed and you'll see a much larger boost too.
Re:It's about time. (Score:2)
Re:It's about time. (Score:3, Informative)
Pentium-M mini ATX motherboard from AOpen:
http://www.watch.impress.co.jp/akiba/hotline/2004
Multi-cores (i.e. parallel processing) is clearly (Score:5, Insightful)
Ouch!
Yeah well Intel talks and talks... (Score:2, Interesting)
But...
Let's see what actually happens. There are plenty of previous examples of Intel changing direction, and not always for obvious reasons. Remember Slot 1 and Slot 2, which Intel praised as a superior way to interface CPUs to motherboards as opposed to sockets; when it all came down to it, it was nothing but a stunt to make life harder for competitors.
Could
Re:Yeah well Intel talks and talks... (Score:3, Informative)
Eerily Reminiscent... (Score:4, Insightful)
Strange... (Score:3, Interesting)
Intel says they are going to rely on approaches besides faster clock speed to improve the performance of chips
Strange, I thought the point of the big numbers was to sell more chips, not to make them faster. Wasn't part of the reason that Intel made the P4 pipeline as long as it is so that they could keep cranking the MHz up for a long, long time so they'd have lots of generations of P4 processors to sell? Because I don't think you really need that long a pipeline for purely performance reasons.
I wonder if AMD's inroads into the 64-bit market have Intel a bit scared about the future?
Whuzzat? (Score:5, Funny)
So to sum up:
1) We've realized it's dumb to just keep increasing the clock speed.
2) Buy our new Pentium 4! It's going to have a higher clock speed!
~Philly
Heh, they probably found out... (Score:2)
Banias for desktops? (Score:4, Interesting)
The first incarnation of this is the Banias, also known as the Pentium M. It's basically a P3 pipeline, but with P4 branch prediction (and some other technologies). The P4 has to have very advanced branch prediction in order to even HOPE to get reasonably efficient use of its pipeline. Applying this to the P3's shorter pipeline results in a much higher IPC.
In other words, something philosophically like the Athlon.
Since then, I haven't heard anything about it. And then there's this article. Is there any relationship?
Re:Banias for desktops? (Score:3, Insightful)
This [theinquirer.net] is an interesting development... a P-M mobo for desktops. I personally would love one for an SFF box. But Intel says NO to P-Ms in desktops on a large scale. Wouldn't want to cannibalize all those Prescott sales, would we?
Eff that! (Score:4, Interesting)
I don't want more power, I want a fast enough machine that runs silently.
I guess it's my fault for waiting for Intel to provide this instead of just buying a Mac.
3.8 ghz chip (Score:2)
Intel behind the curve (Score:2, Insightful)
Sigh, Except for 3D Rendering (Score:4, Interesting)
You can build super cheap (except for the processors) computers to use in a render farm (I use LightWave 3D, modo, and Softimage XSI)... and hard drive speed, graphics card speed, memory speed, and on-die cache do nothing to speed up a render once you hit that "Render" button. Sure, SSE extensions and the like do speed it up if the code is optimized... but there isn't really a way to optimize the code for this new direction Intel is going.
Re:Sigh, Except for 3D Rendering (Score:3, Interesting)
Or increase the number of processors. Turning the clock speed up is turning the heat up; I think that's probably the reason behind this announcement. One hyper-fast processor is not better than four medium-speed ones, especially if it draws 2kW and melts down every time the water-cooling pump drops below 95% speed.
TWW
Not just MHz (Score:5, Informative)
didn't AIM say this years ago re: the PPC? (Score:4, Interesting)
But the droids, blinkered by Intel FUD, put their fingers in their ears, sang "lalalalala," and barked "NO - a faster clock speed is a FASTER CHIP!!!"
Now, suddenly: oOooooo - cycles per second isn't as important!
Oh well. It will certainly be very interesting to see what Intel does over the next few years.
Here's an interesting question, related to this topic:
Assuming they go multicore (like IBM and Power[x] chips) what are the limits involved there? What would logically stop the development of multicore chips from increasing their number of cores?
And: What next?
RS
Prescott a failure? (Score:3, Interesting)
Intel Inventory of slow parts (Score:5, Interesting)
Intel released their Q3 results [intel.com] late Tuesday. In their conference call [intel.com] they were evasive about a surprising drop in their tax rate and also about the amount of their inventory writeoff. Intel claimed their inventory was down $43 million to $3.2 billion, with an unspecified writeoff amount. Investors were happy to see inventory did not go up again, and the stock went up Wednesday. In several [marketwatch.com] different [fool.com] articles [marketwatch.com] people are working out the mystery of the writeoff amount. Normally Intel's "cost of sales" is a steady number. Any writeoff will add to this number, so you can estimate the writeoff just by seeing how much it increased. By this calculation, it seems Intel had a writeoff of $472 million.
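The estimation method described here is simple subtraction: the writeoff shows up as a bump in "cost of sales" over its usual run rate. The baseline and reported figures below are hypothetical placeholders chosen only to reproduce the roughly $472 million estimate from the post; they are not Intel's actual numbers:

```python
# Estimating an inventory writeoff from the cost-of-sales bump.
# Both dollar figures are hypothetical; only the ~$472M result
# comes from the post above.

baseline_cost_of_sales_m = 4100   # hypothetical steady quarterly figure, $M
reported_cost_of_sales_m = 4572   # hypothetical reported figure, $M

estimated_writeoff_m = reported_cost_of_sales_m - baseline_cost_of_sales_m
# matches the ~$472 million estimate in the linked articles
```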
Well now for the rest of the PC (Score:3, Interesting)
Low latency and high bandwidth up the wazoo is one aspect that supercomputers for example have over standard pc components, besides massive parallelism of course.
It would be cool to see intel start making inroads from R&D on the memory front. I'm not talking about on-die cache, that is a given. The questions to be answered are how to get the main memory up to snuff with the rest of the system.
If the current state of the art in CPU power stagnated from here until 5 or more years from now, it really wouldn't be an issue if the same effort during that time were put into lower latencies across the whole system architecture itself.
So what am I saying? The CPU has had enough innovation in its current form. It's time to focus on the other lagging components. PCI-X is a step in the right direction, but it is nothing without main memory advances and other mainboard bus architectural improvements.
Re:Well now for the rest of the PC (Score:3, Insightful)
How close are we to the Max clock speed? (Score:3, Interesting)
I have a 3.2 GHz Pentium 4. How far can light travel in one clock cycle at that speed?
186000 miles / 3.2 billion is about 3.7 inches isn't it?
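The arithmetic in the parent checks out; as a quick sketch:

```python
# How far light travels in one clock cycle at 3.2 GHz.
SPEED_OF_LIGHT_MILES_PER_S = 186_000
FREQ_HZ = 3.2e9

miles_per_cycle = SPEED_OF_LIGHT_MILES_PER_S / FREQ_HZ
inches_per_cycle = miles_per_cycle * 5280 * 12   # miles -> feet -> inches
# comes out to roughly 3.7 inches per cycle, as estimated above
```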
Re:So much for Moore's Law (Score:5, Informative)
Re:Seems they are taking a cue from Apple not AMD (Score:5, Insightful)
no
If they wanted to get a cue from Apple, Intel would have switched us all to Open Firmware. They are very much taking a cue from AMD (specifically the original Alpha team that AMD hired for their snazzy new CPUs).
What would a Slashdot story be without the "Apple is the panacea for everything" post? heh
Sunny Dubey
Re:Seems they are taking a cue from Apple not AMD (Score:2)
Re:Seems they are taking a cue from Apple not AMD (Score:2)
Re:moore's law limit (Score:2, Informative)
Those benchmarks don't mention the complexity, nor do they specify the number of transistors [wikipedia.org] on the CPUs, so I don't see how you can draw your conclusion.
Re:Intel's strategy is fairly obvious.... (Score:2)
First time? Isn't this at least their second time?
Re:Emulation (Score:3, Informative)
First, emulation IS "parallelizable". There is usually a decision: emulate, or translate, and if translating, how much optimization to apply. On a single processor machine, this is critical. It may take a great deal of time to translate; less time to emulate. If something is run once (or rarely), it doesn't make sense to translate. We can't afford the overhead.
On an MP (multi processor, or multi-core), we can emulate, and schedule
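The emulate-or-translate decision described above can be sketched as a hotness threshold: interpret a block cheaply until it proves it's run often, then pay the one-time translation cost. The threshold and the single-counter heuristic here are made up for illustration; real binary translators use far more elaborate policies:

```python
# Minimal sketch of an emulate-vs-translate policy.
TRANSLATE_THRESHOLD = 10   # hypothetical hotness cutoff

class BlockRunner:
    def __init__(self):
        self.counts = {}        # block address -> times seen
        self.translated = set() # blocks we've paid to translate

    def execute(self, block_addr):
        """Return which strategy was used for this block."""
        if block_addr in self.translated:
            return "native"                     # run the translated code
        self.counts[block_addr] = self.counts.get(block_addr, 0) + 1
        if self.counts[block_addr] >= TRANSLATE_THRESHOLD:
            self.translated.add(block_addr)     # hot: amortize translation
            return "native"
        return "interpret"                      # rare block: just emulate

runner = BlockRunner()
results = [runner.execute(0x400000) for _ in range(12)]
# the first 9 runs interpret; from the 10th on, the block runs natively
```

On a multi-core machine, the translation step could run on a spare core while another core keeps interpreting, which is the scheduling point the parent is making.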
Re:MHz SmegaHz (Score:5, Insightful)
It's silly for people to think that clock speed doesn't matter, why else would people go through the trouble of overclocking their systems?
Yes, obviously if you increase the clock speed of a particular chip that chip will run faster. Duh. If you push the accelerator of a car further to the floor, the car goes faster. Your point? My Honda still gets better mileage than your Suburban.
You can't use megahertz to compare different chips, such as PPC vs. P4. It's a bullshit metric, and that's why it's worthless.
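The reason raw megahertz can't compare different chips falls out of the basic identity performance = IPC × clock, and IPC differs wildly per design. The IPC figures below are invented purely for illustration:

```python
# Performance = instructions per cycle * clock rate.
def mips(ipc, clock_mhz):
    """Millions of instructions retired per second."""
    return ipc * clock_mhz

deep_pipe_chip = mips(ipc=0.8, clock_mhz=3200)   # high clock, low IPC
short_pipe_chip = mips(ipc=1.6, clock_mhz=2000)  # lower clock, higher IPC
# the "slower" 2000 MHz part retires more instructions per second
```

With these made-up numbers, the 2000 MHz chip outruns the 3200 MHz one, which is exactly why a single-number MHz comparison across architectures is worthless.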
Intel should just bite the bullet and spend some more R&D on alternative active cooling solutions like liquid.
For fuck's sake, why don't you just go down to the beach and club a seal? Intel should be working on making their chips more energy efficient, not ignoring the massive amounts of waste heat and spending development money on idiot liquid cooled solutions. I mean COME ON. Liquid cooling is for things like GIANT PULSE LASERS and other exotic equipment that must be kept extremely cool. The fact that people are using it on microprocessors means that there is something fundamentally very, VERY wrong.
Liquid cooling isn't cool. Not only is it stupid, it indicates your lack of regard for the environment.
Perhaps doing some work increasing the L1 cache sizes would be beneficial.
This is essentially the only thing you've said that makes sense.