Intel Plans CPU Naming Change
Jemm writes "According to The Globe and Mail, Intel will start using performance numbers rather than clock speed to number their chips. 'Under the model number system, processors will be given numbers to describe their performance, in addition to being described as running at 2GHz or other speed.'"
Payback (Score:5, Insightful)
That's great. (Score:4, Insightful)
It might just be time for.... (Score:5, Insightful)
Really, the technical community needs to sit down and figure out a universal cross-platform benchmarking method.
Re:The Megahertz Myth (Score:5, Insightful)
Sounds fine to me. (Score:5, Insightful)
One of the effects I foresee is that consumers (and corporate management) will latch onto Intel's new system and use it to make hasty decisions and brag -- except this time, they have a better chance of being right. In a sense, Intel will have already done the work for them.
I see no problem with a marketing machine that actually helps to dispose of the "Megahertz Myth" in favor of a more accurate measurement of a chip's performance.
Re:The Megahertz Myth (Score:5, Insightful)
Re:It might just be time for.... (Score:5, Insightful)
Great, then we'd get what we have on the graphics card market; two giants spending significant amounts of time to make 3DMark run faster.
There are complexities and tradeoffs.... ah, forget it.
"It doesn't matter." (Score:5, Insightful)
"It doesn't matter."
I realize it sounds trite but these days, it's true. They can buy pretty much any new computer they can find and it's perfectly capable of doing what they want to do because, in truth, what they want to do rarely requires a state of the art machine. Simplifying things further is the fact that computers are getting cheaper and you are getting way more for your money. Buying a new computer isn't the financial hardship it once was.
My mother doesn't care what kind of CPU is in her computer or how fast it is. She just wants to send email to her grandkids and play bridge and she can do that quite happily on a computer she can pick up at Wal*Mart for a few hundred bucks. Power to the people, indeed.
Re:Payback (Score:5, Insightful)
My fear is that this could start an inflationary "speed rating" arms race where the baseline keeps getting changed to pump numbers higher and higher. The AMD system was all well and good when it was more-or-less anchored to Intel processor MHz ratings for comparably performing processors, but what happens when Intel releases the P-IV 4800 ("It's twice as fast as the old 2.4 GHz model!")? Then AMD comes out with the Athlon XP 6000+, then we have the P-IV 7500 ("this is really much faster than AMD's new processor, we swear") model. And so on ad nauseam.
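The inflation worry is easy to put in concrete terms: a performance rating is just a benchmark score divided by some baseline, so quietly shrinking the baseline inflates every number with no hardware change at all. A toy sketch (the scores, baseline, and scale factor are all invented for illustration):

```python
def rating(benchmark_score, baseline_score, scale=1000):
    """Hypothetical PR-style rating: performance relative to a
    reference chip, scaled to look like a clock speed. Shrink the
    baseline and every rating inflates -- no new silicon required."""
    return round(benchmark_score / baseline_score * scale)

chip_score = 2.4  # invented benchmark score for some chip

print(rating(chip_score, baseline_score=1.0))  # honest baseline -> 2400
print(rating(chip_score, baseline_score=0.5))  # "re-anchored" baseline -> 4800
```

Same chip, double the number: exactly the arms race described above.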
Re:Payback (Score:2, Insightful)
Veracity is not always to be found on the Internet, grasshopper. There are some things that are true, but cannot yet be seen.
Re:Precursor to a change in design strategy? (Score:3, Insightful)
I suspect, to be honest, that it has as much to do with Intel's recently announced 64 bit desktop chip foray. Presuming they do something similar to AMD and have more general purpose registers for 64 bit mode, they need a way to recognise and market the advantage that that brings (because it sure doesn't bring any clock speed benefits). That is, this is potentially as much about Intel competing with their own chips as it is with AMD and Apple.
Jedidiah.
Well... (Score:5, Insightful)
And just what the hell are you going to do with all that information, let alone the average consumer? I seriously doubt most of the engineers at Intel or AMD could even take all that information and have a good idea of what SPEC numbers or other benchmarks would look like. At some point, you've got to figure out a way to simplify things so that most people can at least have a rudimentary understanding of what it is they're buying. AMD attempts to do that with the model numbering scheme, which is designed to denote the relative performance of each CPU. Intel is now moving to some sort of similar system, now that clock ramping on the P4 is reaching its limits.
There is no measurement of absolute performance. There is no single number that gives you an honest picture of how things are. You can take 100 benchmarks of different applications, and you'll still have only a relative idea of performance, at best. Intel would be lying if they sold you a chip rated at 2.4GHz, which was only actually running at 1GHz. AMD doesn't mention GHz, and until you can produce a 3GHz Thunderbird core Athlon, their model system is perfectly legitimate.
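One comparatively honest way to boil many benchmarks down to a single figure, and roughly what SPEC does for its composite scores, is a geometric mean of per-benchmark speedups over a reference machine. It is still only a relative number, as the parent says, but a single cherry-picked test can't dominate it. A minimal sketch with invented numbers:

```python
from math import prod

def geomean_rating(speedups):
    """Geometric mean of per-benchmark speedups vs. a reference
    machine (SPEC-style composite). Still a *relative* number,
    but harder to game with one outlier benchmark."""
    return prod(speedups) ** (1 / len(speedups))

# Invented speedups over a reference chip on four workloads:
print(geomean_rating([2.0, 2.0, 0.5, 2.0]))  # ~1.41 (a plain average would claim 1.625)
```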
Re:Problem.. (Score:5, Insightful)
I don't think anyone can blame AMD for the switch and I think perhaps a standard benchmark/rating system might be in order.
Probably not realistic, but it would be nice.
Cheers
Re:It might just be time for.... (Score:5, Insightful)
That'd be nice, but the real world doesn't work so well in this regard. The platforms are different enough that they all have different strengths. Your 300fps in Quake3 doesn't tell me squat about how fast Lightwave will render. If a program's optimized for one platform but not another.. well shoot, there's another problem that a benchmark really can't provide much insight into.
I'm sick of benchmarks these days. Computers have too many little things going on that affect the overall result. The solution? There needs to be a broadening of what your computer does. Maybe voice recognition is the next big bfd. Maybe it's a flashy new interface that requires a lot more graphical power. Maybe it's getting more people interested in 3D rendering. Heck, I dunno.
I do know that my 'underpowered' laptop I'm writing this message on is still going strong and is still quite useful to me. I can't think of anything off the top of my head (save for a few games I suppose, but I'm more of a console gamer anyway) that this thing won't do in some form. Heck, I bought it because the LCD runs at 1600 by 1200.
Maybe the next big thing isn't how fast the processor is, but how many you have running. I wouldn't mind having a render farm here.
Re:Problem.. (Score:5, Insightful)
If BOTH of them start these arbitrary rating systems, we won't even have THAT small bit of stability. Intel could easily release a '6000+' processor tomorrow with no regard to clock speed. AMD would have to follow suit, and on it goes.
Re:Problem.. (Score:5, Insightful)
The marketing departments of both companies are going to have a field day.
Re:Problem.. (Score:4, Insightful)
At least with MHz it was harder to fake, but Intel managed to increase clock speed without actually getting much more performance, so they managed to game even that system.
Re:Problem.. (Score:5, Insightful)
The problem is that AMD and Intel custom design their chips to perform better at different tasks/instructions. Then there is the problem of compilers. Was the SpecIntBase compiled with AMD and/or Intel specific instructions? Which versions? Is SSE2 faster on Intel than AMD? Was 3DNow substituted for a few SSE instructions in the benchmark? Did the newest version of Lightwave 3D take any of this into account? This type of thing can make a HUGE difference in performance.
I don't think there's any simple way through this other than common-program benchmarking, and even then there will be a lot of misleading (and often wrong) results.
Re:Problem.. (Score:5, Insightful)
The bad news: They will run like 4 GHz models.
A 4GHz Itanium, Pentium M, Alpha, UltraSPARC, or any other of the lower clock speed processors would rate well beyond a 5000+ Pentium. The article said that the Pentium M, which is a great processor, is having trouble in the marketplace because people are used to the Hz rating. This will become even more of an issue with multiprocessor systems and multicore processors, or with technologies like hyperthreading.
This has been done for years with cars. There are horsepower measurements displayed on car ads all the time. Of course there are many other performance measures like 0-60 times, torque, braking, etc. But those are usually only reported in enthusiast magazines (read: car geek stuff, like we are computer geeks).
I think this is going to be welcome by average consumers, but us geeks are still going to read Tom's Hardware and other media that are full of benchmarks and other performance measures.
Re:This may suggest that Moore's law is at it's en (Score:4, Insightful)
Yeah, you can do that when you do a complete core overhaul. Going from Northwood to Prescott is a fairly large change, but nowhere near as big a change as going from the PIII to the P4.
"But now we have the 31 stage Prescott and about the same clock rate. Something has changed."
If Intel thought it could keep bumping the clock rate up, they wouldn't move to something like AMD's performance rating. Yet here we are.
What has changed is that Intel is having problems with the 90nm process: Prescott produces massive amounts of heat, the LGA 775 socket isn't going to solve those problems enough to ramp Prescott beyond 4GHz (if even that high), and the changes being made with the introduction of IA32-64 (aka AMD64) will give processors a pretty decent bump in performance.
Intel knows now that clock frequency ramps have limits. Sure, Bob Colwell told them as much when the P4 was being designed, but now they're actually slamming into walls of fire (heat). Right this second, they're not in such a serious situation that changing to performance ratings is necessary, but they will be fairly soon. Thus, if they do it now, it looks like a new initiative to give Intel an advantage in the marketplace. If they wait until their backs are against the wall, it looks like Intel is struggling to keep up and has lost its edge in the marketplace.
You see now why this is being done? It's just management finally starting to get a little smarter.
Re:Payback (Score:1, Insightful)
Re:Well... (Score:4, Insightful)
It is listed, in whitepapers. We're talking about marketing to the masses here. Tell me, do you think you can walk into a coffee shop and talk to the gal behind the counter about speculative execution for more than 10 seconds without getting her confused and bored? There's a fraction of a small percentage of people in this world who are capable of understanding all the parts of processor design. By confusing average folk with technical data, you're lying to them just as much as you are by using performance ratings. I'll bet I could go into detail about the original Pentium's design, explain all the things that were done to up the performance in really simple terms, and get a bunch of people excited about buying it so long as I never tell them its name.
Think about that for a moment - if I can sell a Pentium 200MHz system to a room full of people who could buy a Pentium 4 for the same price simply by talking up the complicated design specifics, am I any more honest than Intel is with its MHz listings, or AMD with its performance ratings?
Re:The Megahertz Myth (Score:2, Insightful)
Re:Ok, now this just pissess me off (Score:2, Insightful)
So why was my 25 MHz DX Pentium faster than the 33 MHz ones that came out after it, as well as the 66, 75, and most 100 MHz models?
Maybe it was because the 33+ MHz machines all had an extra wait state to hit memory that mine didn't have? Some of those computers did some benchmarks slightly faster, but Windows apps were slower.
Re:Well then... (Score:3, Insightful)
With this announcement, it looks like they're finally giving in and doing the sensible thing.
Re:Ok, now this just pissess me off (Score:5, Insightful)
Except, of course, that this isn't true either. True, MHz means something, but it's not even a good indicator within a processor line.
A 1000MHz processor will only be twice as fast as a 500MHz processor if the RAM and the peripherals are ALSO twice as fast. Otherwise, it depends entirely on the workload whether the processor is faster. If your computer is basically just loading data from disk, copying it from one place to another with a simple transform, and sending it to the network or something similar, the 1000MHz processor may not be faster at all with the same RAM! In fact, it could even be slower, if to get the right multiplier for the CPU, the front side bus speed was actually reduced (that does happen quite often) and hence the RAM runs slower!
On the other hand, if your computer simply runs a tiny program (a few k) that fits entirely in the L1 cache, and almost never talks to main RAM or the peripherals, then it may in fact run twice as fast when you double the clock speed.
In reality, real programs are somewhere in between, so to figure out whether it's worth it to get a faster processor or to, e.g., buy more RAM instead, or faster RAM, or a 15krpm SCSI disk, or whatnot, you have to figure out what your computer is going to be doing and estimate accordingly. Or even better, test the actual machine out to see how fast it is before you buy a lot of them.
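The tradeoff described above is essentially Amdahl's law: only the CPU-bound fraction of a job scales with clock speed. A back-of-envelope model (the fractions below are invented for illustration):

```python
def clock_speedup(cpu_fraction, clock_ratio):
    """Amdahl-style estimate of overall speedup when only the
    CPU-bound fraction of runtime scales with the clock; disk,
    RAM, and network time are assumed unchanged."""
    return 1 / ((1 - cpu_fraction) + cpu_fraction / clock_ratio)

# Doubling the clock (e.g. 500 MHz -> 1000 MHz):
print(clock_speedup(0.9, 2.0))  # mostly CPU-bound job: ~1.82x
print(clock_speedup(0.1, 2.0))  # mostly I/O-bound job: ~1.05x
```

So the "twice as fast" chip delivers anywhere from almost nothing to nearly 2x, depending entirely on the workload.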
Re:Not entirely true (Score:5, Insightful)
This is true if your benchmark (or something) is able to effectively isolate the CPU. Otherwise, you have to start worrying about bus latency, page faults, and the speed of everything else in your computer.
There's also a myth that CPU performance equates to the performance of an entire computer. This one has folks going out and buying all-new computers when what they really needed to do was buy more RAM or uninstall RealPlayer, Gator, that weather program, etc.
This myth is definitely supported by Intel, which likes to run ads that imply that buying a Pentium MCCXVI processor will help you get better audio and video streams on that computer that's still dialing into AOL with a 28.8 modem.
We are talking about CPU speed (Score:5, Insightful)
Ya, it's not the be-all, end-all number. I noted that. The problem is that there is the thinking that somehow a BSified PR number will somehow be better. Errr, no. I'd prefer that all my components be rated in real, factual, terms. I can then use those to make SOME kind of meaningful comparison. I want to buy a 7200rpm harddrive, not a PR 12000+ harddrive. I want to buy 1024MB of RAM, not PR 3500+ of RAM.
Going to BS PR numbers improves NOTHING. You are still faced with the situation of picking which part you need to improve, only now, it's difficult to make any kind of sensible comparison.
well... (Score:5, Insightful)
whenever i had to consult people about their pc purchases, i found the best way to get them to understand was basically the 3 parts of the cpu: MHz, bus speed, and cache memory.
your cpu is a vehicle. the MHz is the speed the vehicle can carry stuff from one place to another (this is what you are buying this vehicle to do - moving stuff), the bus speed is how fast you can load your stuff onto your vehicle, and the cache memory is the amount of stuff the vehicle can carry.
then i go on to explain what's the point in having vehicle A that can go 1.5 times faster than vehicle B, when vehicle B can carry twice as much stuff each trip. in the end vehicle B is the one that gets more done.. until you get into things like: it doesn't matter how fast vehicle A can go, if vehicle B can be loaded and on its way and back in the same time that A is still being loaded (bus speed).
it's probably not the most refined explanation, but it's the way i've talked many people into getting Athlons instead of Celerons, and in the end getting a better computer (dunno about the states but up here i can get an XP2200 for about the same price as a Celeron 2GHz -give or take $5- and we're talking a HUGE difference in performance)
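The vehicle analogy can even be put into numbers: throughput is payload per round trip divided by round-trip time, so a slower but bigger "vehicle" can move more stuff overall. A toy sketch (all figures invented):

```python
def throughput(payload, load_time, travel_time):
    """Stuff moved per unit time: payload carried per round trip,
    divided by the time one round trip takes (loading + travel)."""
    return payload / (load_time + travel_time)

# Vehicle A is 1.5x faster on the road; Vehicle B carries 2x as much.
a = throughput(payload=1.0, load_time=2.0, travel_time=2.0)
b = throughput(payload=2.0, load_time=2.0, travel_time=3.0)
print(a, b)  # B gets more done per hour despite being slower per trip
```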
Re:Payback (Score:1, Insightful)
Apple isn't developing the PowerPC, IBM is. So, if anybody matters in the non-x86 CPU game, it's IBM. Apple is basically just an upscale systems integrator.
This will translate into more sales as Apple is now finding out with significant interest in the G5 Xserve from a large number of corporations and government agencies.
Maybe the interest appears large by Apple standards, but in the market overall, Apple's Xserve and G5-based machines are niche machines and they don't really offer compelling performance advantages--high-end Opteron and P4 systems have similar SPECmarks at similar prices. And OS X is severely handicapped in the market relative to Linux and Windows--OS X just isn't used very widely as a server operating system.
So, if Intel can get around some of the performance bottlenecks and deal with the loss of backwards compatibility, they may be able to get back on track.
Intel did miscalculate with Itanium. But the threat to Intel is AMD, not PPC.
Re:Pentium M (Score:4, Insightful)
I hope you also explained that he got the same, if not more, power as an Intel P4 3GHz, for a cheaper price. It would be silly to educate people about what AMD ratings are not, without explaining what they really are.
Re:Problem.. (Score:4, Insightful)
Re:Payback (Score:3, Insightful)
Right, and for sand the teaspoons might be more efficient because less sand slips off them, but for dirt the shovel might be better.
That's the whole point, it's not how quickly the processor cycles or even how much the processor does in one instruction. Rather, it's how well the processor works for some common tasks. In order to totally judge several processors you first have to test them in several different ways and then you can say, "In general, processor X is good for modeling climate because it handles floating points well and processor Y is good for image processing because it handles integers well."
This means that often there will be no one clear winner in a processor comparison, and it may just come down to what you need the chip for and how well you understand how to use it. Right now, however, you have Intel pushing the idea that a high clock-rate processor is all that matters. This is misleading because most of the high-clockrate processors achieve that clock rate by accepting the risk of branch mispredictions and by taking multiple cycles per instruction. These sorts of things have an extremely negative effect on performance, so much of the clock speed is wasted.
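The "no one clear winner" conclusion falls out directly if you score chips per workload instead of overall. A hypothetical example with invented benchmark scores for two made-up chips:

```python
# Invented per-workload scores for two hypothetical chips: one
# strong at floating point, the other strong at integer work.
scores = {
    "climate_modeling": {"chip_x": 9.1, "chip_y": 6.4},  # FP-heavy
    "image_processing": {"chip_x": 5.8, "chip_y": 8.7},  # int-heavy
}

for workload, results in scores.items():
    winner = max(results, key=results.get)
    print(f"{workload}: {winner} wins")
```

Each chip wins one category, so a single composite number would hide exactly the information a buyer with a specific workload needs.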
Re:Scalability (Score:3, Insightful)
In the words of my computer architecture prof (Score:5, Insightful)
Re:The problem is (Score:3, Insightful)
Should they? No.
Will they? Inevitably, yes. It sells more product.
Horsepower in cars is one example, but I think a better one is home stereo systems. Things have been getting better lately because the industry has started to regulate itself, but it's still not uncommon to see 2000 WATTS in huge letters on a boombox that may be able to pump out 50. The worst example of this I've seen is a pair of $15 computer speakers labelled 1000W. They just take the largest voltage they can pump through the speakers and the largest current they can handle, multiply them together, and write this number on the box. Never mind the fact that the max voltage and max current either a) can't actually happen at the same time (as in the 1000W case) or b) can only be sustained for milli- or micro-seconds in a laboratory environment, while playing a perfect sine wave.
But just as these stereo systems have the bullshit P.M.P.O. ratings, there is always, somewhere on the box, a true RMS value as well. Likewise, even though an AMD processor is labelled 2400+, it still says that it's 2.0GHz @ 266 DDR. Engine manuals state not only horsepower, but torque, maximum RPM, etc, etc... This is for those of us in the know who use these real, informative values to decide what to buy.
As to your example, yes the P4 8000 -does- mean something. It means the CPU is running at 4GHz (/2). The point is that these bullshit P.R. numbers will always translate to, or be accompanied by, real values.. and if they're not, vote with your wallet and don't buy from that manufacturer.
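The "PR numbers always translate back" claim can be written down directly: if the vendor publishes the conversion, the real clock is recoverable from the model number. A sketch of the hypothetical P4 8000 scheme above (the divide-by-two rule is this commenter's assumption, not an announced Intel formula):

```python
def real_clock_ghz(model_number, divisor=2000):
    """Recover the actual clock from a hypothetical marketing
    model number, given a published divisor. Under the assumed
    scheme, 'P4 8000' means 8000 / 2000 = 4.0 GHz. Illustrative
    only -- no vendor has published such a formula."""
    return model_number / divisor

print(real_clock_ghz(8000))  # -> 4.0
```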
Re:Payback (Score:2, Insightful)
Unlike a "slippery slope" argument, I am not starting from the proposal of a small but reasonable compromise or exception to the rules and then concluding that all the rules might be thrown out next. I am starting with the proposal that "the rules" (in the context of our discussion) are being thrown out and simply observing the likely outcome when the motivations of the involved parties are taken into account.
Re:Problem.. (Score:5, Insightful)
A 2GHz Celeron performs as fast as a 2GHz P4, right?
I think this train left the station a long time ago.
you are an Apple marketing victim (Score:4, Insightful)
Let's look at some of your claims:
Your other examples either refer to system integration issues (e.g., supposed first use of a 3 1/2" floppy--developed by Sony), or are vague and meaningless from a technological point of view.
For a few years, Apple had an R&D department that actually published a little and was fairly high quality. However, I can't think of any fundamental breakthroughs that came out of that, and they disappeared again in the mid-1990's.
In addition to demonstrating your ignorance, I find your posting just offensive: I actually know some of the people who developed the technologies you talk about and I assure you that they didn't work at Apple when they did it. For their own financial gain, Apple has deliberately created the impression that they invented a lot of things that they didn't invent at all--and you fell for that dishonest marketing. Read up on the history of computing--you'll be surprised what you find.
So, Intel "admits" that there is MHz myth? (Score:1, Insightful)
Re:Check out some of those TPC results (Score:3, Insightful)
The cost of software is a rather small part of the cost for a TPC score. Even on the "cheap" systems (the cheapest system on that top-10 list costs $32,772, and most cost about $50,000), hard disks are the dominant cost factor.
Perhaps an interesting flip-side to this argument is to look at the list of fastest systems overall [tpc.org].
Linux fanboys will be happy to know that their OS powers the most powerful system in this test (albeit through the use of a cluster, while a known weakness of the TPC-C test is that clusters can produce somewhat unrealistically good results), while MS only appears in 3 of the top-10 systems. IBM's AIX is the most common operating system (4 systems) while Oracle is the most common database (also 4 entries). Linux fanboys may actually have good reason to show off this first-place result though, because with a system cost of $6.5M, HP almost certainly wasn't using the free OS for any sort of price advantage. Rather, it may offer a performance advantage over Microsoft or even HP's own HP-UX.
For a supposedly clueful forum (Score:3, Insightful)
By and large these hardware sites know absolutely fuck all about anything except advertising revenue and click thru.
I'm sat here typing this on a P4 / 2.6GHz / 800MHz FSB / A-bit box; prior to this it was an XP1900+ / A-bit box. Why the switch? Intel is FAR quieter as well as representing a big jump in performance... sure, I could have gotten damn similar performance from an overclocked XP2500+, at the expense of cpu core MTBF and at the expense of my fucking ears being assaulted by fans whining away.
At the end of the day it makes no odds on the desktop; my cpu, like most of them, spends most of its life at 5% utilisation, and in the server only a fool would use a cpu with a lower standard of thermal management than Intel's.
(I still miss my old cobalt raq2 that didn't even require a bloody CPU heatsink, much less heatsink and fan...)
Re:you are an Apple marketing victim (Score:2, Insightful)
The Xerox Star shipped in 1981, two years before the Lisa. It had a GUI, Ethernet, WYSIWYG editing, printed to laser printers, and was used by office workers.
PARC "invented" the laser printer,
Why do you put that in quotes? Unlike the stuff coming from Apple, the laser printer really was a ground breaking, new technology: a completely new approach for putting ink on paper under computer control.
but it was Apple who heavily underwrote a new company by the name of Adobe and co-developed the laser printer for use with the personal computer.
So, Apple financed product development based on technologies developed elsewhere.
I'll give you that technically, but I used an early Psion in 1986 or so and it was not really a functional information manager. The Newton 120 that I owned a couple of years later was a true PDA that allowed for word processing, information management, communication for email and early Internet via modem and IR, and more.
The Newton was basically a shrunk-down pen-based computer--nothing new there, only better product design. As for PDAs, PARCTAB was much closer to modern PDAs and predates the Newton.
Laptop form factor!(not laptop) with palm rests in front of a full sized keyboard with trackball or (later) trackpad was the innovation there. All of the previous laptops I have owned have been awkward with keyboards up front with no place to rest your hands and no pointing device integral to the laptop.
The Atari Stacy had an integrated pointing device in 1989, several years before the first Powerbook. The integral wrist rests on the Powerbook may have been a new design feature, but Apple itself has moved away from them and moved the keyboard forward again, with just enough room to accommodate the trackpad (which, incidentally, also was not invented by Apple).
"The Apple II was irrelevant to speech recognition research and development." My point still stands: the first speech synthesis was developed on the Apple ][ years before anybody else had it.
The Apple II was also irrelevant to speech synthesis. The history of electronic speech synthesis goes back to the 1930's. By the time Apple appeared on the scene as a company, people already had a sophisticated algorithmic understanding of how to process speech on computers. Apple made no ground-breaking contributions to speech synthesis, and they never shipped anything that was even close to state-of-the-art in either area.
Consumer digital camera! is what I said. I remember the MavicaPro series and they were hideously expensive. The Quicktake was actually affordable by the consumer.
Again, that's system integration. The underlying technologies (CCD, flash, DSP) were developed elsewhere and the components were produced elsewhere. Even the design came from Sony. All Apple did was to time things right and to cut enough corners to be able to ship a digital camera at a marginally acceptable price for a brief period.
I [...] am grateful that Apple began shipping computers with CD-ROM drives in them for just this reason.
CD-ROMs had been used as a software distribution medium by others. Contrary to what you may think, Microsoft and Apple weren't the first companies to ship bloatware--UNIX vendors had them beat by many years.
Plug and play compatibility is something that is also a huge time saver.
Too bad that Apple didn't invent it. NuBus came from MIT and was commercialized by TI before Apple picked it for the Macintosh II. Again, Apple's role was that of systems integrator.
First to include built in networking is meaningless? There is this thing you are using called the Internet.........
Not like AMD's system! (Score:3, Insightful)
Instead, Intel's going to take something like "800 MHz FSB, 1MB L2 Cache" and make that a number. Of course the higher numbers will be those that should perform better, but that's always how it is with model numbers.
In my opinion this can only be a good thing, because instead of having to know the difference between P4 A/B/C/E, there'll be a number that encapsulates the non-clock-speed-related statistics.
In any case, these numbers are not intended to compare Intel chips to other manufacturers', but rather to tell the different P4s running at 3.2GHz apart (for example).
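If the number really is just an encoding of the non-clock specs, one can imagine it being generated mechanically from FSB, cache, and clock. A purely speculative sketch; Intel published no such formula, and the weights below are invented:

```python
def model_number(fsb_mhz, l2_cache_kb, clock_ghz):
    """Speculative model-number encoding: higher FSB, more cache,
    or a faster clock all raise the number. The weights are
    invented for illustration, not anything Intel announced."""
    return round(clock_ghz * 1000 + fsb_mhz / 2 + l2_cache_kb / 4)

# Two hypothetical 3.2 GHz P4s differing only in FSB and L2 cache:
print(model_number(fsb_mhz=800, l2_cache_kb=1024, clock_ghz=3.2))
print(model_number(fsb_mhz=533, l2_cache_kb=512, clock_ghz=3.2))
```

The two chips share a clock speed but get distinct numbers, which is all such a scheme needs to accomplish.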
Re:Well... (Score:3, Insightful)
Great, I run Maya. Now, does my exclusive 15 minutes include the 5+ hours it's going to take to send the software to them? Also, will Intel indemnify me against the makers of Maya for any copyright infringement suits that come from my sending it to Intel in violation of the licensing? Also, do I get to custom-configure the memory, hard drive, video card, power supply, mainboard, etc. in the computer to my exact specifications so as to get an accurate picture of the performance I'd see under my specific system configuration?
It's a decent idea, but unworkable in the real world.