Intel Plans CPU Naming Change

Jemm writes "According to The Globe and Mail, Intel will start using performance numbers rather than clock speed to number their chips. 'Under the model number system, processors will be given numbers to describe their performance, in addition to being described as running at 2GHz or other speed.'"
  • Payback (Score:5, Insightful)

    by BWJones ( 18351 ) * on Saturday March 13, 2004 @07:55PM (#8555217) Homepage Journal
    Ahhhh, I am sure it will be said again here, but payback is in order. This sort of marketing angle will only go so far, though, as Apple and AMD have found out. What really matters is real power [apple.com]. This will translate into more sales, as Apple is now finding with significant interest in the G5 Xserve from a large number of corporations and government agencies. So, if Intel can get around some of the performance bottlenecks and deal with the loss of backwards compatibility, they may be able to get back on track.

  • That's great. (Score:4, Insightful)

    by JustinXB ( 756624 ) on Saturday March 13, 2004 @07:56PM (#8555232)
    They go from lying to you subliminally to lying to your face.
  • by kc0dby ( 522118 ) on Saturday March 13, 2004 @07:57PM (#8555240) Homepage
    It might just be time for a standard.

    Really, the technical community needs to sit down and figure out a universal cross-platform benchmarking method.

  • by Anonymous Coward on Saturday March 13, 2004 @07:58PM (#8555258)
    You don't believe Intel's FUD but happily believe and spread AMD's FUD? Good job :rolleyes:
  • Sounds fine to me. (Score:5, Insightful)

    by Faust7 ( 314817 ) on Saturday March 13, 2004 @08:00PM (#8555274) Homepage
    The planned system, which would focus on the chips' overall performance and de-emphasize how fast its chips run,

    One of the effects I foresee is that consumers (and corporate management) will latch onto Intel's new system and use it to make hasty decisions and brag -- except this time, they have a better chance of being right. In a sense, Intel will have already done the work for them.

    I see no problem with a marketing machine that actually helps to dispose of the "Megahertz Myth" in favor of a more accurate measurement of a chip's performance.
  • by Herbster ( 641217 ) * on Saturday March 13, 2004 @08:01PM (#8555283)
    Yes, but now we'll run into trouble because Intel's Performance Rating will be artificially larger than AMD's - I can't imagine Intel giving any CPU a lower PR than its MHz figure!
  • by eddy ( 18759 ) on Saturday March 13, 2004 @08:02PM (#8555288) Homepage Journal

    Great, then we'd get what we have on the graphics card market; two giants spending significant amounts of time to make 3DMark run faster.

    There are complexities and tradeoffs.... ah, forget it.

  • by gklinger ( 571901 ) on Saturday March 13, 2004 @08:16PM (#8555526)
    That's what I say when non-technical friends and family ask me questions about what kind of computer they should buy.

    "It doesn't matter."

    I realize it sounds trite but these days, it's true. They can buy pretty much any new computer they can find and it's perfectly capable of doing what they want to do because, in truth, what they want to do rarely requires a state-of-the-art machine. Simplifying things further, computers are getting cheaper and you get way more for your money. Buying a new computer isn't the financial hardship it once was.

    My mother doesn't care what kind of CPU is in her computer or how fast it is. She just wants to send email to her grandkids and play bridge and she can do that quite happily on a computer she can pick up at Wal*Mart for a few hundred bucks. Power to the people, indeed.

  • Re:Payback (Score:5, Insightful)

    by Fnkmaster ( 89084 ) on Saturday March 13, 2004 @08:18PM (#8555559)
    Payback? No, acknowledgement that the numeric marketing angle works and that they are getting beat out on price/performance by AMD.


    My fear is that this could start an inflationary "speed rating" arms race where the baseline keeps getting changed to pump numbers higher and higher. The AMD system was all well and good when it was more-or-less anchored to Intel processor MHz ratings for comparably performing processors, but what happens when Intel releases the P-IV 4800 ("It's twice as fast as the old 2.4GHz model!")? Then AMD comes out with the Athlon XP 6000+, then we have the P-IV 7500 ("this is really much faster than AMD's new processor, we swear") model. And so on, ad nauseam.

  • Re:Payback (Score:2, Insightful)

    by Anonymous Coward on Saturday March 13, 2004 @08:18PM (#8555571)
    Link, please?

    Veracity is not always to be found on the Internet, grasshopper. There are some things that are true, but cannot yet be seen.

  • by Coryoth ( 254751 ) on Saturday March 13, 2004 @08:27PM (#8555695) Homepage Journal
    This change in CPU naming might indicate a recognition that its rivals may overtake it in clockspeed. Perhaps they're planning strategic changes that could take them below Apple or AMD in clockspeed and want to jump on the "clockspeed ain't everything" bandwagon as soon as they can.

    I suspect, to be honest, that it has as much to do with Intel's recently announced 64 bit desktop chip foray. Presuming they do something similar to AMD and have more general purpose registers for 64 bit mode, they need a way to recognise and market the advantage that that brings (because it sure doesn't bring any clock speed benefits). That is, this is potentially as much about Intel competing with their own chips as it is with AMD and Apple.

    Jedidiah.
  • Well... (Score:5, Insightful)

    by Loki_1929 ( 550940 ) on Saturday March 13, 2004 @08:35PM (#8555831) Journal
    Just out of curiosity, what would you have them do? Are you saying that any time Intel or AMD wants to show you a CPU, they should list clock frequency, L1, L2, and L3 cache sizes, each of their individual latencies, main memory latency, clock multiplier, average IPC, number of pipeline stages, instruction set extensions (SSE, PowerNow, etc), architectural information, die process size, average and max heat dissipation figures, speculative execution capabilities, out-of-order operation specs, core stepping and revisions, a picture of the actual die, and about 10,000 other things that contribute to performance?

    And just what the hell are you going to do with all that information, let alone the average consumer? I seriously doubt most of the engineers at Intel or AMD could even take all that information and have a good idea of what Spec numbers or other benchmarks would look like. At some point, you've got to figure out a way to simplify things so that most people can at least have a rudimentary understanding of what it is they're buying. AMD attempts to do that with the model numbering scheme, which is designed to denote the relative performance of each CPU. Intel is now moving to some sort of similar system, now that clock ramping on the P4 is reaching its limits.

    There is no measurement of absolute performance. There is no single number that gives you an honest picture of how things are. You can take 100 benchmarks of different applications, and you'll still have only a relative idea of performance, at best. Intel would be lying if they sold you a chip rated at 2.4GHz, which was only actually running at 1GHz. AMD doesn't mention GHz, and until you can produce a 3GHz Thunderbird core Athlon, their model system is perfectly legitimate.

  • Re:Problem.. (Score:5, Insightful)

    by Anonymous Coward on Saturday March 13, 2004 @08:36PM (#8555844)
    How is that any worse than it already is? You already need benchmarks to see which processor is best for your application. It's not like naming processors after how many GHz they run at is any better.
  • Re:Problem.. (Score:5, Insightful)

    by Dalcius ( 587481 ) on Saturday March 13, 2004 @08:39PM (#8555889)
    It's important that numbers be sane, but when a ~2gig AMD chip can run with Intel chips clocked at a much higher speed, something needs to be done to let the public know in a non-technical fashion.

    I don't think anyone can blame AMD for the switch and I think perhaps a standard benchmark/rating system might be in order.

    Probably not realistic, but it would be nice.

    Cheers
  • by NanoGator ( 522640 ) on Saturday March 13, 2004 @08:39PM (#8555900) Homepage Journal
    "Really, the technical community needs to sit down and figure out a universal cross-platform benchmarking method."

    That'd be nice, but the real world doesn't work so well in this regard. The platforms are different enough that they all have different strengths. Your 300fps in Quake3 doesn't tell me squat about how fast Lightwave will render. If a program's optimized for one platform but not another... well, shoot, there's another problem that a benchmark really can't provide much insight into.

    I'm sick of benchmarks. Computers have too many little things going on that affect the overall result. The solution? There needs to be a broadening of what your computer does. Maybe voice recognition is the next big thing. Maybe it's a flashy new interface that requires a lot more graphical power. Maybe it's getting more people interested in 3D rendering. Heck, I dunno.

    I do know that my 'underpowered' laptop I'm writing this message on is still going strong and is still quite useful to me. I can't think of anything off the top of my head (save for a few games I suppose, but I'm more of a console gamer anyway) that this thing won't do in some form. Heck, I bought it because the LCD runs at 1600 by 1200.

    Maybe the next big thing isn't how fast the processor is, but how many you have running. I wouldn't mind having a render farm here.
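NanoGator's point that one benchmark can't stand in for another can be sketched with invented scores (a toy illustration; the machines and numbers below are made up, not measurements):

```python
# Two hypothetical machines, two workloads: each wins one benchmark,
# so no single rating number can rank them for every buyer.
scores = {
    "machine_a": {"quake3_fps": 300, "lightwave_render_score": 50},
    "machine_b": {"quake3_fps": 220, "lightwave_render_score": 90},
}

best_for_gaming = max(scores, key=lambda m: scores[m]["quake3_fps"])
best_for_rendering = max(scores, key=lambda m: scores[m]["lightwave_render_score"])
print(best_for_gaming, best_for_rendering)  # machine_a machine_b
```

Whichever machine a single number crowns "fastest" depends entirely on which workload the number was derived from.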
  • Re:Problem.. (Score:5, Insightful)

    by Chordonblue ( 585047 ) on Saturday March 13, 2004 @08:41PM (#8555934) Journal
    The difference is that as of now, we at least have one platform (Intel) accurately stating clock speed. AMD generally keeps their performance ratings close to Intel's; however, they have stretched the meaning of their 'XXXX+' designations where Intel simply could not.

    If BOTH of them start these arbitrary rating systems, we won't even have THAT small bit of stability. Intel could easily release a '6000+' processor tomorrow with no regard to clock speed. AMD would have to follow suit, and on it goes.

  • Re:Problem.. (Score:5, Insightful)

    by Chordonblue ( 585047 ) on Saturday March 13, 2004 @08:44PM (#8555991) Journal
    No, I don't blame AMD (except for the questionable ratings of a few of their later Athlon XP's), HOWEVER, without a stable GHz metric to build off of, things are bound to get messy.

    The marketing departments of both companies are going to have a field day.

  • Re:Problem.. (Score:4, Insightful)

    by GigsVT ( 208848 ) on Saturday March 13, 2004 @08:44PM (#8556002) Journal
    They would just game the benchmark. What you'd get would be CPUs that were very good at benchmarks, and not so hot at other stuff.

    At least with Mhz it was harder to fake it, but Intel managed to increase clock speed without actually getting much more performance, so they even managed to play that system.
  • Re:Problem.. (Score:5, Insightful)

    by Chordonblue ( 585047 ) on Saturday March 13, 2004 @09:05PM (#8556359) Journal
    "Name chips based on the SpecIntBase score and be done with it!"

    The problem is that AMD and Intel custom design their chips to perform better at different tasks/instructions. Then there is the problem of compilers. Was the SpecIntBase compiled with AMD and/or Intel specific instructions? Which versions? Is SSE2 faster on Intel than AMD? Was 3DNow substituted for a few SSE instructions in the benchmark? Did the newest version of Lightwave 3D take any of this into account? This type of thing can make a HUGE difference in performance.

    I don't think there's a simple way through this at all other than common program benchmarking and even then there will be a lot of misleading (and often wrong) results.

  • Re:Problem.. (Score:5, Insightful)

    by hackstraw ( 262471 ) * on Saturday March 13, 2004 @09:05PM (#8556365)
    The good news: I think we're going to see '5000+' processors before the end of the year now.

    The bad news: They will run like 4 GHz models.


    A 4GHz Itanium, Pentium M, Alpha, UltraSPARC, or any other of the lower clock speed processors would rate well beyond a 5000+ Pentium. The article said that the Pentium M, which is a great processor, is having trouble in the marketplace because people are used to the Hz rating. This will become more of an issue with multiprocessor systems and multicore processors, or even with technologies like hyperthreading.

    This has been done for years with cars. Horsepower measurements are displayed in car ads all the time. Of course there are many other performance measures, like 0-60 times, torque, braking, etc. But those are usually only reported in enthusiast magazines (read: car geek stuff, like we are computer geeks).

    I think this is going to be welcomed by average consumers, but us geeks are still going to read Tom's Hardware and other media that are full of benchmarks and other performance measures.
  • by Loki_1929 ( 550940 ) on Saturday March 13, 2004 @09:16PM (#8556530) Journal
    "Even the earliest Pentium 4s were able to greatly out-clock the pentium III's when they first came out. "

    Yeah, you can do that when you do a complete core overhaul. Going from Northwood to Prescott is a fairly large change, but nowhere near as big a change as going from the PIII to the P4.

    "But now we have the 31 stage Prescott and the about same clock rate.
    If Intel thought it could keep bumping the clock rate up, they wouldn't move to something like AMD's performance rating. Yet here we are.
    Something has changed."


    What has changed is that Intel is having problems with the 90nm process, Prescott produces massive amounts of heat, the LGA 775 socket isn't going to solve those problems enough to ramp Prescott beyond 4GHz, if even that high, and the changes being made with the introduction of IA32-64 (aka AMD64) will give processors a pretty decent bump in performance.

    Intel knows now that clock frequency ramps have limits. Sure, Bob Colwell told them as much when the P4 was being designed, but now they're actually slamming into walls of fire (heat). Right this second, they're not in such a serious situation that changing to performance ratings is necessary, but they will be fairly soon. Thus, if they do it now, it looks like a new initiative to give Intel an advantage in the marketplace. If they wait until their backs are against the wall, it looks like Intel is struggling to keep up and has lost its edge in the marketplace.

    You see now why this is being done? It's just management finally starting to get a little smarter.

  • Re:Payback (Score:1, Insightful)

    by Anonymous Coward on Saturday March 13, 2004 @09:34PM (#8556813)
    Nice. Make up a whole bunch of crap but praise Apple and get mod points. Apple is really starting to take over the server market. Pretty soon they might even rate a full percentage point market share. I don't know where you've been but if anybody is going to knock Microsoft from the top of the server heap it is going to be Linux. Larger user community, completely open, lower license fees, compatibility with existing hardware and zero lock-in.
  • Re:Well... (Score:4, Insightful)

    by Loki_1929 ( 550940 ) on Saturday March 13, 2004 @09:36PM (#8556874) Journal
    "yes. whats wrong with that? those are all very important pieces of info. i would fully expect all that in well written liturature about a processor."

    It is listed, in whitepapers. We're talking about marketing to the masses here. Tell me, do you think you can walk into a coffee shop and talk to the gal behind the counter about speculative execution for more than 10 seconds without getting her confused and bored? There's a fraction of a small percentage of people in this world who are capable of understanding all the parts of processor design. By confusing average folk with technical data, you're lying to them just as much as you are by using performance ratings. I'll bet I could go into detail about the original Pentium's design, explain all the things that were done to up the performance in really simple terms, and get a bunch of people excited about buying it so long as I never tell them its name.

    Think about that for a moment - if I can sell a Pentium 200MHz system to a room full of people who could buy a Pentium 4 for the same price simply by talking up the complicated design specifics, am I any more honest than Intel is with its MHz listings, or AMD with its performance ratings?

  • by speeDDemon (nw) ( 643987 ) on Saturday March 13, 2004 @09:42PM (#8557028) Homepage
    How about this: I don't believe Intel's FUD because I build both Intel and AMD systems. I've benchmarked two similar systems, and an AMD 2600+ does indeed outperform a P4 2.6GHz chip. It also costs nearly HALF the price here in AU.
  • by thogard ( 43403 ) on Saturday March 13, 2004 @09:44PM (#8557099) Homepage
    When you compare a single architecture (meaning one kind of one brand of processor) mhz give a VERY good idea of how performance will scale.

    So why was my 25MHz DX Pentium faster than the 33MHz ones that came out after it, as well as the 66, the 75, and most of the 100s?
    Maybe it was because the 33+ machines all had an extra wait state to hit memory that mine didn't have? Some of those computers ran some benchmarks slightly faster, but Windows apps were slower.
  • Re:Well then... (Score:3, Insightful)

    by Anonymous Coward on Saturday March 13, 2004 @09:48PM (#8557214)
    I can almost guarantee what this new naming move is about: a pre-announcement of a desktop version of the Centrino (i.e. Pentium M) CPU. For those who haven't been following, the Pentium M (a completely different chip from the P4-M) is based on the P3 core, but with SSE2, a big ol' cache, and some advanced heat-management thingamajigs(TM). It runs clock-for-clock much faster than the P4 (as P3s always have).. the 1.7GHz version (fastest currently available) runs comparably to a 2.4GHz P4 (or faster, depending on who's doing the benchmark) yet runs dramatically cooler... it is an all-around superior chip to the P4 (and Athlon XP), but Intel has been stubbornly refusing to release a desktop version of it, because in order to do so they would have to admit that AMD has been right all along about the MHz myth.

    With this announcement, it looks like they're finally giving in and doing the sensible thing.
  • by Elladan ( 17598 ) on Saturday March 13, 2004 @09:50PM (#8557263)
    2) The anti-Mhz myth. That Mhz don't mean anything. This is just FALSE. When you compare a single architecture (meaning one kind of one brand of processor) mhz give a VERY good idea of how performance will scale. If something gets X on a processor at 500mhz, you can with confidence say it will get nearly 2*X with the same kind of processor at 1000mhz. That doesn't mean it's the be-all, end-all benchmark, just a useful (and truthful) way of evaluating chip performance within a line.

    Except, of course, that this isn't true either. True, mhz means something, but it's not even a good indicator within a processor line.

    A 1000mhz processor will only be twice as fast as a 500mhz processor if the ram and the peripherals are ALSO twice as fast. Otherwise, it depends entirely on the workload whether the processor is faster. If your computer is basically just loading data from disk, copying it from one place to another with a simple transform, and sending it to the network or something similar, the 1000mhz processor may not be faster at all with the same ram! In fact, it could even be slower, if to get the right multiplier for the CPU, the front side bus speed was actually reduced (that does happen quite often) and hence the ram runs slower!

    On the other hand, if your computer simply runs a tiny program (a few k) that fits entirely in the L1 cache, and almost never talks to main ram or the peripherals, then it may in fact run twice as fast when you double the clock speed.

    In reality, real programs are somewhere in between, so to figure out whether it's worth it to get a faster processor or, e.g., buy more RAM instead, or faster RAM, or a 15krpm SCSI disk, or whatnot, you have to figure out what your computer is going to be doing and estimate accordingly. Or even better, test the actual machine out to see how fast it is before you buy a lot of them.
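Elladan's caveat reduces to a toy Amdahl's-law estimate: only the CPU-bound share of runtime speeds up with the clock. A minimal sketch, with the workload fractions invented for illustration:

```python
# Toy model: only the CPU-bound fraction of a workload speeds up with
# the clock; time spent waiting on RAM, disk, or peripherals does not.

def effective_speedup(cpu_fraction, clock_ratio):
    """Amdahl-style estimate of overall speedup.

    cpu_fraction: share of runtime that scales with clock speed (0..1)
    clock_ratio:  new clock / old clock, e.g. 2.0 for 500MHz -> 1000MHz
    """
    new_time = cpu_fraction / clock_ratio + (1.0 - cpu_fraction)
    return 1.0 / new_time

# A tiny loop that fits in L1 cache scales almost perfectly:
print(effective_speedup(1.0, 2.0))   # 2.0
# A job that spends half its time waiting on RAM and disk does not:
print(effective_speedup(0.5, 2.0))   # ~1.33
```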

  • by Bastian ( 66383 ) on Saturday March 13, 2004 @09:53PM (#8557314)
    If something gets X on a processor at 500mhz, you can with confidence say it will get nearly 2*X with the same kind of processor at 1000mhz.

    This is true if your benchmark (or something) is able to effectively isolate the CPU. Otherwise, you have to start worrying about bus latency, page faults, and the speed of everything else in your computer.

    There's also a myth that CPU performance equates to the performance of an entire computer. This one has folks going out and buying all-new computers when what they really needed to do was buy more RAM or uninstall RealPlayer, Gator, that weather program, etc.

    This myth is definitely supported by Intel, which likes to run ads that imply that buying a Pentium MCCXVI processor will help you get better audio and video streams on that computer that's still dialing into AOL with a 28.8 modem.
  • by Sycraft-fu ( 314770 ) on Saturday March 13, 2004 @10:25PM (#8557941)
    Not system speed. Believe it or not there are plenty of CPU intensive applications that don't hit much of the rest of the system. Also, there are plenty of cases (like the case I'm in now) where the CPU is the limiting factor. My disks are plenty fast for what I do, almost nothing slams my memory bus, all my other system and IO busses aren't even close to peaked. Any time I slam my system it's either the graphics card or the CPU that is the limiting factor. For the work slamming the CPU, I will get basically 150% performance by increasing CPU speed to 150%.

    Ya, it's not the be-all, end-all number. I noted that. The problem is that there is the thinking that somehow a BSified PR number will somehow be better. Errr, no. I'd prefer that all my components be rated in real, factual, terms. I can then use those to make SOME kind of meaningful comparison. I want to buy a 7200rpm harddrive, not a PR 12000+ harddrive. I want to buy 1024MB of RAM, not PR 3500+ of RAM.

    Going to BS PR numbers improves NOTHING. You are still faced with the situation of picking which part you need to improve, only now, it's difficult to make any kind of sensible comparison.
  • well... (Score:5, Insightful)

    by Cynikal ( 513328 ) on Saturday March 13, 2004 @10:53PM (#8558496) Homepage
    I'll agree with everyone here about MHz not really meaning a whole lot by itself.

    Whenever I had to consult people about their PC purchases, I found the best way they understood it was basically through the three parts of the CPU: MHz, bus speed, and cache memory.

    Your CPU is a vehicle. The MHz is the speed the vehicle can carry stuff from one place to another (this is what you are buying this vehicle to do: moving stuff), the bus speed is how fast you can load your stuff onto your vehicle, and the cache memory is the amount of stuff the vehicle can carry.

    Then I go on to explain what's the point in having vehicle A that can go 1.5 times faster than vehicle B, if vehicle B can carry twice as much stuff each trip. In the end, vehicle B is the one that gets more done. Until you get into things like: it doesn't matter how fast vehicle A can go, if vehicle B can be loaded, on its way, and back in the same time that A is still being loaded (bus speed).

    It's probably not the most refined explanation, but it's the way I've talked many people into getting Athlons instead of Celerons, and in the end getting a better computer (dunno about the States, but up here I can get an XP2200 for about the same price as a Celeron 2GHz -give or take $5- and we're talking a HUGE difference in performance).
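The vehicle analogy boils down to throughput being speed times capacity. A quick sketch, with all numbers invented for illustration:

```python
# Work moved per hour is trips-per-hour times load-per-trip -- roughly
# analogous to clock rate times work-per-cycle, not raw speed alone.

def throughput(trips_per_hour, boxes_per_trip):
    return trips_per_hour * boxes_per_trip

vehicle_a = throughput(150, 10)  # faster vehicle, carries less per trip
vehicle_b = throughput(100, 20)  # slower vehicle, carries twice as much
print(vehicle_a, vehicle_b)      # 1500 2000 -> B gets more done
```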
  • Re:Payback (Score:1, Insightful)

    by hak1du ( 761835 ) on Saturday March 13, 2004 @10:54PM (#8558523) Journal
    What really matters is real power [apple.com].

    Apple isn't developing the PowerPC, IBM is. So, if anybody matters in the non-x86 CPU game, it's IBM. Apple is basically just an upscale systems integrator.

    This will translate into more sales as Apple is now finding out with significant interest in the G5 Xserve from a large number of corporations and government agencies.

    Maybe the interest appears large by Apple standards, but in the market overall, Apple's Xserve and G5-based machines are niche machines and they don't really offer compelling performance advantages--high-end Opteron and P4 systems have similar SPECmarks at similar prices. And OS X is severely handicapped in the market relative to Linux and Windows--OS X just isn't used very widely as a server operating system.

    So, if Intel can get around some of the performance bottlenecks and deal with the loss of backwards compatibility, they may be able to get back on track.

    Intel did miscalculate with Itanium. But the threat to Intel is AMD, not PPC.
  • Re:Pentium M (Score:4, Insightful)

    by Decimal ( 154606 ) on Saturday March 13, 2004 @11:45PM (#8559156) Homepage Journal
    A friend recently told me he had bought a new 3Ghz Athlon XP, he was ready to take it back to the shop after I explained what the 3000 meant!

    I hope you also explained that he got the same, if not more, power as an Intel P4 3GHz, for a cheaper price. It would be silly to educate people about what AMD ratings are not, without explaining what they really are.
  • Re:Problem.. (Score:4, Insightful)

    by randomdef ( 663725 ) on Saturday March 13, 2004 @11:54PM (#8559188)
    So what? Like I have any idea, from the name alone, which is better: an ATI Radeon 9800 or an Nvidia 5900?
  • Re:Payback (Score:3, Insightful)

    by Graff ( 532189 ) on Sunday March 14, 2004 @12:53AM (#8559436)
    If you gave me a big shovel and gave 30 people spoons that equalled the size of my shovel, who's to say we wouldn't have the job done in the same amount of time? We'd just have lots of little spoonfuls instead of a few big shovel fulls.

    Right, and for sand the teaspoons might be more efficient because less sand slips off them, but for dirt the shovel might be better.

    That's the whole point, it's not how quickly the processor cycles or even how much the processor does in one instruction. Rather, it's how well the processor works for some common tasks. In order to totally judge several processors you first have to test them in several different ways and then you can say, "In general, processor X is good for modeling climate because it handles floating points well and processor Y is good for image processing because it handles integers well."

    This means that often there will be no one clear winner in a processor comparison and it may just come down to what you need the chip for and how well you understand how to use it. Right now, however, you have Intel pushing the idea that a high clock-rate processor is all that matters. This is misleading because most of the high-clockrate processors achieve this kind of performance by taking the risk of branch mispredictions and also by taking multiple cycles per instruction. These sorts of things have an extremely negative effect on performance, so much of the clock speed is wasted.
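The "wasted clock" point can be put in rough numbers: useful work per second is clock times achieved IPC, and a deeper pipeline pays a bigger misprediction penalty. A back-of-the-envelope sketch; every figure below is invented for illustration, not a real chip spec:

```python
# Rough model: effective throughput = clock * achieved IPC, where each
# mispredicted branch stalls the pipeline for `penalty` cycles.

def work_rate(clock_ghz, base_ipc, branch_freq, mispredict_rate, penalty):
    stall_per_insn = branch_freq * mispredict_rate * penalty
    achieved_ipc = base_ipc / (1.0 + base_ipc * stall_per_insn)
    return clock_ghz * achieved_ipc  # billions of instructions per second

# Invented numbers: 3.0GHz deep-pipeline chip vs 2.0GHz short-pipeline chip.
deep = work_rate(3.0, base_ipc=2.0, branch_freq=0.2, mispredict_rate=0.1, penalty=30)
short = work_rate(2.0, base_ipc=2.0, branch_freq=0.2, mispredict_rate=0.1, penalty=10)
print(round(deep, 2), round(short, 2))  # the lower-clocked chip can win
```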
  • Re:Scalability (Score:3, Insightful)

    by drew ( 2081 ) on Sunday March 14, 2004 @01:09AM (#8559489) Homepage
    The transition to PPC that the parent post is talking about has nothing to do with the G5, or G-anything, and it happened about 10 years ago or so. He's referring to Apple's switch from the Motorola 68K CPUs to the IBM/Motorola PowerPC chips, which happened IIRC in the early 90's. At that point, having more than one processor in a desktop or even a small server machine was little more than a pipe dream, and scalability in the number of processors meant nothing to ~95% of the computing world.
  • by $calar ( 590356 ) on Sunday March 14, 2004 @01:11AM (#8559497) Journal
    . . . don't trust benchmarks. This naming scheme is just going to create yet another benchmark which will probably be biased by those marketing it. Again, stick to Tom's Hardware and don't even look at what they call it.
  • Re:The problem is (Score:3, Insightful)

    by kryptkpr ( 180196 ) on Sunday March 14, 2004 @01:34AM (#8559580) Homepage
    Consumers will be dumb about ratings; this is true of ANY industry (horsepower in autos, for example). That doesn't mean that companies should just start making shit up.

    Should they? No.

    Will they? Inevitably, yes. It sells more product.

    Horsepower in cars is one example, but I think a better one is home stereo systems. Things have been getting better lately because the industry has started to regulate itself, but it's still not uncommon to see 2000 WATTS in huge letters on a boombox that may be able to pump out 50. The worst example of this I've seen is a pair of $15 computer speakers labelled 1000W. They just take the largest voltage they can pump through the speakers and the largest current the speakers can handle, multiply them together, and write this number on the box. Never mind the fact that the max voltage and max current either a) can't actually happen at the same time (as in the 1000W case) or b) can only be sustained for milli- or micro-seconds in a laboratory environment, while playing a perfect sine wave.

    But just as these stereo systems have the bullshit P.M.P.O. ratings, there is always, somewhere on the box, a true RMS value as well. Likewise, even though an AMD processor is labelled 2400+, it still says that it's 2.0GHz @ 266 DDR. Engine manuals state not only horsepower, but torque, maximum RPM, etc. This is for those of us in the know, who use these real, informative values to decide what to buy.

    As to your example, yes, the P4 8000 -does- mean something. It means the CPU is running at 4GHz (8000/2). The point is that these bullshit P.R. numbers will always translate to, or be accompanied by, real values... and if they're not, vote with your wallet, and don't buy from that manufacturer.
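The speaker-wattage trick described above is easy to reproduce: multiply the largest survivable voltage and current, even though the two never occur together, and compare with honest RMS output. All values below are invented for illustration:

```python
# PMPO-style marketing number: max voltage times max current, even
# though the hardware can never sustain both simultaneously.
v_max = 100.0   # absolute peak voltage the speaker survives (invented)
i_max = 10.0    # absolute peak current it survives (invented)
pmpo_watts = v_max * i_max    # the "1000W" printed on the box

# Honest RMS power for a sustained sine wave: peak values / sqrt(2).
v_rms = 20.0 / 2 ** 0.5       # realistic sustained 20V peak (invented)
i_rms = 2.0 / 2 ** 0.5        # realistic sustained 2A peak (invented)
rms_watts = v_rms * i_rms     # about 20W of real output
print(pmpo_watts, rms_watts)
```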
  • Re:Payback (Score:2, Insightful)

    by Fnkmaster ( 89084 ) on Sunday March 14, 2004 @02:06AM (#8559684)
    No, slippery slope arguments are logical fallacies. This is an observation about numerics and marketing. If both Intel and AMD have decoupled their processor speed ratings from MHz ratings, there is essentially nothing to stop inflation of numbers by both parties when it suits their marketing needs.


    Unlike a "slippery slope" argument, I am not starting from the proposal of a small but reasonable compromise or exception to the rules and then concluding that all the rules might be thrown out next. I am starting with the proposal that "the rules" (in the context of our discussion) are being thrown out and simply observing the likely outcome when the motivations of the involved parties are taken into account.

  • Re:Problem.. (Score:5, Insightful)

    by llefler ( 184847 ) on Sunday March 14, 2004 @02:30AM (#8559763)
    Nope, even with Intel the MHz is irrelevant.

    A 2GHz Celeron performs as fast as a 2GHz P4, right?

    I think this train left the station a long time ago.
  • by hak1du ( 761835 ) on Sunday March 14, 2004 @03:42AM (#8559976) Journal
    just think about where the computer industry would be without Apple to do the R&D?

    Let's look at some of your claims:


    3) GUI with the Lisa,

    Xerox PARC did the R&D for modern GUIs. The Lisa was Apple's first attempt to copy the Xerox PARC GUI work, and it failed. Then, Apple tried again with Macintosh, and by cutting a lot of corners made the system cheap enough to make it a success.

    7) First to develop the laser printer and postscript printing with the Laserwriter,

    The laser printer was developed at Xerox PARC. Postscript was developed at Adobe, based on a more complicated PDL developed at Xerox PARC. Apple just happened to create a successful product based on those technologies.

    8) First to develop the PDA with the Newton,

    The Psion predates the Apple Newton by nearly a decade, and I think it wasn't the first PDA either.

    9) First to develop the laptop form factor as we know it with the Powerbook,

    Not even close; you can find the history of the laptop [about.com] here. In fact, the idea goes back to Alan Kay's work on the Dynabook--late 1960s or early 1970s.

    11) First speech technology with the Apple ][,

    The Apple II was irrelevant to speech recognition research and development.

    14) First company to ship a consumer digital camera with the Quicktake,

    Not even close. [torun.pl]


    Your other examples either refer to system integration issues (e.g., supposed first use of a 3 1/2" floppy--developed by Sony), or are vague and meaningless from a technological point of view.

    For a few years, Apple had an R&D department that actually published a little and was fairly high quality. However, I can't think of any fundamental breakthroughs that came out of that, and they disappeared again in the mid-1990's.

    In addition to demonstrating your ignorance, I find your posting just offensive: I actually know some of the people who developed the technologies you talk about and I assure you that they didn't work at Apple when they did it. For their own financial gain, Apple has deliberately created the impression that they invented a lot of things that they didn't invent at all--and you fell for that dishonest marketing. Read up on the history of computing--you'll be surprised what you find.
  • by Anonymous Coward on Sunday March 14, 2004 @04:15AM (#8560058)
    Maybe it's time companies realize that designs forced by marketing are mostly bad in the long run even if they generate profit in the short term. Sound design == long product life (mostly).
  • by Hoser McMoose ( 202552 ) on Sunday March 14, 2004 @06:08AM (#8560335)

    The cost of software is a rather small part of the cost for a TPC score. Even on the "cheap" systems (the cheapest system on that top-10 lists costs $32,772, and most cost about $50,000), hard disks are the dominant cost factor.

    Perhaps an interesting flip-side to this argument is to look at the list of fastest systems overall [tpc.org].

    Linux fanboys will be happy to know that their OS powers the most powerful system in this test (albeit through the use of a cluster, while a known weakness of the TPC-C test is that clusters can produce somewhat unrealistically good results), while MS only appears in 3 of the top-10 systems. IBM's AIX is the most common operating system (4 systems) while Oracle is the most common database (also 4 entries). Linux fanboys may actually have good reason to show off this first-place result though, because with a system cost of $6.5M, HP almost certainly wasn't using the free OS for any sort of price advantage. Rather it may offer a performance advantage over Microsoft or even HP's own HP-UX.

  • by GuyFawkes ( 729054 ) on Sunday March 14, 2004 @07:12AM (#8560483) Homepage Journal
    ...there sure are one hell of a lot of people placing far far far too much weight on the supposed expertise of Tom's and similar sites....

    By and large these hardware sites know absolutely fuck all about anything except advertising revenue and click thru.

    I'm sat here typing this on a P4 / 2.6 GHz / 800 MHz fsb / a-bit box; prior to this it was an xp1900+ / a-bit box. Why the switch? Intel is FAR quieter as well as representing a big jump in performance... sure, I could have gotten damn similar performance from an overclocked xp2500+, at the expense of CPU core MTBF and at the expense of my fucking ears being assaulted by fans whining away.

    At the end of the day it makes no odds on the desktop; my cpu, like most of them, spends most of its life at 5% utilisation, and in the server only a fool would use a cpu with a lower standard of thermal management than intel.
    (I still miss my old cobalt raq2 that didn't even require a bloody CPU heatsink, much less heatsink and fan...)
  • by hak1du ( 761835 ) on Sunday March 14, 2004 @10:51AM (#8561063) Journal
    I have seen the early GUI development by PARC. MUCH more R&D was required to get that concept up and running for a machine that could serve as a "personal computer." Yes, the Lisa failed, but it was the first personal computer that had a GUI.

    The Xerox Star shipped in 1981, two years before the Lisa. It had a GUI, Ethernet, WYSIWYG editing, printed to laser printers, and was used by office workers.

    PARC "invented" the laser printer,

    Why do you put that in quotes? Unlike the stuff coming from Apple, the laser printer really was a ground breaking, new technology: a completely new approach for putting ink on paper under computer control.

    but it was Apple who heavily underwrote a new company by the name of Adobe and co-developed the laser printer for use with the personal computer.

    So, Apple financed product development based on technologies developed elsewhere.

    I'll give you that technically, but I used an early Psion in 1986 or so and it was not really a functional information manager. The Newton 120 that I owned a couple of years later was a true PDA that allowed for word processing, information management, communication for email and early Internet via modem and IR, and more.

    The Newton was basically a shrunk-down pen-based computer--nothing new there, only better product design. As for PDAs, PARCTAB was much closer to modern PDAs and predates the Newton.

    Laptop form factor!(not laptop) with palm rests in front of a full sized keyboard with trackball or (later) trackpad was the innovation there. All of the previous laptops I have owned have been awkward with keyboards up front with no place to rest your hands and no pointing device integral to the laptop.

    The Atari Stacy had an integrated pointing device in 1989, several years before the first Powerbook. The integral wrist rests on the Powerbook may have been a new design feature, but Apple itself has moved away from them and moved the keyboard forward again, with just enough room to accommodate the trackpad (which, incidentally, also was not invented by Apple).

    "The Apple II was irrelevant to speech recognition research and development" My point still stands, that the first speech synthesis was developed years before anybody else on the Apple ][.

    The Apple II was also irrelevant to speech synthesis. The history of electronic speech synthesis goes back to the 1930's. By the time Apple appeared on the scene as a company, people already had a sophisticated algorithmic understanding of how to process speech on computers. Apple made no ground-breaking contributions to speech synthesis, and they never shipped anything that was even close to state-of-the-art in either area.

    Consumer digital camera! is what I said. I remember the MavicaPro series and they were hideously expensive. The Quicktake was actually affordable by the consumer.

    Again, that's system integration. The underlying technologies (CCD, flash, DSP) were developed elsewhere and the components were produced elsewhere. Even the design came from Sony. All Apple did was to time things right and to cut enough corners to be able to ship a digital camera at a marginally acceptable price for a brief period.

    I [...] am grateful that Apple began shipping computers with CD-ROM drives in them for just this reason.

    CD-ROMs had been used as a software distribution medium by others. Contrary to what you may think, Microsoft and Apple weren't the first companies to ship bloatware--UNIX vendors had them beat by many years.

    Plug and play compatibility is something that is also a huge time saver.

    Too bad that Apple didn't invent it. NuBus came from MIT and was commercialized by TI before Apple picked it for the Macintosh II. Again, Apple's role was that of systems integrator.

    First to include built in networking is meaningless? There is this thing you are using called the Internet.........
  • by brucmack ( 572780 ) on Sunday March 14, 2004 @12:35PM (#8561649)
    It seems that almost everyone is writing as if Intel is adopting an AMD-like system, where they replace MHz with some number. This is not the case. The numbering system will be like model numbers, and the clock speed will still be there. This doesn't replace clock speed as a measurement.

    Instead, Intel's going to take something like "800 MHz FSB, 1MB L2 Cache" and make that a number. Of course the higher numbers will be those that should perform better, but that's always how it is with model numbers.

    In my opinion this can only be a good thing, because instead of having to know the difference between P4 A/B/C/E, there'll be a number that encapsulates the non-clock-speed-related statistics.

    In any case, these numbers are not intended to compare Intel chips to other manufacturers', rather to tell the different P4s running at 3.2 GHz apart (for example).
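    As a sketch of the idea (not Intel's actual scheme, which hadn't been published at this point -- the base value and weights below are entirely made up), a model number that folds the non-clock specs into one ordinal value might look like:

```python
# Hypothetical sketch: fold FSB, cache, and clock into a single model number,
# so that "higher number" roughly tracks "should perform better".
def model_number(family_base, fsb_mhz, l2_cache_kb, clock_ghz):
    """Return an ordinal model number from a chip's headline specs."""
    score = family_base
    score += fsb_mhz // 100        # faster bus -> higher number
    score += l2_cache_kb // 128    # bigger cache -> higher number
    score += int(clock_ghz * 10)   # clock still counts, but isn't the headline
    return score

# Two P4s at the same 3.2 GHz now get distinct numbers:
print(model_number(500, 800, 1024, 3.2))  # newer core: 548
print(model_number(500, 533, 512, 3.2))   # older core: 541
```

    The point of the sketch is only that the number is a model designation, not a benchmark: it distinguishes same-clock parts without claiming to compare across manufacturers.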
  • Re:Well... (Score:3, Insightful)

    by Loki_1929 ( 550940 ) on Monday March 15, 2004 @01:42AM (#8566026) Journal
    "Allow people to freely register to access any machine, exclusively, for 15 minutes at a time, by SSH. Allow those people to copy over their own actual software and get measurable performance on the only workload that matters, and base their product selection on that measurement."

    Great, I run Maya. Now, does my exclusive 15 minutes include the 5+ hours it's going to take to send the software to them? Also, will Intel indemnify me against the makers of Maya for any copyright infringement suits that come from my sending it to Intel in violation of the licensing? Also, do I get to custom-configure the memory, hard drive, video card, power supply, mainboard, etc in the computer to my exact specifications so as to get an accurate picture of the performance I'd see under my specific system configuration?

    It's a decent idea, but unworkable in the real world.
