Intel

The Pentium IV Dissected

An AC sent us this: "In this extremely well written and technical article, the author points out the various mistakes that Intel made with the production of the Pentium IV, the fact that Intel and other manufacturers have been misleading customers about the performance of the Pentium IV, and the amount of work that will be pushed onto software developers' backs to get a piece of software to run at a reasonable speed." The beginning section is readable by anybody; by the end you need to know a little more assembly language than is healthy for anyone, but it is excellent overall. For a Cliff's Notes version of the above, try this NYTimes article discussing the chip in non-technical terms. My guess is that most computer buyers will continue to compare only clock speeds, however.
  • The author is Darek Mihocka, "President and Founder, Emulators Inc." according to the article. Their main product is a Mac emulator for the PC. The corp shares his ego: "Our Macintosh and Atari emulators are simply the fastest on the planet. Period." Slashdot featured another of his rants [slashdot.org] earlier this year. That said, the reason SoftMac is fastest is because it's written in assembly (and even some machine code!). When it comes to code execution speed, he knows what he's talking about.

    As for the P4, read the article closely. He realizes Intel is going for a brute-force, high-clock chip (he compares it to RISC). He admits it performs faster for some tasks, just far less efficiently. He just thinks Intel should have concentrated on better design (like AMD) instead of getting the big marketing win: a new chip with a huge clock speed. What's the point? Don't spend the big money on the P4 now, since AMD has a better design and will scale better beyond 1.5GHz.

  • Also, if Intel really believed the P4 was its best chip, why are the colored guys on TV hyping the P3 like there's no tomorrow? (No, that's not a racist remark. If you've seen the ad, you know what I mean.)
    Blue Man Group! [blueman.com]

    They rule... very good people to represent a company like Intel....



  • I hate to lose what's left of my karma, but I need to point something out to the shortsighted Slashdot collective.

    When the 386 came out, people like you said "dos doesn't multitask anyway, and who needs 4mb of memory?"

    When the Pentium came out, people like you said "my 486 is perfectly fine, this new pentium thing is a waste"

    When the Pentium Pro came out, people like you said "what a waste of money, PPro is not too much faster than Pentium"

    When the Pentium II for consumers came out, people like you said, "My pentium 133 is fine, what's the difference anyway"

    The first generation of most chips is not a great price/performance combo. But the 2nd and 3rd generations of these chips will get progressively better and better, just as the Athlon has improved from the POS P2 knockoff that it originally was.

    AMD is the winner in the consumer market right now, after years and years of going nowhere. But the consumer PC market is relatively low-margin and fickle. Intel makes its cash in the mainstream business market. Few, if any, of the big-name business PC vendors even offer AMD-based PCs, since AMD cannot deliver a consistent supply of chips.

  • AMD is supposed to have an SMP chipset for the Socket A processors out pretty soon. Then it is up to the motherboard vendors to ship it. Once AMD has broken Intel's monopoly on dual-processor systems, it should force Intel to be more price-competitive with the PIII Xeon, for example. So why be so down on AMD? If it weren't for them, your PIII SMP box would have been a lot more expensive.

  • first, you're comparing a 1.5 ghz Pentium 4 with rambus ram against a 1.2 ghz athlon thunderbird with sdr sdram when most 1.2 ghz athlons would probably be paired with ddr sdram.

    Incorrect. There is no Athlon DDR motherboard released yet, but RDR (and SDR, obviously) motherboards are plentiful. We compare what's available, not vaporware.


    also, did you notice that the pentium 4 machine had a top of the line hard drive (ibm deskstar 75gxp) and video card (geforce2gts) whereas the amd machines used an older ibm hard drive and a diamond stealth 3d pci(WTF?!!?) on the ddr machine and a western digital hd + nvidia tnt2 m64 on the sdr machine?


    All of this is irrelevant for SPEC, which is a CPU only benchmark.

    or how about the fact that all the tests were done with an intel compiler????

    Well, where's AMD's compiler then? The benchmarks are compiled with the vendor's compiler of choice. What the results mean is that with the best available compiler, the P4 performs much better than the Athlon. With an average compiler, this might not be the case, but anybody who is the least bit performance-conscious is going to recompile everything.

    Then there's the system prices, I have no idea where you got these prices, but assuming all 3 systems use the same components except cpu+mb+ram, the prices would probably look like:

    The CPU prices are irrelevant; people buy systems, not CPU's. You can buy a P4 Gateway system for $2000. I have never seen a namebrand 1.2 GHz Athlon system for less than $1500 (though I haven't been shopping for them).

    so based on these figures, the p4 is OVERPRICED!

    Compared to Alpha (less than 10% more performance, at quadruple the price)?

  • by Anonymous Coward
    "Sure the Pentium 4 doesn't perform great on code not optimized for it. But neither did the 486, the Pentium, or the Pentium Pro." Yes, but each of these did run unoptimized code significantly faster than its predecessor. The P IV apparently isn't going to give you a significant increase until compilers are re-written for it, and applications re-compiled. If they ever do that. If optimizing for the P IV will slow down performance on the P III, then it doesn't make sense until more of your customers have the P IV than the P III. But why would anyone spend twice as much for a P IV when it won't speed up existing software?

    The one good thing about the P IV is that it sounds like it can be scaled to even higher clock speeds. If a 1.5GHz P IV runs like a 900MHz PIII, then a 3GHz P IV would run twice as fast. Not 3+ times as fast like you'd expect, but enough faster to make upgrading worthwhile. Once Intel hits that level, then they can start selling P IV's, and then there will be a reason for software vendors to write for it. Unless AMD scales up the Athlon equally fast...
  • Quote:

    Tricks such as "register renaming", "out of order execution", and "predication". In other words, if the programmer won't fix the code, the chip will do it for him.


    First he's complaining that Intel's CPUs place too much demand on compilers, now he's complaining that the chip is optimizing instructions. I guess Intel can't do ANYTHING right! Also, 1) AFAIK, there is no ISA that allows for register rename hints, 2) out-of-order execution is useful for doing hit-over-miss. Compilers can't predict cache misses.


    Quote:


    The PowerPC G3 and G4 chips use much the same tricks (after all, all these silicon engineers went to the same schools and read the same technical papers) which is why the G3 runs faster than a similarly clocked 603 or 604 chip


    G3s are NOT faster than 603es at the same clock speed. G3s use 603 based cores.

  • You don't have to know his credentials, because he includes specific examples of common code that executes slower on the P4, and then describes the architectural features that lead to it. He backs up pretty much every claim he makes, so you are free to draw your own conclusions of the veracity of his assertions.

    If by "specific examples" you mean "handwaving", then yes, I would agree.

    On many occasions, he fails to provide or cite any code or data to support his claims. For example, in "Why the AMD Athlon doesn't suck", he claims that "The AMD Athlon has no partial register stall" while not stating how he determined that this is the case, either from an empirical or engineering standpoint. That screams either "This part of my thesis is not important enough to support with facts or data" or "I don't /really/ know how the AMD chip gets around this, but Tom/AMD/whomever said it does."

    And in "The Benchmarks", he states that "Running other tests using various emulators, I found that in general the Pentium 4 runs emulators such as SoftMac 2000 SLOWER in most cases than the 650 MHz Pentium III and 600 MHz AMD Athlon." Which tests and emulators? (And why do we care about emulators since the great majority of end users don't?)

    Additionally, he advocates changing/adding/increasing execution units without addressing what that might do to the cost of the chip or any of the physical effects (heat, die size, form factor, power consumption).

    It is also rather annoying that he repeatedly states "CLOCK SPEED IS NOT EVERYTHING" while making comparisons such as "Pentium 4 fails to keep up with even the 600 MHz chips".

    (And he needs an editor really, really, badly.)
  • I think a lot of people need to take a chill pill. The guy's not saying Intel sucks, period. He's saying that the P4, in its present form, is not a good value for your money. That's it. That's all he's saying.

    He didn't say that the overall architecture is bad. He didn't say that the P4 will lead to bad designs in the future. He said that some of the choices for the present P4 configuration are bad and that people would be better served by spending their money elsewhere. If people buy Intel chips no matter what the actual price to value ratio is, then Intel has won and the consumer has lost.

    The author gives very good explanations of the limitations of the present incarnation of the P4. He also explains what he thinks needs to be fixed. With all those fixes, the P4, in a few years, will likely be a really good chip. The design isn't beyond repair, it's just flawed.

    I remember the 486SX clearly - and how my father was duped by the hype. The same thing's happening here. Also, if Intel really believed the P4 was its best chip, why are the colored guys on TV hyping the P3 like there's no tomorrow? (No, that's not a racist remark. If you've seen the ad, you know what I mean.)

    The bigger problem is that, even though you can get around the limitations of the P4 chip by writing a really smart compiler, the P3's and below will be around for years, so you won't necessarily be using the optimization settings in generic code. You'll likely see 'Word 2005 for the P4' and 'Word 2005 for the P3 and below', although there's nothing preventing them from being on the same DVD and the installer choosing the right version.

    If you can get past some of the strong language in the article (Intel engineers are stupid; boycott Intel; etc.) you can see that he's not anti-Intel per se. He's anti-Intel's marketing guys, who seem to be running the company at the moment. The decisions made in the present P4 incarnation have to be marketing's - no other explanation holds water. You can't design the next-generation chip and then deliberately cripple it. That's like having a son and then cutting off his foot to see how he gets along in the real world. I doubt engineers had much say in the present P4 configuration.

    The author provides pretty convincing proof that the best value for your money is an Athlon system, right now. I haven't seen anyone here able to refute that statement. It's the same conclusion that a couple of other people have reached. From all I've read over the past few months, I have to agree.

    --
  • by SoftwareJanitor ( 15983 ) on Friday December 29, 2000 @06:49AM (#1414049)
    Another technical flaw in the article is the assertion that the processor in the original IBM PC was the 8086. This is incorrect. I've actually got an original IBM PC sitting in my basement, and it definitely has an 8088 in it. For that matter, even the IBM XT had an 8088 in it. Some early IBM PC clones and other "MS-DOS but not quite compatible" machines like the Zenith Z100 used the 8086, but IBM didn't use the 8086 until later machines like the IBM PC Convertible (an early attempt at a laptop) or the PS/2 Model 25 and Model 30. The reason that IBM picked the 8088 is that its pinout was designed to be more closely compatible with the Z80, which was the CPU used in IBM's early engineering prototypes for the IBM PC. Those early designs were basically formulaic 8-bit CP/M machines.

    It is also worth noting that, to a certain extent, history is repeating itself. The Zilog Z80 was itself a clone of the Intel 8080. By the late 70's, Zilog, which was originally an upstart clone-chip vendor, had overtaken Intel by building a better and cheaper product. Intel's own follow-on to the 8080, the 8085, much like the P4 today, was largely a disappointment. Intel was forced to move to 16 bits with the 8086 (and the 'ginsu' 8088) in order to grab back the market it had lost to Zilog. Intel was successful mainly because it succeeded in selling the 8088 to IBM, which bailed it out. Zilog's 16-bit processor, the Z8000, was a failure because it was too ambitious and not at all compatible with their 8-bit designs, despite the fact that many people thought it was superior to Intel's 16-bit chips, which were largely just warmed-over 8-bit designs with larger registers.

    It remains to be seen how things will sort out now. For all intents and purposes, Intel's P3 and P4 look to be beaten technically and price/performance-wise by AMD. Intel appears to be largely betting on the IA64 to win back the market, but unlike the 8-bit to 16-bit transition, it is Intel who is betting on a totally new and mostly incompatible architecture for 64 bits, rather than AMD, who appears to be charting a much more conservative extension of the basic x86 architecture to 64 bits. If AMD gets software support for their 64-bit architecture before Intel does, which may happen because it is less of a jump, or AMD is able to push 64-bit processors into lower price-point boxes quicker, which also seems doable, Intel could be in trouble. One other big thing will be whether the AMD architecture runs existing 32-bit x86 code faster than the Intel IA64 processors do. Since many people will be largely dependent on legacy applications, if AMD can offer the promise of 64-bit applications in the future and better performance for existing 32-bit apps, then Intel will really be hurting.

  • OK, I will agree that there is evidence that the P4 has problems; however, I'm not convinced that these problems are long-term, because of the ability of the P4 to reach 2GHz and beyond. Be careful when a technical article shows way too much emotion. Let's face it, this guy is basically saying every other paragraph "Buy AMD, boycott Intel". I read the article but basically dismissed it as the rants of an emotionally overworked person or an AMD propaganda piece.
  • Okay, but Twizzlers in butter? Be reasonable... that could really mess a chip up.
  • My guess is that most computer buyers will continue to compare only clock speeds, however.

    It may seem obvious to some, but that's exactly the point. Who cares if it's shoddily produced and a poor performer? It's got two very important things going for it.

    1. It's got the fastest clock speeds out there.
    2. It's got the Intel (tm) brand name.

    The average computer user doesn't have a clue that it performs slower than a slower-clocked AMD chip. They see the higher number, and assume that means it's better. Who's AMD? They don't have all those nice commercials with Blue Man Group and all, and the nice logo. Selling chips isn't really about technology as much as it is about marketing. For example, Cyrix's PR266/PR300/etc - they didn't actually run at 300MHz but they said that they performed equal to around a 300MHz processor, so they sold them as "300"s, figuring consumers would assume that means 300MHz. That was all BS - Cyrix just couldn't keep their clock speeds rising at the same rate as Intel, and realized that they could take advantage of the average consumer's ignorance. Intel seems to be banking on that same ignorance today; I think this line sums it all up the best:

    What it boils down to is this - just like at Microsoft and just like at Apple, the marketing scumbags at Intel have prevailed and pushed sound engineering aside.

    We can't allow Intel to charge a premium for poorly performing chips, nor can we allow them to lie about their ability. The only solution is to boycott the P4 and all Intel products. Buy AMD, you'll be happy you did (I am).

  • Why can't the Intel folks actually come up with something that's a good, never-seen-before speed? Instead of 1.4 or 1.5 GHz, which anyone could get with a good overclock. They have millions, so why can't they come out with something to blow away everything else instead of this step-by-step stuff? I would like to see 2GHz or 2.5GHz.
  • Please. It's as much of a troll as the troll he's trolling.
  • That's good - he does appear to have good credentials. I suppose the unfortunate thing is that Intel knows full well that clock speed is what sells their chips. If they had a choice between a 1.2GHz chip and a 1.5GHz chip, with the former outperforming the latter, I bet they would choose the latter. However, we can't blame them too much for that - it's not their fault that most people make such superficial judgements regarding their chips.

    It will be interesting to see just how the next round of AMD vs. Intel pans out. Will the next AMD chip have similar clock speeds to the P4? If it does not then, regardless of performance, I fear for it, because everyone except knowledgeable Slashdot types buys on clock speed alone. I know I used to, before I became really interested in this computing lark!

    As another respondent says, I suppose his credentials don't matter so much when he gives evidence to back up his claims. But still, if you are not a real expert, it's good to know that he isn't just some quack, and quite useful to me! Thanks.

  • Quite right. The PIII was intended to remove your privacy, and the P4 is intended to ruin your media experiences.
  • The theory has NOT been proven bullshit. For example: the Alpha 21264 is faster than any CISC chip. Calling a CISC CPU minus its microcode a "RISC core" is just silly word play. CISC CPUs have ALWAYS decoded instructions into micro-ops; that's what CISC IS! The big downside to CISC ISAs isn't just the silicon for the microcode, it's also the complexity of the ISA (e.g., x86). x86 is MUCH more complicated than any RISC ISA.
  • the reason SoftMac is fastest is because it's written in assembly (and even some machine code!).

    Assembly and machine code are synonyms.

  • Thanks for actually reading the article...it appears many others did not really read it, but read what they wanted to or just wanted to nitpick.

    It appears the whole article can be boiled down to these points of interest:
    • P4 chips are about as fast as the current AMD chips are on existing software.
    • If you are going to spend big bucks for a P4 chip, don't expect to get a significant level of bang for your buck today.
    • The current AMD chipsets are good values; don't be afraid to buy them (I have an AMD 900 and love it).
    • Once compilers are optimized, the P4 chips might be worthwhile, IF they ever come out.
  • People keep saying that the P4 is a "different kind of beast" that is designed for pure clock speed. They argue that the P4 will attain such a high clock speed that the inefficient architecture will not matter. The problem is that it's not certain whether the P4's clock advantage will hold. According to Intel's road map, they should have a 2GHz P4 out by the end of 2001. According to AMD's, the Athlon should be at 1.7 GHz by then. Quote from this Sharky Extreme article [sharkyextreme.com]:

    "AMD is hoping that the re-worked core will bring the Athlon to at least 1.7GHz by the second half of 2001. By this time the 1.2GHz Athlon CPU on 266MHz front side bus will occupy the lowest rung on AMD's performance ladder. Once the Palomino runs out of headroom, the next horse will escape from the barn."

    If the road-maps of both companies can be followed, then Intel has a serious mess on its hands.
  • Putting on my cynical conspiracy hat:
    If Mihocka's analysis is correct, it could be interpreted as a ploy to perk up slow software and hardware sales. Right now we're in a market where people (consumers) are mostly satisfied with the performance of the hardware and software they have. They've gotten off the upgrade treadmill because they've found that for 90% of what they do, any PC, even an older Pentium, will perform fine provided the user has sufficient memory and video power.

    Current software runs poorly on the P4 because the design is so different from the earlier Pentium family that code optimized for those chips needs to be completely recompiled or re-written for the P4. Microsoft has the compiler, it can update the compiler, rebuild all its apps (and call them Office 2004) and tell users that they should buy a P4 and new software to have the fastest performance available today. MSFT and Intel both make boatloads more money selling stuff to people who would otherwise be happy to stick with what they have.

    Basically, if someone buys a P4 and finds out that it doesn't perform well with existing software, they'll be enticed to buy upgrades from MSFT. Someone buying the latest MSFT software will be enticed to buy a P4 to get the most performance out of the software.

  • First of all, the following sentence is not true at all: "Few, if any, of the big-name business PC vendors even offer AMD-based PCs, since AMD cannot deliver a consistent supply of chips." AMD has had a better record of supplying chips than Intel has in the last year, another spot where Intel is messing up. Second, I'm not saying that it's not a good price/performance chip (which it isn't anyway... now I said it); I'm saying that Intel messed up on its design: they designed it solely to make money and not to provide a good-performing CPU, and they have been messing up more and more lately. I work in the computer field and I know what's going on, especially with CPUs nowadays. I'm not just another lamer saying that it's too much cash, or who doesn't know the difference between a K6-2, Pentium 2, and Pentium 3 or something. I've been around since my first computer, an 80286 with a 32MB hard drive.
  • Incorrect. There is no Athlon DDR motherboard released yet, but RDR (and SDR, obviously) motherboards are plentiful. We compare what's available, not vaporware.

    RDRAM motherboards for the P4 are not "plentiful." The only ones that exist are the Intel boards and the Asus; two motherboards do not count as being plentiful.
    And DDR motherboards are not "vaporware." Vaporware products are products that do not exist. DDR systems are available from places like Micron.

    Well, where's AMD's compiler then? The benchmarks are compiled with the vendor's compiler of choice. WHat the results mean is that with the best available compiler, the P4 performs much better than Athlon. With the average compile, this might not be the case, but anybody who is the least bit performance conscious is going to recompile everything.

    The fact that the benchmarks were done with an intel compiler shows that the results are biased toward one vendor. What if the test was done with gcc? how would the results turn out then? And as for recompiling everything to get the most performance, how are we going to get the source code to closed-source programs?

    The CPU prices are irrelevant; people buy systems, not CPU's. You can buy a P4 Gateway system for $2000. I have never seen a namebrand 1.2 GHz Athlon system for less than $1500 (though I haven't been shopping for them).

    Of course the CPU prices are relevant: assuming 2 systems have the same monitor/case/video card/hard drive, everything else comes down to CPU+RAM+MB prices. If a P4 system can be built for $2000, then the same system can be built for a little more than $1000 with an Athlon/SDR RAM. Places like Gateway just happen to be selling their Athlon 1.2 GHz systems for more than they're worth to make more profit, whereas they probably are barely making a profit on that 1.4 GHz P4 for $2000.


    Zetetic
    Seeking; proceeding by inquiry.

    Elench
    A specious but fallacious argument; a sophism.
  • by kinnunen ( 197981 ) on Friday December 29, 2000 @07:29AM (#1414064)
    Let's say Pentium 4 is the biggest blunder Intel made.

    That is exactly what people said about the Pentium Pro when it came out and ran 16-bit code slower than the Pentium. And look what happened. The P6 architecture proved to be extremely scalable, extensible, and, yes, profitable. At this early point, I see no reason to assume that the Pentium IV can't repeat this.

    --

  • At level >= 3 I read 5 comments. The first four are of the "isn't this just like Slashdot to spotlight an article favoring the underdog" ilk. This sort of teenage self-consciousness, "Oh, my gawd, I'm being too trendy and I know it and I can be cooler than that by displaying my self-awareness," approach -- why is it earning so many moderation points in so many of the topics in the last few months?

    The fifth highly-moderated comment is substantial: that what's bad engineering design for this point in time may actually provide a better platform on which to build high clock speed chips a year or two out. The commentor doesn't show why AMD's currently superior chips can't gain similar speed -- but at least begins a proper challenge to the paper under discussion's very credible technical analysis of why the Intel chips currently look bad.

    Historically, no one stays at the top forever; some 'underdog' always wins. Most underdogs lose, badly. But anyone who's claimed "Intel/the Roman Emperor/the British Empire/IBM/the Nazis/CBS/Rock n Roll is just too big, smart and powerful to ever yield top position" through what, afterwards, will appear sheer idiocy has been wrong. We're not only surrounded by idiocy, it infiltrates us as individuals.

    But could we tone it down in our moderation, where it's now become typical /. crap to award points to whoever says "This is typical /. crap" in the most typical, /. crap way?

    For example, Cyrix's PR266/PR300/etc - they didn't actually run at 300MHz but they said that they performed equal to around a 300MHz processor, so they sold them as "300"s, figuring consumers would assume that means 300MHz. That was all BS - Cyrix just couldn't keep their clock speeds rising at the same rate as Intel, and realized that they could take advantage of the average consumer's ignorance.

    Amen. I was screwed by Cyrix back with my P150. It was *much* slower than a Pentium 150. When I finally upgraded, I got an Athlon, and I couldn't be happier.


    -----
  • I am very close to the chipmaking operation of Intel and there are several things I've noticed about their process.

    First of all, there is no central point of authority. The chip developers do not have a single way to add comments or say that a certain part is bad.

    Secondly, the actual fab plant where the chips are made in Albuquerque, New Mexico is continually being constructed, and it is __incredibly__ dusty around there. That is probably one of the contributing factors to the incredible dirtiness of the plant.

    Intel is a shoddy operation as far as I've seen, and I suggest staying away until they have a bit better recent track record.

    HELO #kuro5hin
    ------------
  • He's not taking the Intel engineers to task. He's taking the Intel marketing people to task. What, you think the Intel engineers didn't want a larger L1 cache, more execution units, etc.? Of course they did. But more silicon=higher costs, so you can bet that it's the marketing guys who lopped off all that extra silicon.

    I don't think it was so much a matter of cost as a matter of time to market. More silicon==lower yields==longer development time to get a manufacturable process. Intel could have included many of the axed features in the design, but it would have resulted in an extra 6 months to a year before the product could be released. Meanwhile, the Athlon would have been wiping the floor with the Pentium III and gaining market share like crazy. Intel couldn't let AMD have both the performance crown and the MHz crown.

    The current Pentium IV is more of a stop-gap measure; just watch, in a year or so Intel will release a revised version with a larger cache and some fixes for the other problems mentioned in the article.

    P.S. Will the next Pentium be a P5 or a P8 (or perhaps a Sexium)?
  • by Lover's Arrival, The ( 267435 ) on Thursday December 28, 2000 @10:50PM (#1414069) Homepage
    First, I'd be extremely interested to see what this guy's credentials are - it's interesting to see him take the entire Intel CPU design team to task over this.

    Secondly, I thought the entire point of the Pentium IV is that it is focused on different areas than the PIII and others. Specifically, it is designed for a media-rich environment, and was designed with the future in mind. I would guess (bear in mind, I don't have any credentials) that we won't see the best of the PIV until a year or two down the line, when compilers are properly optimised for it and people start programming with its architecture in mind. Until then, I fear we are making unfair comparisons. Just my guess!

  • "We have heard rants from this guy before, and although he's a little long winded, his product works as advertised" I'm sorry.. I missed his ad that said that Sound doesn't work, Printing doesn't work (though he offers a package with PowerPrint Printer drivers), he doesn't support more than 256 colors, and you will never be able to use his product with the Internet or on a Network. And it's not as if these are things one can't have in the Emulation field. Most of the competitors have one or more (in Basilisk II's case, ALL) of these missing features. His ads are misleading and advertise the opposite. His coding skills are so shoddy that the program routinely forgets the user's registration information forcing you to reinput it any time you change a parameter in the program. AND, his method of registering the product is so silly it's not funny.. Rather than letting you input your name, company and serial number like 99.9% of the products out there.. He sends you a long key code (which encodes your name and serial number) which you must type into the filename field of Import Configuration in the File Menu. It's real hokey.. I have no doubt that Darek knows a lot more about programming and Intel CPU's than I. But I would not use his companies products as a means of establishing his credentials. People in the know already have him branded as a crackpot.. Regards, Al Hartman (Macintosh Emulation List Host) http://www.topica.com/lists/MacEmuList Enlightenment means taking full responsibility for your life. - William Blake
  • Let's say Pentium 4 is the biggest blunder Intel made. What would happen next? I for one would think it would be the final nail in the coffin for 32-bit chips from Intel. And yes, Intel won't die :) They are far too huge to die. But their 32-bit plan to extend the life of the Pentium XX chip would have ended. And a very welcome end too. Once that happens, I would feel that they would focus more on their 64-bit architecture (IA64). The 64-bit architecture is a plan that would bear fruit in a significantly longer time than Pentium IV would. But it would definitely be better for consumers, competitors (AMD) and the likes. Intel has the power to eradicate 32-bitness from the desktop and replace it with a pure 64-bit machine. Let's hope the Pentium IV really dies :)
  • It's been shown over and over again that forcing coders to do things a certain way doesn't work, because we're lazy.

    Faa -- The whole game of the mainstream computing market is trying to introduce something new without breaking back-compatibility. It's not the programmers that are lazy -- it's the consumers, who are still out there holding onto 8086 and 80286 and i386 software with white knuckles. (In fact, one nice thing about this /. discussion is the lack of people ragging on the x86 ISA and instead advocating MIPS or Alpha or something.) Recompiling with VC7 or something P-IV optimised *is* the lazy solution.

    Intel *has* a solution for poor speed on 'legacy' code -- it's called cranking the clock speed up to 2GHz, efficiency be damned. This is faster on legacy code than any P6 that Intel could possibly make, BTW.

    For the 90% of people for whom that isn't a good enough solution, they can recompile. The other 10% is either out of luck or a Quake player that just has a bug up his ass that his shiny new 1.5GHz chip isn't running at maximum efficiency.

    The author of the article makes his technical points, but the guy is a crank. See his previous Slashdot appearance complaining about Apple and other Mac software vendors dropping '040 support, for example (thus making his handcrafted emulators useless, even though Mac users were more than happy to leave 68K behind).
  • I urge all computer consumers to BOYCOTT THE PENTIUM 4 and BOYCOTT ALL INTEL PRODUCTS until such time as Intel redesigns their chips to work as advertised... Sheesh. Looks like someone's got a chip on their shoulder. (Sorry.)
  • Seems to be an eternal issue with Intel: people claim their newest CPU is crap, buy it anyway, and after several months tell anybody they meet that it is the best ever. I personally believe the P4 might have some design flaws, but it also might be interesting in specific applications. I especially think that Intel has not (co-)developed XScale for nothing, and this is where the actual future might come from.
    --
  • the article, however biased, brings out an interesting issue - intel is trying to simultaneously improve performance of legacy code, and introduce features that will really boost properly recompiled new code. however, it's not clear that you can kill these two birds with one stone - it really takes a year or two until compilers get updated to support the architectural tweaks.

    which makes one wonder whether the real problem might be not the processors, but the compilers. by that i mean, the traditional c compiler doesn't really have enough information to know when to apply what optimization. consider traditional tight loop over a region of memory - c lets you implement it as a straightforward for loop on array references, an "increment pointer and dereference, until pointer reaches some value" loop, and so on.

    now the problem is, the compiler doesn't have any sort of an idea of what we're trying to do. had it known we're performing bitmap manipulation with multiplication over an integer buffer, it might be able to partially unroll the loop to fill in the pipeline, or automatically insert MMX code when it sees it's appropriate - same with the parallel-floating-point-op instruction sets. but since the c compiler doesn't know what data it's moving around, or what the user is really trying to do on a macro level, it doesn't know any better than to produce a pretty much straight translation of the c code.

    this way performance suffers because the compiler isn't smart enough to automatically support the subtle features of those new processors, and that in turn can be traced back to languages such as c not retaining enough information about what is being computed to support such automation.

    does this mean we may finally start seeing a move to higher-level languages, when low-level ones fail to compile as optimally as they ought to? i hope so. but considering how much c code there's still floating around out there, i won't hold my breath.
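    A minimal C sketch of the kind of loop described above (the function names and the scaling operation are invented purely for illustration). Both versions compute the same thing, and neither tells the compiler anything about alignment, overlap, or trip count, which is roughly the information a vectorizing code generator would need before it could safely unroll the loop or emit packed-integer (MMX/SSE-style) code:

        #include <stddef.h>

        /* Hypothetical bitmap operation: scale every pixel value by a factor.
         * Written two equivalent ways; the compiler sees only pointer arithmetic,
         * not "this is a large, aligned, non-overlapping image buffer". */

        /* array-index style */
        void scale_indexed(int *buf, size_t n, int factor)
        {
            for (size_t i = 0; i < n; i++)
                buf[i] *= factor;
        }

        /* pointer-increment style */
        void scale_pointer(int *buf, size_t n, int factor)
        {
            for (int *p = buf; p != buf + n; ++p)
                *p *= factor;
        }

    Given extra guarantees (restrict-qualified pointers, known alignment, a trip count that is a multiple of four), a smarter compiler could in principle unroll and vectorize either form; without them it tends to produce the straight scalar translation the poster describes.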

  • Well, I'm not necessarily going to boycott Intel and the Pentium 4 for the bugs it has (although that is good enough reason), but for the price tag Intel puts on anything.

    I'm close to being in the market for a new computer (couple more months of paying off bills and I should have the cash to start), and I was comparing prices. Even if the Pentium 3 wasn't such a dog, I wouldn't get it. There are much cheaper alternatives. That and I was absolutely amazed at how cheap hard drives are now. (Sue me, I haven't comparison shopped computers in three years.)

    I'm probably going to go with an AMD Athlon. And not because of this article. With the money I save on that, I can get a bigger hard drive, a scanner and some other neat toys.

    Just my 2 shekels.

    Kierthos
  • Yes, I agree totally, The Pentium 4 is a sign that Intel is now totally out of touch with reality. They have no idea of what the consumer wants anymore. How many more times do we have to hear tired cliches about the new processor "enhancing your Internet experience" or whatever. AMD have the real innovation, folks, and they are much cheaper too. I have supported Intel in the past, but my next processor is sure to be an AMD.
  • This benchtest with the Intel optimised version of the Flask was mentioned in the original article. Try and RTFA before posting!
    "Give the anarchist a cigarette"
  • hehe, yeah, but it's funny in a sarcastic sorta way :)
  • So you want the P4 to die and be replaced by an IA64? Well, I do hope that the 64-bit goodness will be worth the lost MHz.
    I know, MHz is not everything, but if you look closely at the performance difference of, let's say, an Athlon (highest possible clock) and an Alpha (highest clock as well), then you won't see that much of a difference except in the price.

    What I mean is that the P4 is going to be clocked over 2GHz very soon and will probably reach 4GHz in the next two years. At that time, IA64 will still be trying to reach 1GHz. IA64 is the biggest flop of Intel's so far, so I am not too eager to put my hands on it. Besides, the only reason to want 64 bits is to be able to access more than 2GB of address space per process. I don't really see this as being necessary right now...

  • The problem is that they will simply stop making the PIII and you will be forced to get a P4 (if you want an Intel chip). Seeing as how Dell, for example, only sells Intel machines, if your company has standardized on Dell computers you will be using the P4, like it or not. Since most businesses don't give a flying donut about streaming media performance (you don't need that for a spreadsheet, do you?) they will end up getting the short end of the stick. This is what happened when the PIII came out. If you wanted a PII from Dell you were out of luck.
    My hope is that AMD takes this opportunity to make a name for themselves and convince the Dells of the world to sell computers with AMD chips too. Unfortunately AMD seems to have a knack for blowing opportunities.
  • I wouldn't blame it on AMD. I'd blame it on the nature of an exponential/logarithmic yield curve. The PIV is a big mother of a chip right now, and would have been nearly unproducible had they included all the stuff they originally planned for.

    I suspect the PIV is a chip waiting for a process shrink, then you will see what it is all about. Remember the ugly, nasty Pentium 60MHz in 0.6 micron? Remember how much less heat a P66 in 0.5 put out? Remember that 0.5 micron chips later hit 90/100 MHz? That will happen here too. The Intel forecast of 2GHz by 4Q01 is probably too conservative, or they are playing coy.

    I think the article does have a couple of interesting points though -- Intel relied on the trace cache too much, and probably didn't notice how much of a bottleneck it would be for their execution units if they could only issue 3 instructions from the trace cache (I suspect that some design parameter got changed late in the game, because that is a pretty bad mistake that I would expect to be noticed). However, I also think the author didn't realize how hard it is to run any SRAM-type structure at 1.5GHz, especially to scale it up to bigger sizes (his entire rant about 8K versus 64K I found pretty humorous).

    The other humorous thing in the article is the comparison of cycle counts. The author spends lots of bold tags on making sure we know that MHz is not the only thing, but then looks at cycle counts. Well, bub, they are representing the same thing. For a given architecture one must consider the quotient of the clock cycle counts and the clock frequency to get a realistic measure of performance. You can implement a given chip with lots of short-fast pipeline stages or fewer, bigger stages. One approach is not "better" than the other -- it's dependent on process technology and what sample set of code you use to benchmark it on. Therefore, like most aspects of processor design, it's a tradeoff.

    One thing he does get very right: I certainly wouldn't buy a PIV right now. But I think that in 12 months everything will look OK. I also don't think that a "bad" (i.e. a little slow) first version of a chip is a reason to discount an entire architectural implementation.
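    To put rough numbers on the cycles-versus-clock quotient described above (the cycle counts and frequencies here are invented purely for illustration): what matters is cycles divided by clock frequency, so a deeper pipeline can take more cycles per operation and still match, or beat, a shorter pipeline on wall-clock time.

        #include <stdio.h>

        /* Time per operation = cycles taken / clock frequency.
         * Hypothetical figures: a long-pipeline chip needing more cycles
         * ties a short-pipeline chip if its clock is proportionally higher. */
        int main(void)
        {
            double deep_cycles = 6.0,  deep_hz  = 1.5e9;  /* long pipeline, high clock */
            double short_cycles = 4.0, short_hz = 1.0e9;  /* short pipeline, lower clock */

            printf("deep pipeline : %.2f ns\n", deep_cycles  / deep_hz  * 1e9);  /* 4.00 ns */
            printf("short pipeline: %.2f ns\n", short_cycles / short_hz * 1e9);  /* 4.00 ns */
            return 0;
        }

    Comparing the cycle counts alone (6 vs. 4) makes the deep pipeline look worse, even though both hypothetical designs deliver identical wall-clock time here; that is the trade-off being pointed at.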

  • The fact that the benchmarks were done with an intel compiler shows that the results are biased toward one vendor. What if the test was done with gcc? how would the results turn out then? And as for recompiling everything to get the most performance, how are we going to get the source code to closed-source programs?

    That's not the point. The point is, using the Intel compiler on a P4 is faster than using ANY compiler with the Athlon. Thus, if you wanted a system with the fastest possible performance, you would use the combination of the P4 processor and the Intel compiler. The P4 with GCC, or an Athlon, would be an inferior choice (for performance).


    Of course the CPU prices are relevant: assuming 2 systems have the same monitor/case/video card/hard drive, everything else comes down to CPU+RAM+MB prices. If a P4 system can be built for $2000, then the same system can be built for a little more than $1000 with an Athlon/SDR RAM. Places like Gateway just happen to be selling their Athlon 1.2 GHz systems for more than they're worth to make more profit, whereas they probably are barely making a profit on that 1.4 GHz P4 for $2000.


    Typically, only complete system prices are compared. It may be true that a P4 is double the price of an Athlon, but that's comparing the CPU itself. The CPU is a small part of the system cost, so a comparably equipped Athlon computer still costs well more than half as much as the P4 system. To compare CPU prices is to magnify their actual effect.

    Comparing CPU prices repeats Transmeta's fallacy (they claimed that a CPU with half the power consumption would extend the battery life of the system accordingly, when in fact the CPU is not even the main power hog in a system).
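    To make that fallacy concrete with invented numbers: if the CPU draws 8 W of a 25 W notebook power budget (display, disk, and chipset taking the rest), halving CPU power drops the total from 25 W to 21 W, roughly a 16% gain in battery life rather than the doubling the CPU-only figure suggests. Comparing bare CPU prices instead of system prices magnifies differences in exactly the same way.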

  • Yeah, there were statistics and such -- he just didn't apply any of them when it didn't suit his anti-Intel rant. For one good example: he faults the P4 for failing to scale well with core speed while running Prime95; yet the 600 and 900MHz Athlons scored essentially identically, so the same charge could be made against the Athlon. But he completely fails to even notice this...

    He put up a good facade, but in reality his article wasn't a decent analysis -- it did have its good points, but there was so much BS in there that it was hardly worth the effort.

    ---

  • So my guess is that they were planning to bring the P IV out in 2002, but the Athlon + inability to crank up the P III as far as they expected unexpectedly put them in second place as far as speed goes....

    ...So they rushed a half-finished design out to production...


    I agree, it seems that the PIV is really a half finished product that was rushed to market to prevent a massive loss of market share.

    Engineers and geeks know that will only make things worse, but marketing and many other management pukes were raised on "The Little Engine that Could." They think that if you just try hard enough the impossible will happen.

    I disagree. Sure, the PIV is not an Athlon killer in terms of performance, but there are plenty of clueless pointy-haired bosses and Joe six-packs out there that will never know that they shipped out a truckload of cash for only a marginal improvement in performance. After all most of their applications won't need that kind of performance anyway.

    Grabbing the MHz crown (if not the actual performance crown) is a stroke of marketing genius. It buys Intel a little time to come out with an improved PIV that really is an Athlon killer.
  • The P4 is intended to be a real fast desktop chip. The lack of SMP options and the funky motherboard is intentional to keep it there so that IA-64 can move into server space. (Remember when a 4-way Pentium Pro board was a 'commodity' item. No More.)

    As for AMD -- they are the one milking i386. Sledgehammer will be the most kick-ass 64-bit chip to ever run 16-bit code on Windows ME.
  • OH MY GHOD! Machine Code!? Wowie-gee! Praytell how is this faster than assembly? I've been an x86 assembly programmer for over 10 years and also program in several embedded processor varieties. I'd love to meet the guy who can properly optimize* P6 code better than a halfway decent compiler. I mean the guy must have a brain the size of ENIAC.

    Reference was made several times in the article about the POOR optimizations of the compiler (he kept mentioning Microsoft compilers). Seems Microsoft (according to the article) lags about 3 years behind in getting their compilers to optimize for the newest processors. So, I suppose, the answer is that the compilers are not "halfway decent".

  • And imaging processing in publishing and the music and video industries seem to be doing just fine.

    I'd guess you have never tried to push a video clip through a Sorenson codec.

  • a cpu that costs over $1000 in a $2000 system is not a small portion of the system price dammit.


    Zetetic
    Seeking; proceeding by inquiry.

    Elench
    A specious but fallacious argument; a sophism.
  • Assembly and machine code are synonyms.

    No they ain't.

    Modern assemblers (macro-assemblers) do memory allocation of variables for you, as well as subroutine calling. If you care about data alignment in some arcane subroutine, or about some weird speed-up when calling a subroutine (such as leaving data in registers), then you need to write directly in machine code.

  • I have a tendency to consider the x86 line to be similar to a bike, as an analogy. What would you have: a bike with training wheels, to a bike without, to one with 6 gears, 12, 18, a motor, a motorcycle, a bike with afterburners, and finally now a bike with warp engines. It does get faster, yes, but it's *still* a bike. Or you can get a car. =)
  • I did not see any significant technical error in the article, unless one counts the author's strangely negative views, which he seems to believe are dirty secrets when in fact they are well-known and accepted trade-offs. The article also was not written as well as a static Web page on a professional site should have been, particularly concerning grammar and wording errors. Most of the language errors were minor, but there was one mistake that changed the meaning of the sentence. I believe that the line that reads,

    "Compare this to the 8086 and 80286 whose 16-bit instructions could only use certain INSTRUCTIONS for certain operations"

    should instead read,

    "Compare this to the 8086 and 80286 whose 16-bit instructions could only use certain REGISTERS for certain operations."

    (I emphasized the word that should be changed by capitalizing it.)

    I very much appreciated the information and insights that the author provided in his article.

  • Intel has no idea of what the consumer wants? Heck, the consumers don't know what they want either! The consumer doesn't care how deep the pipeline is, how much L2 cache there is, or how many new instructions have been added. Only us geeks care about that stuff. Joe Consumer wants to know "How many megahurts does it have, and how much does it cost" or maybe "How many em-pee-threes can I put on it". As long as consumers have that attitude, and those ridiculous blue guys keep up the work they inherited from the bunny people, Intel will have no problem selling whatever junk they market.
  • The guy's not saying Intel sucks, period.
    I dunno, the whole "BOYCOTT INTEL AND ALL ITS PRODUCTS" thing seemed a little biased to me.

    In my opinion, this guy obviously has no clue about trade-offs in chip design, and needs to get off his soapbox and read more before writing such moronic articles. Making yourself look stupid is never a good thing. But I guess such a pompous ass as this really thinks that because he can write some assembler and do some timings, he can take on all of Intel's chip designers.
  • I'm reading a lot of ridiculous comments, most of them centered around how the PIV handles current code. Fine - don't buy one yet.
    But you're neglecting what appears to be Intel's strategy. For now, push the PIII. Probably until mid next year. By that time, the compilers will be ready and PIV will probably be at 2.0Ghz. Considering the fact that with optimized code the PIV outperforms anything out there (yes, including AMD) in many benchmarks _now_, I'd say unless AMD comes up with something fast they'll be WAY behind the performance curve by then.
    The shrink to .13 will help tremendously, as well.
  • Less biased and more accurate articles about the Pentium IV can be found at:
    www.tomshardware.com
    www.anandtech.com
  • by cperciva ( 102828 ) on Friday December 29, 2000 @12:22AM (#1414097) Homepage
    In this extremely well written and technical article...

    Yeah, right. Ok, lets address stuff in order:

    1. Prime95. Prime95 right now is optimized for current processors. The author received a Pentium 4 system a couple weeks ago, and is rewriting his code right now. When the reoptimization is completed, expect a factor of two improvement.

    2. Small L1 cache. The author seems to believe that a larger L1 cache is always good. What he fails to address is that larger caches are inherently slower, and going from a 3-cycle 16KB cache to a 2-cycle 8KB cache improves performance, given a fast L2 cache. (A rough sketch of this trade-off follows at the end of this comment.)

    3. No L3 cache. Sure this would have been nice -- but also expensive. Given the intelligence of the i850 chipset (including memory look-ahead reads) and the bandwidth of RDRAM, it isn't really necessary.

    4. Instruction decode. Hello? Anyone home? At most 1% of instructions will have to be decoded. That's the point of the trace cache. And yes, Virginia, that cache is large enough.

    5. Slow rotates and shifts. That's the price you have to pay if you want a fast clock. Variable shifts are algorithmically expensive (in fact, within a factor of log log N of multiplies, but that's a different matter).

    6. Etc. I could go on point by point, but the pattern remains. The author clearly doesn't understand the tradeoffs necessary when designing processors, and looks at one side without considering what it is being traded for.

    My opinion is that the Pentium 4 is a very well designed processor. Not only did the designers build a processor which can be run at high speeds, they allowed themselves room to add improvements later without requiring a lengthy redesign of the entire processor. High clock speeds mean that signal flight time is a problem? That's why there are two cycles dedicated to moving data across the processor. Got extra silicon? Double the number of SSE units to allow SSE instructions to complete in half the time. Decide that you want an L3 cache? Throw one on.

    Sure the Pentium 4 doesn't perform great on code not optimized for it. But neither did the 486, the Pentium, or the Pentium Pro. And which would you prefer to have right now, a 250MHz 386, or a 1GHz Pentium III?
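    A rough sketch of the L1 trade-off from point 2 above (the hit rates and latencies are invented for illustration, not Intel's actual figures): average load latency is roughly hit_rate x L1_latency + miss_rate x (L1_latency + L2_penalty), so a smaller-but-faster L1 can come out ahead as long as shrinking it doesn't cost too many hits and the L2 behind it is quick.

        #include <stdio.h>

        /* Crude average L1 load latency: hits pay the L1 latency,
         * misses pay the L1 latency plus an L2 penalty. All figures invented. */
        static double avg_latency(double hit_rate, double l1_cycles, double l2_penalty)
        {
            return hit_rate * l1_cycles + (1.0 - hit_rate) * (l1_cycles + l2_penalty);
        }

        int main(void)
        {
            printf("8KB,  2-cycle, 94%% hits: %.2f cycles\n", avg_latency(0.94, 2.0, 7.0)); /* 2.42 */
            printf("16KB, 3-cycle, 96%% hits: %.2f cycles\n", avg_latency(0.96, 3.0, 7.0)); /* 3.28 */
            return 0;
        }

    Whether the smaller cache actually wins depends entirely on how much the hit rate drops and on the L2 latency, which is exactly the trade-off argument being made against the article.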
  • I agree with the other poster. I don't really believe in boycotts; I pretty much think they are silly. That being said, I like to think I'm a pretty ardent capitalist, and for capitalism to work properly people need to find the best value for their buck, and clearly the Pentium 4 isn't it. The PIV has 3 strikes against it: it's expensive, it doesn't really outperform the Tbird (and when it does, it's not by much), and it uses the ridiculously expensive crackhead RAM known as RAMBUS. Personally I think Intel should have aborted the PIV like they did the system-on-a-chip thing and concentrated on improving the PIII, which they are, but seriously, the PIV is a big waste of resources.
  • By the way, the people who would boycott this are those who would least need to; we would be the people educated enough to know not to buy it.
  • In what ways is the Tbird lacking in performance? It beats the PIII or at least equals it in most areas, and has a badass price-to-performance ratio.
  • by ZanshinWedge ( 193324 ) on Friday December 29, 2000 @12:35AM (#1414101)
    The Pentium IV was designed and conceived to be somewhat of a different beast than the current CPU lineup. Not as different as IA-64 and all that. The current processor designs are reaching their limits in terms of speed and won't be able to go much faster than where they are now. The P4 is a different beast. It is primarily designed simply to allow for monstrously high clock frequencies. Now, making such a switch of technology and design is always difficult, and combined with some of Intel's other problems (like the stupid contract with Rambus, which is really hurting them badly all around) it makes for a rough road. This first generation of P4s quite plainly isn't much higher clocked than other chips (at most 50%), and even then it doesn't stack up well, partly due to chipset and memory problems, and it is a huge chip (physically) which costs money. Combine that with the normal amount of bugs and blunders in a new product and you get less than stellar performance.

    However, that's not the whole story. Intel has always introduced new chips, tweaked them, put production in gear, lowered the cost, then inundated the public with high-quality, high-performance, low-cost processors. I doubt the P4 will be much different. When the process change (to 0.13 micron, I believe) for the P4 comes, combined with the normal bug fixes, combined with better memory support (such as DDR SDRAM), combined with much higher clock speeds (we're talking over 2 GHz), combined with major production volumes and lower prices, the result will be a screaming fast processor that will be hard to beat. The P4's main advantage (and essentially its entire raison d'etre) is that it has a whopping 20-stage pipeline. That means one thing: you can shove gigahertz down its throat like you can't do to any other processor. Sure, the P4 may not be as "tight" and efficient as some of the other processors out now (which is why it's foolish to be an early adopter), but what it lacks in effectiveness it will eventually make up for in raw cycles. Right now (with all of the P4's flaws, including those that can be fixed, mind you) the P4 runs at maybe 80% of what the idealized speed of a PIII or Athlon would be at the same clock speed, but they expect the P4 to hit 2GHz by Q3 '01, which means you need around a 1.6 GHz proc. of the old style to keep up with it. And this assumes that some of the weak points of the P4 (most importantly, the horrendous memory system forced on it by the Rambus contract) remain, which won't be the case.

    I'm not saying the P4 will blow everything out of the water next year (it won't), but it will be fully mature and it will be leading the pack and will be very difficult to compete with.

  • by Barbarian ( 9467 ) on Friday December 29, 2000 @12:38AM (#1414103)
    1. Prime95. Prime95 right now is optimized for current processors. The author received a Pentium 4 system a couple weeks ago, and is rewriting his code right now. When the reoptimization is completed, expect a factor of two improvement.


    I'd contend that it's a fair comparison with what AMD had to put up with -- FPU benchmarks intended for two FPU pipeline chips on a three FPU pipeline system (Athlon). Were benchmarks rewritten right away? No.

  • Let's try something not so basic. This _does_ require that you buy a lot of whatever.

    >>If you buy both Sega's and Sony's systems, both companies get what they want: your money. Sega has no reason to improve its products because you already bought one; and Sony has also no reason to improve because you also bought one of theirs as well.
    You buy more of the one that seems better at the time and less of the one that seems poorer at the time.

    >>If consumers don't discriminate between quality and non-quality goods (or cheap and non-cheap goods), then no competitive situation exists.
    This requires the ability to discriminate, which is not at all simple. Even after a long time of running Windows, Linux, and FreeBSD side-by-side the evidence is essentially anecdotal.

    >>In an ideal world, people wouldn't have any consumer loyalty at all -- they'd always vote with their money and buy whatever product is the best product available.
    If it's a winner-take-all situation, the competition disappears. In particular, if it takes a second or third look to determine which is actually better, the situation is rather more complex. Competition can exist when most consumers are brand loyal if there is a vocal minority in the middle with very little brand loyalty.
  • Assembly is essentially [label] op operand with everything symbolic. Machine code is essentially executable data laid out in hex, octal, or binary as the case may be. Assembly code can be very close to machine code, or particularly with macro assemblers, very removed from machine code.
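    To make that concrete, here is a small illustration (expressed as C data so the "executable data" point is literal; the byte values are the standard x86 encoding):

        /* The same operation, seen both ways:
         *   assembly (symbolic):   mov eax, 1
         *   machine code (bytes):  B8 01 00 00 00   (B8 = "mov eax, imm32")
         * Machine code really is just data that happens to be executable;
         * an assembler's whole job is translating the symbolic form into
         * bytes like these. */
        static const unsigned char mov_eax_1[] = { 0xB8, 0x01, 0x00, 0x00, 0x00 };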
  • by Paradise_Pete ( 95412 ) on Friday December 29, 2000 @09:14AM (#1414106)
    Which is exactly why Intel wrote their own optimizing compiler.

    And once apps are optimized for the P4, every Joe Casual User will have to buy one to get decent performance. I'm sure that's just a coincidence, though.

  • Open Source is starting to look better and better.
  • Is there an AMD Dell? Compaq Deskpro? HP Vectra? IBM PC series? No.

    With the exception of the Pentium 4 and the 1.3 GHz P3s, which are still not really out, Intel has had few shipping problems. Millions of Celerons and P3s have shipped on time. AMD has always had difficulty shipping enough chips on time at the right price. Their new Dresden plant is the only exception.
  • I only had to get about halfway through this desperate, petulant rant before I realized I could stop reading. I'm no fan of Intel's, but I recognize yellow writing when I see it. Two things threw everything else he might have to say into suspicion.

    First, that Intel and Dell's stock was trading down 50% due strictly to the P4. As we all know, this is not even unusual among tech companies lately. Tech stocks have dropped tremendously, and trying to use Intel's stock performance as a gauge of the technical prowess of the P4 is just ludicrous. For example, try this: "AMD's stock price has dropped from a high of 48 1/2 this year to its current price of 13 7/8, a drop of almost 70%! Obviously their poor technology is hurting them!" No way.

    Second, he claims the Pentium II was remarketed as the Pentium III to differentiate it from the Celeron-A, and that there was no other difference between the chips than the CPU ID. Utterly false! Game developers will be the first to point out that the P3 introduced the Katmai New Instructions, subsequently renamed to SSE (Streaming SIMD Extensions). These instructions are crucial to speeding up 3D vector operations, and they made a huge leap in performance possible. In the time since, transform and lighting code for games has moved onto the graphics board, but those without hardware T&L remain competitive by using these instructions in their drivers. Hell, even NVidia boosted the performance of their older boards with these instructions! I'm not saying Intel has anything on AMD, since AMD got 3DNow instructions out in the marketplace first, but to say the P2 and P3 were identical is just a load of horseshit.

    At that point I felt any specific example he threw in would be a case that he himself had trouble with, and not give any indication of where the P4 might help people doing things differently. The chip may have its flaws, but this is definitely the wrong guy to be dissecting it.

  • There are anti-trust laws governing these console game makers. They would be happy to give the hardware away for free and sell you the games at $100 a pop.

    So you're saying it would be illegal for Sony to give away its consoles? Are you saying they can't charge $100? Just what are you saying? I think you just made that up.

  • I *hate* it when people write articles like this that are totally unwarranted.

    I have a 1.6GHz P4 computer system (prerelease, not overclocked) and an IA64 system here at work, and as *Tom's Hardware Guide* clearly points out here in their *latest* comparison:

    "Pentium 4 beats Athlon by quite a long shot. Only in a clock-for-clock comparison of Pentium 4 1.5 GHz and Athlon 1.466 GHz Athlon can reach the same scores as Pentium 4."
    That means the P4 1.5 ran at 1.7+ GHz without issue while the 1.2 GHz Athlon could only reach 1.466 GHz.

    And *most importantly* the P4 is at the beginning of their production run, while AMD is straining their current clock speeds. 1.8 and 2.0GHz P4s will be out pre-fab within months, and AMD is stressing their line to do 1.2.

    See for yourself [tomshardware.com]

    So *please* don't flame Intel needlessly unless you have hard evidence.

    As well the IA64 architecture is *awesome*. 128 64-bit general purpose registers, an additional 128 64-bit floating point registers, and much much more. The coding that I am doing runs like 10x faster on a 666MHz IA64 than it does on a 800MHz PIII (literally!).

    I don't mean to flame, but this type of I-am-going-to-spread-biased-misinformation-because-I-like-AMD campaign really ticks me off.
  • > 2. Small L1 cache. The author seems to believe that a larger L1 cache is always good. What he fails to address is that larger caches are inherently slower, and going from a 3 cycle 16KB cache to a 2 cycle 8KB cache improves performance, given a fast L2 cache.

    You can't really make blanket statements like that, any more than what you are accusing the author of doing.

    Which is better depends on the miss rate and miss penalty, as well as the speed of the L1 cache. And of course the miss rate depends on what software you're running, as well as the size and organization of the cache.

    If you know all the variables then you can run up the numbers, but without them you can't really make too many blanket statements.

    Or you can look at benchmarks, or (best of all) you can try the systems side by side and see which really works for you, and whether the faster one is worth the extra cost, if any.
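    For what it's worth, the textbook way to run up those numbers is average memory access time = L1 hit time + L1 miss rate x L1 miss penalty. A rough sketch, with the miss rates and L2 latency invented purely for illustration:

        #include <stdio.h>

        /* AMAT = hit time + miss rate * miss penalty.  All figures below are
         * assumptions for illustration only; the real miss rates depend
         * entirely on the workload, which is the whole point. */
        int main(void)
        {
            double l1_miss_penalty = 10.0;                   /* assumed cycles to reach L2 */
            double amat_16k = 3.0 + 0.05 * l1_miss_penalty;  /* 16KB, 3-cycle L1, 5% misses */
            double amat_8k  = 2.0 + 0.08 * l1_miss_penalty;  /* 8KB, 2-cycle L1, 8% misses  */

            printf("16KB/3-cycle L1: %.2f cycles average\n", amat_16k);  /* 3.50 */
            printf(" 8KB/2-cycle L1: %.2f cycles average\n", amat_8k);   /* 2.80 */
            return 0;
        }

    With those made-up numbers the smaller, faster cache wins; nudge its miss rate or the L2 latency up and it loses, which is exactly why blanket statements don't work here.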

    > 4. Instruction decode. Hello? Anyone home? At most 1% of instructions will have to be decoded.

    I didn't read the article (don't do registrations, free or otherwise), but if you and the author are using standard terminology, then every instruction has to be decoded. "Decode" just means looking at the bits in the instruction and deciding what to do. Every processor has to do this on every instruction, and the fact that it's a decision process means that bits have to ripple through gates, which in turn means that time is consumed. Its complexity can indeed be a factor in a processor's speed.

    --
  • * The 8088 had an 8-bit bus, while the 8086 used a 16-bit bus. The 8088 had less pins, and was considerably cheaper.

    I repeat again: The 8086 and 8088 were both 40-pin devices!

    There was no savings of PINS or CHIP SIZE; perhaps a bit of die was saved since you didn't need bidirectional drivers on 8 more of the address lines but the chip size and pin counts were identical!

    Pinouts were different, as was mentioned several times in this thread...

  • OK, speaking as a software developer here, and someone who uses a P4 regularly...

    According to our own software, a 1.5 GHz P4 clocks in at just over a 1.1 GHz PIII. Not too bad in absolute terms, though there's no doubt the TBird kills it in price/performance, especially when the whole system price (including RAM) is considered. Still, I'm not ashamed to have one on my desk, I just don't want to be the one paying for it. Nothing new there - the Pentium Pro sucked at 16 bit software and cost far more, but it (and the P6 core) were still very successful.

    The P4 has two decent advantages - RAM bandwidth (for those who need it), and SSE2, which is finally really useful to me. I can double and sometimes even triple the performance of all my MMX code, and that easily outstrips the Athlon. This won't apply to most code, true, but it sure makes a difference to my software.
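    For readers wondering where the doubling comes from: SSE2's XMM registers are 128 bits wide versus MMX's 64, so each instruction touches twice as many packed elements. A minimal sketch using the standard intrinsics (the function and array names are invented):

        #include <emmintrin.h>   /* SSE2 intrinsics */

        /* Hypothetical inner loop: add two arrays of 16-bit samples.
         * Each SSE2 add handles eight 16-bit elements, versus four per
         * MMX add -- hence the potential 2x on this kind of code. */
        void add_samples_sse2(short *dst, const short *a, const short *b, int n)
        {
            int i = 0;
            for (; i + 8 <= n; i += 8) {
                __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
                __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
                _mm_storeu_si128((__m128i *)(dst + i), _mm_add_epi16(va, vb));
            }
            for (; i < n; i++)               /* scalar tail */
                dst[i] = (short)(a[i] + b[i]);
        }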

    However, 95% of all my customers don't use P4s, or even Athlons - they use dual PIIIs. Two 900 MHz PIII chips beat any P4 or Athlon system comfortably, and still don't quite break the bank :-) This, and only this, is what has kept my customer base loyal to Intel while the Athlon has been storming the castle.

    Biggest flaw in the P4? No SMP! I still can't believe it. Their one big advantage over AMD in the higher end systems, the one they've been pushing to all their workstation customers, and the P4 WILL NOT DO IT. And now, of course, when AMD are finally on the verge of releasing their SMP chipset (can it be true?), Intel neatly snatch defeat from the jaws of victory, letting AMD through the gate, and locking themselves outside...

    Of course, there's still the Foster, AKA P4 Xeon. It will do dual, quad and 8-way systems, and this promises to be the ultimate system for my software (I use a dual Foster too, and it is nice, no question). But at what price? It's bad enough that my customers have to mortgage their homes for 1 GB or 2 GB of Rambus RAM, but having to pay Xeon-level prices for a dual system as well is going to drive them into the welcoming arms of a waiting DDR dual Athlon.

    Guess which system I'll be buying next for myself.

    Namarrgon

  • Yeah, right. Ok, lets address stuff in order:

    1. When the reoptimization is completed, expect a factor of two improvement

    Expectations are still estimates. The modified MPEG4 FlasK encoder numbers are certainly more accurate, and they aren't exactly promising.

    2. that larger caches are inherently slower

    While a larger cache isn't always good, it only has to be better most of the time to be a net performance win.

    3. Slow rotates and shifts.

    He wasn't arguing that Intel's shift/rotate unit was sub-par, but that using it in a solution for a partial register stall was a step back. Of course he uses the magic words "certain" and "can", but that looks like why he talks about slow shifts and rotates as a problem.

    I agree that the PIV overall is well designed for future expansion (scalability, and the L3 cache), but these things aren't here yet. By the time we have compilers optimizing for the PIV and the option of ordering our PIV + L3, both AMD and Intel will be pushing the next level of chips.

  • Did you actually *read* the *whole* article, or did you just read the first and last paragraphs?

    "Die size" has nothing to do with the issues he presents. If you can refute the claims he makes in the *middle* section of the article, by all means do. If not, shaddap and siddown!
  • Variable shifts are algorithmically expensive
    Excuse me? What is algorithmically difficult about a shift? A three-year-old can design a variable shift that takes one cycle. It takes a bit of silicon, but it really pays off since you need shifts all the time (math, address calculation, ...)
  • Sure, a three-year-old can make a variable shift that takes one cycle. In fact, EVERY boolean circuit can be made to run in one cycle. What most three-year-olds don't realize is how long they have to make that one cycle.

    It may not be obvious to someone who's had one semester of logic design that the speed of a boolean circuit in real silicon isn't just a function of its depth. Issues like fan-out and trying to implement the circuit on a plane, etc. end up killing you for larger circuits. A naive, two-level circuit, though it has minimal depth, isn't necessarily the fastest in real silicon.

    Consider something as simple as the parity function. It can be shown that the size of a constant-depth boolean circuit implementing the parity function grows exponentially with the number of inputs. This is a big problem in that your inputs will be forced to each drive an ever increasing number of gates as the number of bits increases. At some point you have to alter the electrical characteristics of the circuit (ends up making it slower) or add drivers (ends up slower).

    Suppose instead you allow the depth of the circuit to increase. Now the number of gates you need grows linearly with the number of inputs rather than as 2^n. It gets even better. Every doubling of the number of inputs only adds one more level to the depth of the circuit.
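    A quick sketch of that second circuit, modelled in C (the tree structure is the point, not the code itself):

        /* Log-depth parity: reduce pairwise with two-input XORs, level by level.
         * An n-input tree uses n-1 XOR gates and about log2(n) levels, so the
         * gate count grows linearly and doubling n adds only one more level. */
        static int parity_tree(int bits[], int n)
        {
            while (n > 1) {
                int i, half = (n + 1) / 2;
                for (i = 0; i < n / 2; i++)
                    bits[i] = bits[2 * i] ^ bits[2 * i + 1];   /* one XOR gate */
                if (n & 1)
                    bits[n / 2] = bits[n - 1];                 /* odd input passes through */
                n = half;
            }
            return bits[0];
        }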

    Exercise for the reader: in what way does the arrangement of the drivers added to the first circuit resemble the arrangement of the gates of the second circuit?

    What bothers me the most is the contemptuous tone you used in replying to cperciva. He didn't deserve it.

  • by mav[LAG] ( 31387 ) on Friday December 29, 2000 @03:21AM (#1414128)
    Pity - because it's well written, technically sound and has the kind of insight you only get from years of programming in assembly language on different generations of processors. It's not all anti-Intel. He also gives credit where credit is due - the design of the 386 and 486 chips, for instance.

    A good editor would have removed the BOYCOTT ALL INTEL stuff or at least moved it down a bit. But I feel for the author here: he paid $4000 for a system which isn't as good as a (much) cheaper Athlon.

    Crusoe watchers take note: there's a nice little summary of the Crusoe's performance and why he's very impressed with that CPU's architecture. That summary alone is worth reading.

  • I'm stuck in an endless loop trying to parse that last sentence. I think it makes sense though. Now if only I can break out of it by lunchtime...
  • We can look at this with SPEC2000.

    1.5 GHz Pentium 4:

    SpecINT2000: 536
    SpecFP2000: 558
    System price: $2,000

    833 MHz Alpha 21264:

    SpecINT2000: 544
    SpecFP2000: 658
    System price: $8,000 (???)

    1.2 GHz Athlon:

    SpecINT2000: 458
    SpecFP2000: 350
    System price: $1,500

    So what is the FACTUAL basis that The Pentium 4 is slow and/or overpriced?
  • The truism that NYT is the standard bearer for print media still holds, I believe, so consider this from the article linked in the blurb:

    Clearly, the Pentium 4 is all about the future. For example, the chip can understand 144 new audiovisual software instructions -- in fact, it can process several of them in a single gulp.
    Unfortunately, that powerful acceleration technique will lie untapped until Windows programs are rewritten to take advantage of it. [emphasis mine]

    Case in point that the open-source movement hasn't gone far enough in educating the reporters. Sure, blather technobabble all you want at them, and they'll glaze as surely as I have today here at work. But to get them to preach your stuff, you've got to make them understand that Windows isn't the only solution out there.


    --
  • Go back and actually read the article. Intel cut parts of the silicon that would actually improve performance (the second FPU), yet left in silicon that is never used (the extra double-speed ALU) because of bottlenecks caused by cuts earlier in the pipeline (the single instruction decoder and the low-throughput trace cache). If you can get past the author's bone to pick with Intel in the first half of the article, he does make very valid points.
  • What do you need a 2GHz or a 2.5GHz processor for?

    It is needed for work and for play:
    At work:

    • To route PCBs, simulate heat dissipation and other modeling applications (not only nuclear explosions)
    • To run 3D CADs (ProEngineer), real-time photorealistic renderers (Alias)
    • To process images in publishing industry, sounds and videos in entertainment
    • To run SSL-enabled Web servers
    At home:
    • Play highly compressed audio and video streams
    • Encode video and audio
    • Play games
    There are more uses for high performance, but that gives the idea.
  • Excuse me? What is algorithmically difficult about a shift? A three-year-old can design a variable shift that takes one cycle. It takes a bit of silicon, but it really pays off since you need shifts all the time (math, address calculation, ...)

    If it is so easy, show me a variable shift which takes less than O(n log n) transistors and O(log n) stages. I sure can't work out how to do it.
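    For reference, the standard construction being alluded to here is a logarithmic barrel shifter: log2(n) stages, each a row of n muxes, so it meets those O(n log n) transistor / O(log n) stage bounds rather than beating them. A rough C model of the 32-bit case:

        #include <stdint.h>

        /* Model of a 32-bit logarithmic barrel shifter: five stages, each a row
         * of 32 two-input muxes that either passes the value through or shifts
         * it by a fixed power of two, selected by one bit of the shift amount. */
        static uint32_t barrel_shift_left(uint32_t value, unsigned amount)
        {
            amount &= 31;                    /* x86-style: count taken mod 32 */
            if (amount & 1)  value <<= 1;    /* stage 0: shift by 1  */
            if (amount & 2)  value <<= 2;    /* stage 1: shift by 2  */
            if (amount & 4)  value <<= 4;    /* stage 2: shift by 4  */
            if (amount & 8)  value <<= 8;    /* stage 3: shift by 8  */
            if (amount & 16) value <<= 16;   /* stage 4: shift by 16 */
            return value;
        }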
  • Look how much the gap is closing...
    I remember times when the Alpha was more than 3 times faster than anything Intel would build.
    It also had the highest clock speed (150 MHz vs. 66 MHz for Intel).

    But the situation has changed!
    The following figures are Spec95_int and Spec95_fp:
    AMD Athlon/650 MHz ---> 29.4 - 22.4
    Alpha 21264/667 MHz ----> 32.1 - 49.0
    So at the same clock speed, the Athlon matches the Alpha on integer but is half its speed on floating point.
    But that's at the same clock. The Athlon reaches much higher frequencies, so the gap in fp is very small and the Alpha is beaten in int. And the price difference is massive.
    I have an Alpha server, a dual UltraSPARC workstation and a whole bunch of PCs; believe me, the speed is about the same for 1/10th of the price.

  • Yeah, I would say so too, especially considering the P4 isn't THAT bad of a chip. It may not have the FPU muscle of the TBird, but if a program is optimized correctly for SSE2, then it's actually a quite powerful chip. I think Intel's biggest mistake recently was adopting Rambus, which is what caused a lot of the mess with various chipsets for the CuMine. I wouldn't say that the P4 is the best chip ever, because it has its problems, but it's an OK design, and by Q2 2001, we should see some nice ~2 GHz chips rolling out, while AMD is most likely stuck at ~1.5. I don't think the P4s are worth the money at the moment, but they should be an OK buy around August 2001. I, for one, am going to buy a dual TBird 1.2 GHz asap....
  • In my mind, this is indeed a very significant flaw of the P4 that this article overlooks. After running a dual Intel Celeron SMP box for several years now, I'm not really excited about upgrading to an expensive uniprocessor P4. However, if AMD releases their SMP chipset, I would be very excited to upgrade to dual Athlons.
  • Hey!! M$ Does suck.
    It won't support my hardware. That's why it sucks.
    It won't support my CPU either. Dunno why M$ doesn't support Sparc. But THEY SUCK. The wintel P4 Sucks too.

  • Writing optimizing compilers is a very hard task, and almost all code is still compiled with compilers optimizing for the 486 (gcc anyone?).

    Which is exactly why Intel wrote their own optimizing compiler. They're even writing a Linux version, which is supposed to be undergoing a public beta test in January.
  • Well, that was a very very good read. Take it with a very very small amount of salt. I mean, he did forget to mention a huge problem [intel.com] with the P4.

    I mean, how bad are things getting at Intel???
  • by Yu Suzuki ( 170586 ) on Friday December 29, 2000 @02:30AM (#1414179) Homepage
    I think you must have dozed off during Econ 101... under basic capitalistic theory, competition improves quality and/or reduces price because two companies are competing for the same dollar. For example, if Sega wants you to buy their nintendo system (and not Sony's), they'll try to make their system more attractive -- perhaps by offering better games, or selling it for less. Sony, of course, will try to get you to buy their nintendo (and not Sega's) by doing the same thing. The result? You're offered a better selection because both companies are now putting out improved products.

    Buying products from everyone doesn't accomplish this. If you buy both Sega's and Sony's systems, both companies get what they want: your money. Sega has no reason to improve its products because you already bought one; and Sony has also no reason to improve because you also bought one of theirs as well. If consumers don't discriminate between quality and non-quality goods (or cheap and non-cheap goods), then no competitive situation exists.

    So if you really want to see forward progress, don't support both. Support whichever one is putting out the product you believe is most worthy of success. If you like Sega's system better, buy it; now you're giving Sony an incentive to make its system more attractive to you by being more like Sega -- which is good for you! And if you like Sony's better, buy it and give Sega an incentive to do business like Sony.

    Of course, competition also requires consumers not to be very brand loyal. A lot of die-hard Linux or Windows users would be reluctant to switch operating systems even if they'd be happier with the other one. So, there's no harm in changing your "loyalty" and finding a new "adversary" (as you put it) to go up against. In an ideal world, people wouldn't have any consumer loyalty at all -- they'd always vote with their money and buy whatever product is the best product available.

    Yu Suzuki

  • by Fervent ( 178271 ) on Thursday December 28, 2000 @11:17PM (#1414186)
    Out of curiosity, does anybody else notice the underlying psychology of this, and many other news posts on Slashdot? I'll put it simply: the tech community always looks like it's looking for someone to blame.

    No, the PIV is not a great chip. Hell, it's not even a good chip. But once AMD got onto the scene, it looked like we were itching and scratching to find a way to go against the "bigger company" (Intel, Microsoft, and now RedHat notwithstanding). In 6 months, we'll have a whole new "adversary" to rile up the tech community.

    Enough is enough. Yes, the PIV has flaws. Every chip has flaws. You pay extra to get just a smidgen more performance, but that's why AMD is referred to as the "price/performance leader".

    However, if we don't root for Intel, and AMD suddenly takes over, who wouldn't put their money down that we'll turn against AMD next? I say support both (I use the same mentality in buying a Sega Dreamcast/PS2, and boxed distros of Linux and Windows 2000). Without competition on both sides, even "the Man's", there will be no forward progress.

  • Which article were you reading? There were statistics (including cycle counts), comparisons of compiled code, and in-depth reasons for the points that were made. I am not a processor guru and so I'm not sure if they were all good reasons, but there was a large amount of technical backup for the claims that were made. Did you not read past the first section (anti-Intel invective) or the second section (a brief history of PC microprocessors)?

    True, the anti-Intel bias was a little disconcerting, but that's because I think you should separate out the technical arguments from the name-calling, and consolidate all of the "boycott Intel" and "Intel engineers are idiots" at the end. Others feel differently, apparently :)

  • first, you're comparing a 1.5 ghz Pentium 4 with rambus ram against a 1.2 ghz athlon thunderbird with sdr sdram when most 1.2 ghz athlons would probably be paired with ddr sdram. so the comparison looks more like:

    cpu:              specint   specfp
    amd 1.2 ghz ddr     496       420
    amd 1.2 ghz sdr     458       350
    intl 1.4 ghz        536       558
    also, did you notice that the pentium 4 machine had a top of the line hard drive (ibm deskstar 75gxp) and video card (geforce2gts) whereas the amd machines used an older ibm hard drive and a diamond stealth 3d pci(WTF?!!?) on the ddr machine and a western digital hd + nvidia tnt2 m64 on the sdr machine? or how about the fact that all the tests were done with an intel compiler????

    Then there's the system prices. I have no idea where you got these prices, but assuming all 3 systems use the same components except cpu+mb+ram, the prices would probably look like:

    amd 1.2 ghz cpu: $300?
    intel 1.4 ghz cpu w/128 rdram bundle: $1165
    sdr ram, 128 MB: $56
    ddr ram, 128 MB: $200?
    asus p4 motherboard: $302
    asus amd sdr motherboard: $140
    amd ddr motherboard: $200?
    (all prices taken from mwave.com, ddr prices estimated)

    so, putting the cpu+mb+ram together, the costs are:
    amd sdr: $496
    amd ddr: $700
    intel p4: $1467
    so based on these figures, the p4 is OVERPRICED!


    Zetetic
    Seeking; proceeding by inquiry.

    Elench
    A specious but fallacious argument; a sophism.
  • ... A lower cost and slower variant, the 8088, was also used in some early PCs, providing only an 8-bit bus externally to limit the number of pins on the chip.

    If I'm not mistaken, the 8086 and 8088 were both manufactured in 40 pin ceramic (and later plastic) DIP packages. There was no reduction in pin count but rather in internal drivers.

  • First, I'd be extremely interested to see what this guy's credentials are - it's interesting to see him take the entire Intel CPU design team to task over this.

    He's not taking the Intel engineers to task. He's taking the Intel marketing people to task. What, you think the Intel engineers didn't want a larger L1 cache, more execution units, etc.? Of course they did. But more silicon = higher costs, so you can bet that it's the marketing guys who lopped off all that extra silicon.

    I would bet you that the actual Intel engineers who designed the chip would probably agree with most of this guy's points!


    http://www.bootyproject.org [bootyproject.org]
  • Hmm, after reading the opening paragraphs, full of over-the-top language, including the demand to BOYCOTT ALL INTEL PRODUCTS (caps used by the original author...), I kinda lost interest.

    Although I'm sure the author knows a lot about processors, he is so obviously biased against Intel (and towards AMD) that getting any information from this article is like learning about Linux from Microsoft.

    What this guy needs is a good editor, and perhaps a few chill pills...

  • by Anonymous Coward
    You don't have to know his credentials, because he includes specific examples of common code that executes slower on the P4, and then describes the architectural features that lead to it. He backs up pretty much every claim he makes, so you are free to draw your own conclusions of the veracity of his assertions.

    Or would you rather just accept (with no evidence) an Intel engineer telling you "the p4 rocks, buy one today?"

"If I do not want others to quote me, I do not speak." -- Phil Wayne

Working...