
Can SSE-2 Save the Pentium 4? 171

Siloh writes "Ace's Hardware has posted a Floating-Point Compiler Performance Analysis which, in a nutshell, tests Intel's most important claim about the Pentium 4: 'It does not reach its full potential with today's software, but with future software (including SSE-2 optimizations) it will outclass the competition.' They test with floating-point benchmarks which have been recompiled on the latest Intel and MS compilers." Basically, another iteration of the question: can the P4 dethrone the Athlon?
This discussion has been archived. No new comments can be posted.

  • This reminds me of some comments made by the Great Carmack. Games written with SSE or 3DNow! optimizations didn't really benefit from the extra code. What made games go faster was having those optimizations built into the video card's drivers.
  • by Anonymous Coward
    * Of people making line graphs instead of bar graphs
    * Of people who don't understand most of what they write, so they dump all the data instead of focusing on the important parts
    * Of over-verbose hardware sites that make you scan 5 pages before getting to the (rotten) beef
    * Of clueless people who pretend to be surprised when optimizing compilers on very specific code can get a 240% speedup.

    Btw, this sort of shit reminds me of someone:

    "Paul Hsieh, our local assembler guru, analyzed the assembler output of the SSE-2 optimized version of Flops. He pointed out that "some of the loops are not fully vectorized, only the lower half of the XMM octaword is being used." In other words, SSE-2 instructions which normally operate on two double precision floating point numbers are replacing the "normal" x87 instructions and are only working on one floating point number at time."

    Anyone think that "Paul Hsieh" == "Bob Ababooey" ?


  • by Anonymous Coward
    It makes no sense to look at flops/GHz. The P4 design, via its longer pipelines, *intentionally* sacrifices flops/GHz so that the chip can run at a higher clock rate.

    The only sensible metric is performance at the available clock speed, which for the P4 is higher than for the Athlon.

    If I had a CPU that achieved 5000 flops/GHz but only ran at 1 MHz, would you want it, or would you want the 1.5 GHz P4?
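    The hypothetical above is easy to check with a couple of lines of arithmetic: total throughput is per-clock efficiency times clock rate, so a spectacular flops/GHz figure means nothing at 1 MHz. A quick sketch (the numbers are the ones quoted in this thread; Python is used purely as a calculator):

```python
def throughput(flops_per_ghz, clock_ghz):
    # Total benchmark score = per-clock efficiency x clock rate.
    return flops_per_ghz * clock_ghz

exotic = throughput(5000, 0.001)  # the 5000 flops/GHz chip at 1 MHz
p4 = throughput(760, 1.5)         # P4 figures quoted elsewhere in this thread

print(exotic)  # 5.0 -- spectacular efficiency, useless speed
print(p4)      # 1140.0
```

    Which is the parent's point: efficiency per clock only matters multiplied by the clock you can actually ship.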
  • by Anonymous Coward on Friday June 29, 2001 @06:49AM (#120145)
    SSE-2 will be nice, but the problem with Intel is that they have fallen behind AMD in the CPU wars. Their stock price [] is only one of many indicators that they have made several bad business decisions in the past few years, and those decisions continue to haunt them and give AMD a leg up on the market. Consider:
    • The RAMBUS mess. They tried to leverage their chip/chipset monopoly to control the RAM market through large investments and contracts with RAMBUS. Now RAMBUS is on the brink of death and Intel has lost.
    • The IA-64 disaster. It's hard to launch a new architecture, and even harder when you keep prices high and don't put enough chips in the hands of developers.
    • The uniprocessor-only P4. Intel spent years perfecting SMP on their earlier processors, and for what? So that AMD could beat them to the punch, running 1.4 GHz CPUs in SMP mode. Intel also embraced the slower-but-cheaper shared memory bus architecture, which is going to kill SMP performance in comparison.
    • Unwise investments. Intel has invested in several dot-coms that are dying or dead already. Intel Capital hasn't been profitable since FY 1999 because they have sunk billions into companies like VA that could never hope to turn a profit.
    Intel still has potential but they will need to get their act together if they want to start competing with AMD again.

    -A former Intel employee

  • by Anonymous Coward on Friday June 29, 2001 @07:13AM (#120146)
    Look at the final results:

    bestover2.gif []

    Now look at the place where the P4 shows the most improvement over the Athlon: the first data point, Flops 8, with the P4 using the Intel compiler and the Athlon using Microsoft's.

    From the graph, the Pentium 4 clocks in at about 1140 flops while the Athlon gets only 900 flops.

    But wait! We're forgetting something. You're running the Pentium 4 at a faster clock speed! For the love of crumbcake, normalize those values for clock speed, please!

    Pentium 4: 1140 flops / 1.5 GHz = 760 flops/GHz
    Athlon: 900 flops / 1.2 GHz = 750 flops/GHz

    Now things are a bit more fair. Yes, with the absolute latest compiler from the maker of the processor, the Pentium 4 beats the Athlon in one of eight tests by a measly ten flops per gigahertz. With the latest compiler from some big software company, the Athlon beats the Pentium 4 in the other seven categories, hands down.

    Don't believe everything you read.
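    For what it's worth, the parent's normalization checks out; here's the same arithmetic written out (scores are read off the graph, so they're approximate):

```python
# Numbers read off the bestover2.gif graph, per the post above.
p4_score, p4_clock = 1140, 1.5            # flops, GHz
athlon_score, athlon_clock = 900, 1.2     # flops, GHz

print(round(p4_score / p4_clock, 1))          # 760.0 flops/GHz
print(round(athlon_score / athlon_clock, 1))  # 750.0 flops/GHz
```

    (Whether normalizing by clock is fair at all is argued about elsewhere in this thread, since the P4's deep pipeline exists precisely to buy clock speed.)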
  • The P4 DDR chipset (i845) doesn't perform anywhere near as well as their RAMBUS ones, and the DDR mode in that chip won't be working for several months. Word on the street is that VIA's upcoming P4 DDR chipset is a pretty good performer, but nothing much has been published on it, VIA doesn't have a clear license on the bus, and VIA chipsets are often buggy. So really, they won't be having DDR chipsets any time soon, and chances are the chipsets will be terrible performers. AMD has to watch out for a drop in RDRAM prices more than anything else.
  • Europeans don't build smaller-engine cars to be more energy efficient, they build them because many EU countries tax engines by the volume they take up. So they make smaller engines but with higher compression ratios, so they end up being about the same in efficiency.
  • True the Athlon is faster on a per clock basis, but it's a fair comparison to compare the fastest Athlon vs. the fastest P4 since they're both obtainable (although actually the fastest is 1.4 Athlon, 1.7 P4, so we're comparing 1 speed grade down or so).
  • and SMT can take full advantage of the smaller RAMBUS latencies

    huh? Rambus has much higher latencies than SDRAM. That is why a P3 with PC133 SDRAM outperforms the same P3 with Rambus on most benchmarks. Since you got this part wrong, I take it the rest of your post should be taken with a grain of salt as well.

  • The only thing that separates the Itanium from the rest of the pack is its FP performance. If the P4 gets better FP performance, it'll show the results of the multi-year Merced project for the dog it really is.

    The 800 MHz Itanium has the same SpecInt performance as an 800 MHz PIII... if the 1.7 GHz P4 got only 20% faster SpecFP performance, it would match the Itanium in SpecFP to go with its already 50% better SpecInt performance.

    Yeah, I know the Itanium is only at 800 MHz, but Intel needs to keep cranking out P4s to fend off the Athlon - they can't afford NOT to release new chips even if the 2 GHz P4 shames their new "top-o-the-line" server chip.

    Sure the Merced has a better box around it and huge amounts of onboard cache, but given the same surroundings the P4 would make their VERY expensive "server" chip look pretty bad...

  • by washort ( 6555 ) on Friday June 29, 2001 @06:53AM (#120152) Homepage
    Why wouldn't Intel be doing stuff like putting SSE-2 optimisation code into gcc so that all us hacker-types would have a _reason_ to pick the P4 over the Athlon? I know they have their own compiler but to the best of my knowledge it's not free (or at least it's not in Debian... ;-)
    Just seems odd that they'd pass up the opportunity for something like that. *shrug*

  • Yes, odds are it will be an Intel, but AMD is looking like it's going to have 30% marketshare this year.

    And the difference in processors doesn't change what you can do with the computer (while things like changing OS does). The better analogy here is Dell beating Compaq which beat IBM.

    Even the suits listen when you say "This runs everything the Intel does, as well as the Intel does, for less" enough times
    Steven E. Ehrbar
  • by JoeBuck ( 7947 ) on Friday June 29, 2001 @08:01AM (#120154) Homepage

    It is ignorant to argue that you should normalize for clock speed. The Pentium 4's deep pipelines are present precisely so that the chip can be run at a faster clock speed than otherwise.

    With the exact same technology, same fabs, you can't make the Athlon run at the same clock speed as the Pentium 4.

  • Wow. This is completely unlike Java. Microsoft is really innovating here. Just think how fast interpreted code could run if you optimize the interpreter. I wonder why Sun hasn't thought of that? I'm going to send them an e-mail right now with my suggestion.
  • Ah, for fuck's sake. My article wasn't a troll. It was either sarcasm, or if your sarcasm detector was broken, I suppose it could pass for a flame.

    But a troll? Come on. (eyes roll)
  • by samael ( 12612 ) <> on Friday June 29, 2001 @07:03AM (#120157) Homepage
    It occurred to me a while back that .NET will affect this immensely.
    Consider: .NET compilers compile to an intermediate code level that isn't actually transformed into machine code until the program is run for the first time on the target machine.
    This means that all you have to do to get the most out of your machine is make sure you have the .NET IL->machine code compiler for your specific CPU, and all .NET code will be totally optimised for _your_ CPU.

    Of course, this also means that you don't need to recompile to work on any CPU that has the CLR available on it, which makes transferring to IA64 (or any other architecture) a lot easier.
  • Remember when software was labeled "requires IBM or 100% compatible PC"?

    Just in the main stream, how many variations are we now or soon facing?

    Pentium w/ MMX is the lowest common denominator...
    Intel's SSE instructions
    AMD's 3D-NOW!
    Aren't there separate instructions in the Athlon, like 3D-NOW2, or something?
    Now we're heading towards two different x86 64-bit implementations (yes, IA-64 isn't actually x86 anymore, but since they're bolting an x86 processor onto the silicon as well, it may as well be counted as one)...

    Either developers will continue as they've been doing, writing software for the lowest common denominator (which makes all of Intel's and AMD's attempts to add features to their processors useless efforts that ultimately just cost us more money, since they can't manufacture as many chips per wafer), or else we're going to start seeing "Windows/Pentium 4", "Windows/AMD", "Windows/64-bit AMD" and "Windows/Itanium" sections in CompUSA and such....

    And before the obligatory comment arrives, I'll state that no, I really would not like to compile my own software, which would be possible if everything in the computing world was open source/GPLed/etc...
  • One thing I don't see mentioned here is what degree of precision SSE-2 has. I'm guessing that it only works on 32-bit floats.

    The SSE instructions on the P-III operate on 32-bit floats, while the x87 FPU instructions work on 80-bit floats (you can load 32-bit, 64-bit and 80-bit floats into the FPU registers, and they are all expanded to 80 bits). Intermediate FPU results are computed/stored as 80-bit values. For SSE I believe (I could be wrong) that everything is 32-bit, both internally and register-wise.

    For scientific and engineering work, 32 bits of floating point (7-8 digits of precision) just doesn't cut it. Most people I know doing that kind of work on a PC (well, both of them) use the FPU but not SSE for that reason. They have apps that take days to perform a single calculation - lots of time for accumulated precision errors to become a factor.

    32-bit floats are currently enough for most 3D-graphics work (at PC resolutions), and those games ^h^h^h^h^h apps are probably a bigger consideration in driving mainstream CPU development. Given that the SSE/2 instructions have multiple math units to perform ops in parallel, there has to be a big transistor savings in having less precision.

    I would bet that the FPU floating point precision on those Sun, Irix, and Alpha boxes is higher than 32-bits.
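    The accumulation worry is easy to demonstrate. A small sketch (single precision is emulated with Python's struct module, since Python floats are 64-bit doubles; iteration count picked just to keep it quick):

```python
import struct

def f32(x):
    """Round a Python float (64-bit) to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Add 0.1 one hundred thousand times; the exact answer is 10000.
acc32, acc64 = 0.0, 0.0
tenth32 = f32(0.1)
for _ in range(100_000):
    acc32 = f32(acc32 + tenth32)  # every intermediate rounded to 32 bits
    acc64 += 0.1                  # intermediates kept at 64 bits

print(abs(acc64 - 10_000))  # tiny: double precision barely drifts
print(abs(acc32 - 10_000))  # orders of magnitude larger in single precision
```

    Long-running simulations do many orders of magnitude more operations than this, which is exactly why the intermediate precision (80-bit x87 vs. narrower SIMD) matters to the people described above.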

  • 64-bits, Cool. Hey, I said it was guess. :-)

    For 3d apps that's an interesting trade off: More precision at 2 data items or more throughput at 4 data items.

    That still doesn't invalidate the point about precision for scientific and engineering applications, and understanding that it may be a factor in deciding what systems to run said apps on.

  • Actually, yes, I have. At my current place of employment, we use four quad-Xeon 650s with 2 gigs of RAM apiece, each with an Adaptec RAID controller with 128 MB of memory. They grind to a halt and are barely usable, but much like your situation, that is what we have to use. Sounds like you, along with some of what we do, are using the wrong hardware for the job. Another division has one Sun Enterprise server doing the equivalent, and the thing doesn't break a sweat. Why use x86 when there is much faster hardware out there for vector crunching?

    Bryan R.
  • by BRock97 ( 17460 ) on Friday June 29, 2001 @06:45AM (#120162) Homepage
    Why bother? Every iteration of processors that comes out has some special optimization that is required to run at peak performance. If you use one or the other, it gets you a marginal performance boost. Sure, the P4 can do magic if you turn on this compile flag and then disable this other one. Who cares? Things are fast enough now that price should be king. Why spend $100 - $200 more for a processor when all it gets you is a few more frames at 1600x1200 in Quake 3? Until the P4 comes down in price (and they are making big inroads on this), the Athlon will be king.

    Bryan R.
  • by EvilJohn ( 17821 ) on Friday June 29, 2001 @06:47AM (#120163) Homepage
    The answer is yes, with SSE-2, it will beat the athlon into the ground. Check out for more details.

    The real question is the short lifespan of this P4. With Intel going to DDR (thank god) but changing socket types, how viable is a P4 at this point?

    Even gamers think about TCO.

    // EvilJohn
    // Java Geek
  • Take a second processor, with more pipelines available for instruction issue. Since it has more pipelines available it is able to issue more instructions while waiting for the branch to be calculated.

    He was referring to pipeline length, not width. In a 20 stage processor at the same clock rate, it takes longer to fill a pipeline and consequently the branch misprediction penalty is worse.

    Suppose you have two processors, each at the same clock speed. One has a 5-stage pipeline, the second a 20-stage pipeline. Suppose that there is a branch every 6 instructions (which is typical). For every mispredicted branch, the first processor need only throw away 4 instructions, but the second 19. If most branches were mispredicted, it would kill the second processor.

    Pipeline length and clock speed are closely related design parameters. Longer pipes allow faster clock rates (because less is done per cycle per stage), but they increase the branch misprediction penalty. Generally there is a "happy compromise" for a processor, between pipeline length and clock speed. Most recent chips have found that happy medium to be around 10 stages. The Pentium-4 is unusual in the regard that it has 20 stages. Branch prediction therefore becomes extremely important.

    Long pipelines tend to benefit floating-point code more than integer code, because FP is more loop-intensive, and the branches are therefore more easily predicted. This is why the P4, with its extremely long pipelines, performs poorly on integer performance compared to the PIII, but well on FP.
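    The trade-off above can be put into a toy cost model. Assumed numbers, not measurements: an ideal CPI of 1, a branch every 6 instructions as stated, and a mispredict flushing depth-1 slots. Real pipelines are messier, but the shape holds:

```python
def cycles_per_instruction(depth, mispredict_rate, branch_every=6):
    """Toy model: base CPI of 1 plus the amortized branch-flush penalty."""
    penalty = depth - 1                          # instructions thrown away per flush
    return 1 + (mispredict_rate / branch_every) * penalty

short = cycles_per_instruction(5, 0.10)   # 5-stage pipe, 10% of branches mispredicted
deep = cycles_per_instruction(20, 0.10)   # 20-stage pipe, same predictor accuracy

print(round(short, 3))  # 1.067
print(round(deep, 3))   # 1.317
```

    The deep pipe pays about 25% more cycles per instruction here, which is exactly why it only wins if it clocks correspondingly higher and predicts branches well.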
  • how is this flamebait? you cannot seriously claim that AMD has overtaken Intel in the average consumer's mind.
  • Why does this matter so much if you're happily running Win2K?
  • The Intel Linux compiler will be optimized for the P4, so there will be at least one compiler up to the job. It will cost money, but if you are really after top performance, you will probably not let a few hundred dollars stand in the way. It appears Intel is trying to make it compatible with gcc (and eventually g++), so ultimately (though not with the beta) you can link your high-performance modules with the vast array of existing libraries that have been compiled with gcc.

    The interesting thing will be to see how well gcc becomes optimized for the Itanium processor, since Intel's long term plans are really to push this as the future workhorse of high performance computing. Since gcc must start over from scratch with this architecture anyway, maybe it will start out more optimized than gcc for x86, which has had to work with everything from the 386 to the P4.
  • by GroundBounce ( 20126 ) on Friday June 29, 2001 @09:31AM (#120168)
    Of course GCC is available for Win2K; however, it is very seldom (if ever) used for serious commercial applications. Yes, it is used for porting UNIX applications, and these applications tend to run more slowly than their native Windows counterparts. I have nothing against Win2K, and I use Win2K as well as Linux and HP-UX; it's just that the performance of GCC on win32 should be relatively irrelevant to someone who uses Win2K exclusively (and his sig implies that he uses Windows exclusively), except in the rare circumstance that they are porting a UNIX/Linux app, or are using GCC because it's free, in which case they are probably not developing an ultra-high-performance application.

    On the other hand, GCC *does* matter for Linux. It is true that most apps run just fine on Linux compiled with GCC. But clearly newer x86 processors are becoming more specialized, and there are applications where every drop of performance counts. I do large circuit simulations, and a 10% improvement could mean getting results hours sooner. For Linux to compete seriously in these areas, the apps will have to be compiled with a compiler whose results can compete with what's available under win32.
  • Actually, it's entirely unlike JVMs on the market today, because .NET does not include an interpreter. It always compiles the code natively before running. It's more like a TowerJ or JOVE Java environment.
  • Actually, you could spell it as "Athlon" rather than "Athalon", and you would be much more credible.

  • While I'll agree with you that price does make a big difference, don't forget that branding is important too. Intel, with the Pentium (tm), has one of the strongest brands out there, probably on par with big names like Coca-Cola. That is one of the reasons that Intel continues to have a big market share even though Athlons have been higher performance + lower cost.

    So, in a sense, Marketing is King.
  • Not really.

    Many people (myself included) use cheap PCs to do number crunching for scientific purposes.

    Normally I use the low-end machines, like my home PC (Linux, Duron 900), to develop and test the code I will put to run on Alphas.

    I haven't made any calculations, but I suppose that for poor labs with many students, the cost of an Alpha (for example) could finance >2 "lower end" systems, which are also cheaper & easier to maintain and upgrade.

  • Sometimes speed is still king. I recently bought a computer for running floating point intensive simulations. A large part of the cost in my research isn't the expense of the hardware but the expense of my time. So I got the fastest system I could put together. I wanted dual processors and preferred a Dell machine so I was already stuck with Intel CPU's. The only question was whether to go with P-III's or spend $1,000 more for dual P4's. All of my searching on the Web showed that P4's are no better than P-III's for floating point calculations, so I went with the dual P-III system. Intel would now be $1,000 richer if I were aware that the P4 really could perform much faster.

    By the way, I run Linux and compile with g++. Does anybody know if the GNU compiler does a good job of processor-specific optimizations?

    There are more uses for computers than playing games and reading Slashdot. ;-)


  • IMO gcc's optimization is generally weak. gcc doesn't have any MMX/SSE/SSE2 support, and even without considering vectorization it produces code that's around 20% slower than the Intel compiler's.

    gcc 3.0 apparently has an entirely new x86 back end, but from comments I've heard it produces code that's around 5% SLOWER than the old back end... It'd be nice to see some comprehensive benchmarks of gcc 2.95 vs 3.0 though.

    There's a very interesting open source SIMD compiler project (mainly focusing on MMX) at Purdue University: []
  • Did you check what's been in all the high-street GHz+ computers for the last year? Maybe the P4 is making a showing now (at least it's made it to the TV shopping channels), but for at least a year you couldn't even find a high-end Intel PC at retail - because they didn't have a GHz processor that worked (remember the Intel 1 GHz - recalled after about 2 weeks).

    AMD is also kicking Intel's ass in Europe, and is expected to continue gaining worldwide market share (from the current 20%+ to close to 30% by the end of the year).

    Most consumers don't know enough to make a technical decision anyway - they're going to buy what's cheapest or what their college student geek son/daughter advises.
  • Intel may have good compilers, but they don't give 'em away

    Well, they should, and they should open-source them as well. Intel is primarily in the business of selling processors, not compilers, so getting their P4 performance optimizations into as many third-party compilers as possible should be their top priority.

    Better general compiler support for the P4 would be an effective way to compensate for its hardware inferiority to the Athlon.
  • the Pentium 4 clocks in at about 1140 flops

    Wow, 1140 flops. With some tight code, my VIC-20 would be competitive with this!
  • It is ignorant to argue that you should normalize for clock speed.

    A better way to normalize would be bang/buck.
  • You forget, however, that Intel's compiler does not support many GCC extensions, specifically the all-important inline asm extension. Without this, the compiler has no chance of compiling the kernel. Not to mention that only GCC is supported.

  • (Yeah, yeah, I know you meant "iteration." But any computer geek who doesn't know how that term is spelled deserves some ribbing.)

    Spelt. The word you want is spelt


    spelt is the past participle
    spelled is the past tense.

    or at least it was when I did my O-level.

  • We just bought 12 1.4 GHz Athlon machines with 1.5 GB RAM each for $10k for neural computations.

    We could have gone with Sun, Irix, or Alpha if we wanted one machine with 2-4 processors.

    I looked into it. It wasn't going to happen.
  • A stock OpenBSD installation is compiled for the 386. Did you recompile the kernel and the whole source tree with Pentium III optimizations?

    -- Pure FTP server [] - Upgrade your FTP server to something simple and secure.
  • One thing I don't see mentioned here is what degree of precision that SSE-2 has. I'm guessing that it only works on 32-bit floats.

    You guessed wrong. SSE2 can operate on 2 64 bit floats in parallel.
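    To make the packed/scalar distinction concrete, here's a toy model of one 128-bit XMM register as a pair of 64-bit lanes. A packed instruction like ADDPD does two double-precision adds at once; the scalar form ADDSD touches only the low lane, which is the half-vectorized case Paul Hsieh complained about in the article (this is plain Python mimicking the semantics, not real intrinsics):

```python
def addpd(xmm_a, xmm_b):
    """Packed add: both 64-bit lanes processed by one instruction."""
    return (xmm_a[0] + xmm_b[0], xmm_a[1] + xmm_b[1])

def addsd(xmm_a, xmm_b):
    """Scalar add: only the low lane; the high lane passes through."""
    return (xmm_a[0] + xmm_b[0], xmm_a[1])

a, b = (1.5, 2.5), (10.0, 20.0)
print(addpd(a, b))  # (11.5, 22.5) -- two results per instruction
print(addsd(a, b))  # (11.5, 2.5)  -- one result, half the throughput
```

    So a compiler that emits the scalar forms gets the SSE-2 register file but none of the 2x throughput, which is why the Flops results were so underwhelming.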
  • It seems Intel is on the right path to giving the Athlon a run for its money... I'm vaguely reminded of how quickly many companies/software developers/etc. picked up support for 3DNow! (likely due to the large number of customers and potential customers with AMD K6-2/K6-III chips).

    AMD had [] a fairly large number of developers promising 3dNow! support, and seemed to be doing the "right thing" by helping developers [] optimize their code.

    It seems Intel has picked up on this, and has made it easy to optimize for SSE-2 with their own compiler plugin for VC. I'm just curious if this breaks AMD optimizations.

    This is definitely a move in the right direction for Intel, though. I don't necessarily like it though, because I'm an avid AMD fan. :D
  • As of yet, Intel's compiler is the only optimizing game in town. Even AMD uses Intel's compiler when giving Athlon benchmarks.
  • More meaningless blathering about meaningless numbers. This article wasn't TRYING to measure real world performance! Why do you think they used a benchmark that fit entirely in L1 cache? They were simply trying to measure the peak throughput of the floating point units on the Athlon and the P4.
  • Is it just me, or is everyone talking about which compiler can vectorize code better for cutting edge architectures, while GCC is still trying to get good P6 optimizations? Seriously, though, does anyone know if GCC 3.0 is in any way competitive with the new MS and Intel compilers?
  • Actually, I multi-boot Win2K and Linux. I've been using Linux since Slack 3.5. Should teach you something about looking at the .sig (or the screen name!) rather than the post. As for why I care, I was just curious. I do lots of graphics-type applications, and a good compiler can really speed up matrix processing (which lends itself to pipelining quite well).
  • by be-fan ( 61476 ) on Friday June 29, 2001 @07:35AM (#120190)
    I may sound like a troll of sorts or anti Intel, but when it comes to high end scientific engineering does anyone actually use anything outside the realms of Sun, Irix, and Alpha? Although benchmarks claim to show factual information, I've always seen them as a bit biased.
    Not everyone working on a scientific application is blessed to be in a huge project with infinitely deep pockets. There are tons of college students/projects doing different types of scientific computing, and x86 provides a very good price/performance ratio for these users.
  • by be-fan ( 61476 ) on Friday June 29, 2001 @07:31AM (#120191)
    However, Intel's C compiler is in Beta for Linux. Thus, apps that need vectorizing could simply pony up $500 for a license and compile with that.
  • by barracg8 ( 61682 ) on Friday June 29, 2001 @08:02AM (#120192)
    • As a result the Pentium III would outperform the Pentium 4 on some occasions, as the latter tends to lose more instructions when the branch-misprediction rate is too high.
    Your reasons for the P3 outperforming the P4 don't seem to make a lot of sense.

    Take a processor. It hits a branch instruction. While it is working out whether or not to take the branch, it keeps itself busy by executing instructions from one side or the other of the branch. It gets it wrong, so when it realizes this, it throws away a bunch of work it has done. Hence branch misprediction is a Bad Thing.

    Take a second processor, with more pipelines available for instruction issue. Again it makes a branch prediction. Since it has more pipelines available, it is able to issue more instructions while waiting for the branch to be calculated. Again it gets it wrong, and since it has been able to issue more instructions from after the branch, more are thrown away when it realizes a misprediction has taken place.

    The point is that while more instructions are thrown away, this is only because more have been issued, and therefore the fact that you have more pipelines in a new generation does not lead to that processor running slower than previous versions. The increased branch misprediction penalty can only diminish the amount of increased performance that the extra pipelines give you, and not lead to an overall speed decrease, right?


  • by joq ( 63625 ) on Friday June 29, 2001 @06:47AM (#120193) Homepage Journal
    They also represent the majority of FPU applications. Most applications contain very few FDIV, but some scientific and engineering applications do.

    I may sound like a troll of sorts or anti Intel, but when it comes to high end scientific engineering does anyone actually use anything outside the realms of Sun, Irix, and Alpha? Although benchmarks claim to show factual information, I've always seen them as a bit biased.

    Typical PIV purchasers in my eyes: gamers, and newbies buying preconfigured PCs. What about this end user? Where are the stats for the typical purchaser? Sometimes these benchmarks confuse the average person into thinking the PIV is lowly in comparison to others.

    In this article we will try to answer the following three questions:
    1. How well will the Pentium 4 and Athlon perform with software that is compiled with newer compilers (MS Visual 7.0 and Intel C 5.0.1)?
    2. Can better compilers automatically create SSE-2 optimized code from simple C++ code?
    3. Can Pentium 4 aware compilers boost the Pentium 4's floating-point performance past the strong FPU of the Athlon?

    Again, I may be off my rocker here, but most developers I've met have always customized their own machines (dual processors, other architectures), so again, is it completely unbiased to say the PIV lacks? Confusion about this shit =\
  • Is it just us game programmers using SSE2 or are other apps using it?

    *gets a feeling of PPro all over again*
  • This is an interesting take, but there are other considerations. First, if you read the Findings of Fact against MS, you can easily be led to believe that MS's real purpose was to lock Java to the MS platform by polluting it with MS-only extensions that proliferate on the net. This practice showed up too often in other areas to ignore its likelihood.

    Next, why would MS want a write-once, run-anywhere development environment for themselves? They're not about to build their drivers and Win32 API in Java, and any apps that they build on top of them are pure C++, so all it would take is a simple recompile for the different platforms.

    When Java came out, I don't believe that Alpha-NT was that popular, and SGI-NT was being dropped (not certain about the timing, but it seems about right).

    I agree with you about Win 9x being stepping stones, but I don't think cross-platform was a big focus for NT. Yeah, they have the hardware abstraction layer, but I don't know that this wasn't more for stability and protected code than for true platform independence. I thought it was really just a carry-over from VMS.

  • Yes and no... how do you categorize a new chip? The AMD K5/K6 were roughly in line with the P5, but they were separately designed. It of course comes down to marketing. BUT, what you can look at is the generation of the design. The Pentium introduced (for x86s, anyway) relatively deep pipes and multiple instruction issue. The next generation was OOE. You may or may not be able to categorize the Pentium 4 as a new generation based on its double-pumped integer units. I think that all the other aspects of the P4 are simply augmentations or incorporations of nifty ideas (like caching the decoded ops, which I believe AMD did a while ago).

    I used to call the IA-64 the P7 just so that my lay-friends could know what I was talking about. Its VLIW / speculative execution could probably be considered a new generation. But in reality it's a completely separate product with hardly any ability to compare to the x86 line.

    I think, however, that I'd recognize SMT / CMP as a next generation label.

  • Not sure that I'm reading you correctly. My initial impression is that SMT / CMP would hurt cache hits. If you had an app that was single-threaded, then obviously SMT won't help you, but CMP would have you compete for cache space. If you had a multi-threaded app, then yes, the threads' code cache would most likely have less thrashing, but their data stands a good chance of competing for the same space. In single-threaded operation, your cache can afford to risk having 2, 4, 16, etc. memory locations overlap on a cache line, since it's not too statistically likely that you'll thrash. But if you have SMT, then various types of applications that require large data sets (such as text-processing web servers) enhance the chances of accessing conflicting memory regions. Even with more expensive cache architectures, the likelihood of cache conflicts is still higher with SMT.

    My understanding of the proposed SMT on x86 is that you simply switch to another thread when there's a memory stall. I think SPARCs have done that for a while... What I believe you're referring to is the reduction in the number of times you have to context switch and thereby flush your cache. Though it's true that having fewer distinct processes (even LWP ones) requires fewer context switches, I believe that you are not given a time-delta extension simply because you have 2 or more threads associated with a process on an SMT core. Thus, I believe the time-delta is still the same for all processes (minus HW interrupts), and the number of cache flushes per second is the same. Hence, little realized benefit.

    Just for completeness, what I think you do get is fewer memory stalls within your time-delta. Additionally, if each thread is stalling, then you at least have multiple concurrent memory requests, which I believe does suit RDRAM well. You could achieve a similar situation by having multiple independent banks of SDRAM (like nVidia's GeForce 3).

    In summary, if anything, cache is the weak link towards multi-core / multi-threading.

  • On a side note, I always thought that the P-III only had 3 pipelines, one of which could execute any micro-op, and the other two of which could only execute the simpler micro-ops.

    I believe you're thinking of the number it can "issue", which is separate from the number of [semi-]independent pipes. In the PPro, some instructions (like divide) would lock other pipes or stages within their own pipe. Issuing instructions is expensive, so it's generally accepted that you issue fewer than the number of pipes, but as the P4/Athlon have significantly more pipes than their predecessors, they have augmented the number of issued instructions by 1 or so.

  • I disagree. This is definitely the case with a 486-to-Athlon comparison, but we're already taking into account the architectural differences (stages / pipe, etc). Part of the analysis is to monitor efficiency. This is especially true in the Pentium 4 / Athlon debate, since we can get 1.4GHz Athlons.. The question is whether to purchase a 1.4GHz Pentium 4 at significantly higher cost; to say nothing of the added cost of a 1.7GHz setup.

    The difference is more dramatic between the P5-4 and P5-3, since you max out at about 1GHz for the P5-3, and so I'd be inclined to believe you. The Athlon, however, is not yet out of steam on its current design. If it can best the P5-4 in 50% of the categories (including legacy apps, i.e. today's apps), then the value of the P5-4 is limited, even if it can produce top-notch synthetic scores.

    The point is that it is not ignorant to normalize, so long as you look at the peripheral factors. It's like taking the average, but also taking the standard deviation. You do find useful information in such numbers.

  • by diablovision ( 83618 ) on Friday June 29, 2001 @06:54AM (#120207)
    It seems Intel may have bet the farm on Marketecture...20 stage pipeline to reach multiple gigahertz speeds, double pumped ALUs that run at twice core clockspeed, a trace cache of recently decoded RISC "micro-ops", and SSE2, almost 200 new floating point SIMD instructions that are supposed to give incredible performance. Yet the Pentium 4 has trouble against a lower clocked Athlon in many many benchmarks. []

    Intel is the market leader, but they shouldn't let their marketing team design their chips!
  • I may sound like a troll of sorts, or anti-Intel, but when it comes to high-end scientific engineering, does anyone actually use anything outside the realms of Sun, Irix, and Alpha?

    I do. For my master's project, I've trained hundreds of neural networks, each taking between an hour and 2 days to train. At my job, we're doing the same kind of stuff on Linux and Solaris PCs. I believe a lot of people do that too. PCs are so cheap compared to the other architectures that they're still the best thing to buy for many types of computations.

    And by the way, training a neural network requires about one division for every several million add/mul operations.
  • Does AMD have an optimizing compiler for the Athlon that you can plug into VC++? If so, it should have been included in the tests this article ran.

    No they do not. AMD uses Intel compilers for their SPEC scores since it is the best X86 compiler.

  • In that case, Intel could insert some obfuscated code to detect AMD CPUs into its compilers' output and then run delay loops on AMD CPUs to create a phony lack of benchmark performance.

    You seem to be confused. AMD has the choice of any compiler in the world to use when submitting SPEC benchmarks. They choose to use Intel's because it is the best. If Intel crippled support for AMD processors in its compiler, then AMD would use a different compiler. Of course, if AMD had compiler expertise they would develop their own compilers optimized for their chips. But they don't know how to develop compilers (and that will be quite a performance limiter for x86-64, since they will have to rely on GCC, which has terrible performance).
  • by VAXman ( 96870 ) on Friday June 29, 2001 @09:31AM (#120216)
    The uniprocessor-only P4. Intel spent years perfecting SMP on their earlier processors, and for what? So that AMD could beat them to the punch, running a 1.4GHz CPU in SMP mode. Intel also embraced the slower-but-cheaper shared memory bus architecture, which is going to kill SMP performance in comparison.

    You are wrong. The DP-capable P4 (known as Xeon) was launched in May, well before the DP Athlon was released. Moreover, you can buy real dual Xeon systems from Dell, IBM, Compaq, and the like, yet you cannot buy a DP Athlon system from any major vendor, since no major OEMs want it.

  • The length of the pipelines is not the main reason that the Pentium4 sucks. The main reason is that the chip is broken in several important ways, such that you need to rearrange your code specially in order to mitigate the broken stuff. This is straight out of the article [] you cited (great article, I agree!).

    Historically, if you took code for one processor and ran it on a later processor, the later processor would always do a better job of running it than the original. (The major, glaring exception to this was the Pentium Pro, which really sucked unless you optimized the code for it.) This is why Linux distributions such as Debian just optimize for the 386 and call it good -- most of the time, for most of the applications, you won't pick up very much performance by optimizing for a specific chip architecture. (By the way, you should rebuild your kernel with chip-specific optimizations. Your kernel is running all the time, and any savings will add up quickly. Of course, all the CPUs are so fast these days that few of us will really notice any difference even with the kernel.)

    But now the Pentium4 has so much wrong with it, that unless you rearrange the code specially, it chokes and underperforms. The Level 1 cache is actually a cache for decoded instructions, which is cool... but it is only 8K, which is insane! Sure, since the instructions were already decoded, the 8K cache is probably worth a bit more than a simple 8K instruction cache, but the Athlon has a 64K instruction cache! The Pentium4 has all these internal execution units, but it can only feed three of them per clock cycle from the cache, so most of them will be idle in any given clock cycle. And while earlier chips introduced cool features that would make code run really fast (bit-shifting was really fast, and there were special instructions like CMOVE) these all run dog-slow on the Pentium4.

    So, the Pentium4 runs really hot, and needs special cooling and a special power supply. Right now it needs expensive RDRAM. And it needs special optimizations to allow it to run at full speed. Summary: unless you really need its special features, buy an Athlon.

    When does a P4 beat an Athlon? Some specific situations where RDRAM is really appropriate, some specific situations where the SSE features really work (and assuming the code is optimized for it), and that's about it.

    Can a future P4 dethrone the Athlon? Maybe. Intel claims that the P4 is slower, clock-for-clock, than the Athlon for a good reason: because the P4 will reach really high clock speeds really fast. Some breathless press release I read said something about a 10 GHz version of the P4 within four years or so. Let's face it, the P4 can stay as broken as it is and still stomp the Athlon if Intel can really get the P4 going twice as fast or more than the Athlon! But I'll believe it when I see it. The current P4 goes into thermal overload and slows to half-speed if you work it really hard, and dissipates 73 Watts at 1.5 GHz; even with a die shrink I'll bet a 10 GHz P4 would melt itself into a puddle.

    Because the Athlon gets more work done per clock, and is available at clock speeds nearly as high as the P4, the Athlon is better than the P4 across the board. There are a few narrow situations where the P4 is better than the Athlon, but if you check the price/performance ratio the Athlon still wins.


  • by BradleyUffner ( 103496 ) on Friday June 29, 2001 @06:51AM (#120218) Homepage
    After reading the article, it looks like Intel is much better at making compilers than it is at making its processors. The article says that the Intel compiler is a "masterpiece" and a work of genius. It looks to me like their compiler is a lot more impressive than their CPU.
    =\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\ =\=\=\
  • Remember that Chicago/Windows 4.0/Win93/Win95 was designed as a transition OS to get the code to Win32 faster. Also, NT was built as a cross-platform OS because MS didn't want to be dependent on Intel. Remember, everyone thought that x86 was nearly dead at that point.

    I assumed that the idea of J++ was for MS to have their own Java. That would give them tremendous platform independence. You would write "cross-platform" Win32 code, meaning it would run natively on any MS OS. I had always expected that this was why MS bought into Java. An MS version would work on MIPS/PPC/Alpha/x86.

    Given that their RISC compilers were always a gen back, this never materialized. However, shipping a semi-compiled mode would have let them become truly cross-processor. I mean, think of it as InstallShield on crack... or a BSD port...

  • Just for the record, all you say also applies to Java (a bytecode VM, generally using a JIT, means per-platform optimisation from a single binary distributable).

    And of course, as soon as GCC can take advantage of whatever the latest CPU gizmo is, everyone who runs an open source OS or application can simply recompile for a performance boost.

    All the more reason, me thinks, for the chip vendors to help the open source compiler developers.

    Thad []

  • There will be a new version of the famous mprime. The upcoming v21 includes advanced P4 optimizations. According to their mailing list, the P4 is faster than the Athlon. Cheers! Blip
  • A little Karma-whoring, swiped from intel's site []
    Compatible with Microsoft* Visual C++* and Visual Studio*, the Intel® C++ Compiler is designed from the silicon up to let developers easily take advantage of the performance and features of the latest Intel® architecture, including the Pentium® 4 processor.
    Intel is committed to customer support. See for further information on product support.

    Windows*NT*/98/2000 Full Product Electronic Delivery $399.00
    Windows*NT*/98/2000 Full Product CD Delivery $499.00
    Windows*NT*/98/2000 Upgrade Product Electronic Delivery $175.00
    Windows*NT*/98/2000 Upgrade Product CD Delivery $275.00
    Intel® Compilers for Linux* Field Test

    Intel® Compilers for Linux, field test versions, are available for download only. No CDROM versions are available.

    Not all of the GNU C language extensions, including the GNU inline assembly format, are currently supported and, due to this, one cannot build the Linux kernel with the beta release of the Intel compilers and the initial product release. The C language implementation is compatible with the GNU C compiler, gcc, and one can link C language object files built with gcc to build applications. However, the C++ implementation uses a different object model than the GNU C++ compiler, g++, and due to this, C++ applications cannot use C++ object files compiled by g++. For further details, see the FAQs on the support site.

    Before using the compiler, we recommend you read Optimizing Applications with the Intel® C++ and Fortran Compilers for Linux to learn about the appropriate optimization switches for your application. You should have received the invitation letter that explains how to get started using the Intel compilers for Linux. All support issues, compiler updates, FAQs and support information will only be available when you register for an account on the Intel Premier Support site. Please register for a support account at s.htm. To begin the process of downloading...
    Click Here! []
  • by nickovs ( 115935 ) on Friday June 29, 2001 @06:59AM (#120223)
    It seems to me that the tests that were used to give the FLOPS crown to the Intel CPU are a little biased. Surely for a fair test the Athlon should have been tested with the latest experimental AMD compiler as well.

    As CPU designs get more complex, the compilers need to know more and more about the exact nature of the CPU. Despite the label of binary compatibility given to the CPUs from AMD (and others), those who need to squeeze the best performance out of machines are going to need to run code that is compiled for their specific machine. Despite the best efforts of the open source community, most end users do not want to recompile source, let alone spend time finding obscure /QaxW flags to make the most of the system. Really this should be a job for the OS.

    Maybe in the future we will see commercial code being distributed in such a way that parts of the code are compiled on the destination machine as the code gets installed. That way the code vendor can test a variety of compiler options and not have to ship 42 different binaries for all the different CPUs in use.

  • Because Linux is built with gcc, and Linux apps are built with gcc, and gcc doesn't have the Intel compiler's SSE2 auto-vectorisation shenanigans.

    By next year, when many programs are SSE2-enabled, AMD's Clawhammer should take back any lead Intel gets, because it uses SSE2 as well.

  • Ah, but if Intel supplies a compiler that does, say, 50 percent of the job for programmers, then sales go up. Sales go up, the programmers do the other 50 percent. Joy, bliss.
  • Clock speed is only relevant to marketing droids and those stupid enough to believe them.

    There are processors (UltraSparc III) where the core pipeline is not clocked (called wave pipelining). There are caches that are double-pumped; they do work on each edge of the clock instead of only latching on one edge.

    And an even clearer fact: different processors do different amounts of work per edge of the clock. If you want a _really_ high clock rate, put only one gate between each latch. That clock rate would be obscene. But half of the work done would be latching the values (assuming you could distribute the clock over so large an area).

    If you want to normalize anything, normalize over price. Unless you have stupid friends and compete over having the highest clock.

    Oh ya. Don't bother talking about FLOPS or MIPS. You'll just end up sounding stupid (and you need all the help you can get). Any benchmark not targeted at YOUR specific application is next to worthless.

    Heh, some processors don't even bother to dispatch NOPs. With a little hackery, they could ``execute'' as many NOPs per clock as the depth of their dependency issue window.
  • bullshit, quasi-informative troll about BSD elided...

    Wow. You sound like a really smart guy. I bet you can think of all sorts of reasons why BSD is dying. Why don't you share some with the slashdot community?
  • So really the only people who need to get excited about this stuff are the driver writers and their brethren who write plugins for things like Photoshop is what you're saying.

    Somehow I'm not shocked.

  • Do you people pay attention... yes, AMD is making better processors... yes, people who really know choose AMD over Intel... therefore slashdotters choose AMD over Intel...

    But back to the real world: if you turn on a computer out there in happy fun land (aka "The Real World"(TM)), then odds are it will be running Intel. Linux, your precious kernel, started out with optimized non-portable code for the i386. You geeks keep falling victim to the same trap year after year... just because it's better doesn't mean people will use it. Linux/BSD/Solaris/Irix/SVR4/MacOS/BeOS... is clearly better than Windows when you look at the track record... and yes, in some cases can be *almost* as easy to use... but Microsoft has been winning the OS war since they made a *bad* ripoff of the Macintosh (read: Xerox) GUI OS. MacOS was better, more stable, and quite a bit cleaner... but Micro$oft had the market share and they won. People listen to money, and Intel is still the processor most people/companies would prefer buying. Hackers are one of the lowest demographics in the computing industry these days, and people (outside of their community) don't pay much attention to them.

    Well, I guess that's it... go ahead, return to your illusion and mod this down.

  • by andr0meda ( 167375 ) on Friday June 29, 2001 @07:38AM (#120240) Journal
    it's due to the fact that the Pentium 4 will flush the entire pipeline during a branch misprediction / pipeline stall. As a result, the Pentium III would out-perform the Pentium 4 on some occasions, as the latter tends to lose more instructions when the branch-misprediction rate is too high.

    Rumours have it that the Pentium 4 will have Simultaneous Multithreading (SMT) enabled, which lets the processor run any instruction from any thread on any unit at any time. Supposedly this feature was already included in current processor designs but not enabled, because the P6-4 is not ready for it yet.

    AMD uses on-chip multiprocessing (CMP) in Sledgehammer, which is basically the same as subdividing the resources of the CPU (registers & units) between the threads. The benefit of this technique is that the design can be kept simpler and the clock can go faster than a similar monolithic chip with the same resources. On the other hand, a lot of resources are wasted if only one thread is operational in this setup.

    Needless to say, SMT has some problems too; for example, CMP lends itself much better to branch prediction through slipstreaming than SMT does. You can find some good reading in this previous slashpost [] about how Intel and AMD deal with multithreading in their single/multiprocessor designs. To be taken with a bit of salt, of course, but very sharp.

    My point is that if branch prediction in the form of slipstreaming is implemented (and Jackson Technology seems to be that kind of SMT), the P6-4's problems with excessive cache flushing are completely over, and SMT can take full advantage of the smaller RAMBUS latencies, easily outperforming a similar CMP setup like AMD's.
  • The 80186 was never used widely outside of embedded systems and was simply a slightly extended 8086.
  • Take a processor. It hits a branch instruction. While it is working out whether or not to take the branch, it keeps itself busy by executing instructions from one side or other of the branch. It gets it wrong, so when it realizes this, it throws away a bunch of work it has done. Hence branch misprediction is a Bad Thing.

    One thing that you forgot - it takes more time to go back and run the other branch if there is a longer pipeline. Hence, a CPU with a long pipeline will sit there idle as the data makes its way through the pipeline.

    To better visualize how a pipeline works I like to think of this little analogy:

    Have a line of people passing buckets of water from a well to a burning house. Given that every person works at a given speed, it requires them a defined amount of time to move the water from one person to the next. The more people present, the smaller the distance required to move the water. This allows them to move more buckets in the same amount of time (or operate at a higher frequency - just like the P4.) The problem is it takes longer for the water to actually get to the fire (assuming 20 vs 10 people working at the same frequency.) Now lets say there are two different kinds of water (a very hypothetical situation.) Should the wrong type of water be sent and arrive at the house, the guy at the house would have to tell the guy at the well to send over the correct type of water. Now with more guys in between the two, it'll take longer for the correct water to get to the house. While the water is in transit - the guy at the house sits wasting his time.

    So as you can see - more people increases the potential speed. The speed determines the volume of water being sent. This is great but if the wrong thing is sent it takes a long time because the correct "thing" has to travel through the enlarged pipeline.

    A long pipeline is great if you're running code that doesn't have a pile of "branch if" instructions in it. Performing an "add" on every byte in a 4MB file (think Photoshop) will result in very efficient use of the CPU. However, if you're running code with lots of "if then" statements then you run the risk of wasting a great deal of CPU time. This is where a smaller pipeline helps (or should I say doesn't cause as much damage.)

    The other big problem with a large pipeline is that it greatly increases the complexity of the chip design. More transistors result in the components of the CPU getting spread further apart - hence you need an even longer pipeline (think of the burning house example - the house just moved an extra block away from the well.)

    Overall, chips with smaller pipelines offer far greater efficiency. Look at a G3 PPC CPU. It has a 4-stage pipeline. Because of this it maxes out at 700MHz, but it is faster than a PIII when comparing MHz to MHz. All this and it's a third of the die size and typically uses only 5 watts. You can also look at the Alpha with its 7-stage pipeline. It might not operate as fast (MHz) as today's P4s or Athlons, but it still offers incredible performance.

    The real advantage of the P4 will come with multimedia-type applications. The problem is that it will quickly max out the memory bandwidth. Now take an Athlon - it might not be quite as good for those same apps, but so long as it can also max out memory bandwidth you're not going to see a difference. As John Carmack said in a recent /. posting - the new G4 is great, but the main problem is memory bandwidth. As CPUs double in speed this will become an even greater problem.


  • The main reason for this is that there's more to a car than just an engine.

    Having a smaller engine with more HP/litre allows you to have a smaller (lighter) car. The reduction in torque becomes much less significant since the engine has less mass to accelerate. If GM made a small 4-cylinder engine with only 61 HP/litre and put it in a car the size of the S2000, it'd be terribly slow and wouldn't compete too well.

    If Honda made a 5.7L V8 engine, it probably wouldn't scale linearly, but I'm sure they could easily do better than 61 HP/litre. Why haven't they? They probably 1) aren't interested in large V8-engined cars, or 2) don't feel that it'd be a profitable market segment for them to enter, especially given their reputation for small, lightweight cars. They are planning to make a V8 NSX soon, although it'll probably be more like 4.0L.

    In the end, I guess it comes down to which kind of engine you prefer in a car, and in what kind of car: a small car with a small, high-revving engine (but not much torque), or a larger, heavier car with a large, powerful engine which concentrates more on low-end torque. If you like lots of torque, Honda probably isn't the company for you to be buying from.
  • by ackthpt ( 218170 ) on Friday June 29, 2001 @07:18AM (#120257) Homepage Journal
    Agreed, the market is in a slump and people shopping for computers are going to be bargain hunting for some time. Even more rumblings about layoffs at the ever-optimistic Intel, despite yammering on about how the downturn won't affect Intel, how they expect growth, etc.

    Cheap chips rule in a soft market, and AMD has demonstrated the ability to produce wicked-fast chips at cheap prices. This would seem to be the best evidence yet that Intel has lost its way and the bureaucracy is in need of some serious house cleaning.

    Some blunders:

    Tying themselves legally to Rambus

    Talk of discontinuing the P3, their best mover.

    Pushing the 1.13GHz P3 out the door before it was ready and suffering the consequences.

    Slashing prices and subsidizing RDRAM just to move P4 product.

    The P4 may have some advantages, but imagine what it would be like if AMD had rolled it out... um hm.. It would have killed the Athlon alright, assuming the Athlon were Intel's. ;-)

    The truth is out there. []

    All your .sig are belong to us!

  • by Arethan ( 223197 ) on Friday June 29, 2001 @07:58AM (#120259) Journal
    Someone mentioned above that the Intel compiler is selling for a couple hundred bucks per license. I've been in the development market for a few years now, and I've used Intel's "optimized" compiler a few times already. It has a few flaws right out of the box, the biggest being that it only works on Windows systems. It can act as a plugin for MS Dev Studio (which I must admit is a pretty slick IDE), but the bottom line is that Intel is charging money for something that they should be trying to GIVE away. If they want a leg up on the market, they should be making it VERY easy for developers to use their compiler when they build their applications. The result would be a lot more stickers on product boxes labeled "Optimized for Intel CPUs", making the CPU decision much easier for newbies.

    "Oh look, all of these games are optimized for Intel chips. They must be good!"

    Better yet, if they want their cpus to get on top of the server market, they should be releasing the source code for their compiler as well. This would let the gcc crew use the optimizations in their compiler creating better/faster *nix software. (Unix being the server platform of choice for more large companies I've worked with than I can shake a stick at. I won't get into why, as that will probably start a small war.)

    Bottom line, make the compiler free, and open the source, and Intel would definitely take off again.

    Until that day, though, I will stick with AMD since they have better prices for equal performance.
  • Everyone take this survey to get Dell to start offering AMD Athlon: _survey.htm?keycode=6Vc00&DGV
    But go to a different page before you paste it in, so that they won't know we're all coming from slashdot. :)
  • well....
    in the summer of 2000 it tried to push the aging "P6" architecture too far. The P6 design, or 6th generation of x86 processor which since 1996 has been the heart of all Pentium Pro, Pentium II, Celeron, and Pentium III processors, simply does not scale well above 1 GHz. As the aborted 1.13 GHz Pentium III launch this summer showed, Intel tried to overclock an aging architecture without doing thorough enough testing to make sure it would work. The chip was recalled on the day of the launch, costing Intel, and costing computer manufacturers such as DELL millions of dollars in lost sales as speed conscious users migrated to the faster AMD Athlon.

    From the article I linked before.
    /. / &nbsp&nbsp |\/| |\/| |\/| / Run, Bill!
  • Aye, you are right. That was always my theory: SMT should kick CMP's ass, since CMP loses processing power to its simplistic division of resources. But how does it turn out in practice - is it true that SMT is just not yet enabled in the P6-4, or is CMP practically more feasible in real life? :)

    Thanks for the info.
    /. / &nbsp&nbsp |\/| |\/| |\/| / Run, Bill!
  • Sorry, I've over-simplified my argument, which may cause misunderstanding.

    Typical instructions take more clock cycles to execute in the Pentium 4 (not P4). Longer and more pipelines don't mean more instructions can be fed and executed in one clock cycle. Also, with the longer pipeline used in the Pentium 4, flow-control operations (such as branches, jumps, and calls) need more time to refill the pipeline.

    (Reminder: this is a very simplified view.) In theory the execution units can process 9 micro-ops per clock cycle, but thanks to the problem in the cache design, only 3 micro-ops per clock cycle can be fed to them.

    Pentium III's decoder can feed up to 3 instructions and 6 micro-ops (4+1+1) to the core per clock cycle.

    The Pentium III is like a motorcycle engine in a motorcycle. The Pentium 4 is like upgrading the same engine to run a bus. (Just ignore it if you think the analogy is wrong ^_^)

    I might miss some points. Please comment.
    /. / &nbsp&nbsp |\/| |\/| |\/| / Run, Bill!
  • Hey man, be fair - don't just take the graph that favours your conclusion.

    How about this [], this [] and this []?

    Don't believe everything you read.

    Assuming you believe everything on Ace's Hardware, do you believe the graphs above? :D
    /. / &nbsp&nbsp |\/| |\/| |\/| / Run, Bill!
  • by jsse ( 254124 ) on Friday June 29, 2001 @06:58AM (#120274) Homepage Journal
    Can the P4 dethrone the Athlon


    Let me explain it this way: the Pentium III has 6 10-stage pipelines for out-of-order superscalar execution, while the Pentium 4 (avoid using the short form P4 - the Pentium 4 is in the P6 family) has 9 20-stage pipelines.

    More pipelines and more stages sounds good, huh? Unfortunately, in some benchmark tests the Pentium III beats the Pentium 4; it's due to the fact that the Pentium 4 will flush the entire pipeline during a branch misprediction / pipeline stall. As a result, the Pentium III would out-perform the Pentium 4 on some occasions, as the latter tends to lose more instructions when the branch-misprediction rate is too high.

    The Athlon, on the other hand, only flushes half of its pipelines on average. Intel really needs to fix this fundamental design glitch before they can beat the Athlon.

    If you are very interested in this subject you can read this article []. You can understand why Intel cannot give up the Pentium III in favour of the Pentium 4 market.
    /. / &nbsp&nbsp |\/| |\/| |\/| / Run, Bill!
  • Your sig is quite funny considering your post is mostly garbage. Did you read the article? Intel chips actually perform fairly poorly with current Microsoft compilers. That's one of the primary arguments of the article. The P4 shines when used with (surprise) Intel's compilers.

    Microsoft's current compilers, while inferior to Intel's on the new Intel processors, are better with the Pentium Pro-style architecture than gcc, but that's just because of different development goals (gcc tries to serve everyone; Microsoft can focus on a much more limited set of CPUs). It's not a grand conspiracy or anything.

  • I'm sure AMD knows Intel's chip designs inside and out, almost as well as Intel knows them, and vice versa. It's mostly patents and such that protect their "innovations" at that level.

    Nevertheless, like I said, I'd be shocked to see Intel open source their compiler...But I wouldn't be shocked (and I think it makes a lot of sense for them to do this) if they started giving away the Win32 binary for free (as in beer). Otherwise the majority of developers are going to keep using Visual C++ and/or Cygwin/gcc and Intel's chips are going to continue to look inferior to AMD's, even if that view is not entirely accurate.

  • by geomcbay ( 263540 ) on Friday June 29, 2001 @07:11AM (#120279)
    The most likely solution for the short term is that developers will compile multiple versions of DLLs (or .so's in Linux/UNIX space) holding their hotspot code and use dynamic library linking to load in the right one after doing a CPU detection routine. This sort of thing is already being done to a certain degree with Windows based games that might support 3dNow, SSE, SSE2, etc.
  • by geomcbay ( 263540 ) on Friday June 29, 2001 @06:57AM (#120280)
    Intel should consider giving their compiler away. Currently they charge hundreds of dollars per license for it. Considering their market in compiler tools is relatively small beans, you'd think it would make more sense for them to just give the compiler away to entice developers to use it and thus wind up with executables that really showcase the next-gen Intel processor's speed.

    I won't even get into the argument about how it might help them to Open Source the thing so that parts of the technology might be rolled into other compilers like gcc, because I just can't imagine that happening anytime soon.

  • by bryan1945 ( 301828 ) on Friday June 29, 2001 @06:48AM (#120285) Journal
    In this test A beats B, but in this test B beats A, etc. etc. All these different tests try to measure some specific performance parameter, but as hard as you try to standardize the rest of the equipment to isolate that one parameter, you just can't in the real world. And that is the true test- how well does the entire system run? You could slap a P4 3GHz onto a 33MHz bus (well, not really, but you get the point) and get the equivalent performance of a 3-toed sloth. That or the bus wires will glow.

    As for the SSE extensions, Intel tried this first back with MMX, and Apple is trying it now with AltiVec(sp?). Yes, these extensions can help, but only after software is optimized for them. It's not a case of "drop 'em in and watch out!" It takes time to develop.

    Of course, all of this is just marketing. Kinda like the MHz wars. Intel needs some positive press after that oft quoted test where the P3 trounced the P4.

  • if(cputype == ATHLON) {

    Using processor-specific checks like this for Athlon chips, the Pentium 4 has managed to outrun the Athlon. Intel's compiler cannot realistically be expected to generate optimized code for the Athlon, so any of their comparisons based on their compiler should be highly suspect.
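    The suspicion here is about runtime dispatch on the CPUID vendor string: a dispatcher can route anything that doesn't report "GenuineIntel" onto a generic code path, regardless of what the chip actually supports. A hedged sketch of that pattern (the function and path names are invented; a real dispatcher would read the vendor string via the CPUID instruction):

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Illustrative vendor-string dispatch: only CPUs reporting the
       expected vendor get the optimized path, even if a competitor's
       chip supports the same instructions. */
    const char *dispatch_path(const char *vendor, int has_fast_simd) {
        if (strcmp(vendor, "GenuineIntel") == 0 && has_fast_simd)
            return "sse2_fast_path";
        return "generic_x87_path";  /* an Athlon lands here regardless */
    }

    int main(void) {
        printf("%s\n", dispatch_path("GenuineIntel", 1));
        printf("%s\n", dispatch_path("AuthenticAMD", 1));
        return 0;
    }
    ```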

  • by Chakat ( 320875 ) on Friday June 29, 2001 @07:10AM (#120289) Homepage
    Intel's working on a Linux compiler with all of the P4 goodness []. Although it's in beta right now, you can bet your sweet butt you're going to pay for it once the program gets out of beta. Intel may have good compilers, but they don't give 'em away.
  • Interesting results! Looks like heavily optimizing one's compiler pays huge dividends in terms of processing power.

    There's an important question though. The article used the MS compilers exclusively, with the best results coming from the Intel plug-ins - since these are apparently the industry standards. However, I'm at a university, and everybody I know is using gcc. We would be very interested in the kind of performance that is displayed here. Does gcc keep rigorously up to date with the most modern CPU technology, or does it lag (and if so, how much)? How long until these optimizations will appear in a release of gcc?

  • The P4 is only optimized for a Microsoft compiler. It's true. I was doing some consulting work for a major Fortune 100 company and they were looking into migrating from Windows to OpenBSD. However, after they did some testing they found that their database applications were running 15-20% slower on BSD than Windows. I expected them to be a little slower due to the threading problems with BSD, but not that slow.

    After one full week of testing we found the problem wasn't with BSD at all, it was with the P4 on BSD. It would seem Intel has an enhanced instruction set cache which is only available with Microsoft compilers. This is not a trivial thing to implement, so I doubt the OSS camp will be able to migrate it into their compilers anytime soon.

  • I don't get it lately with processors. Why do we need all of this SSE, SSE-2, 3DNow!, MMX stuff? Granted, I don't have a degree in computer engineering (yet), but doesn't it seem that processors are becoming more and more proprietary?

    To me, it seems like we're moving toward a time where there will be different versions of OSes for each processor (myOS for Intel / myOS for AMD). It's going to be increasingly hard for vendors to write code that will be optimized for all processors.

    Anyone else think this way? does this make sense?
  • by GreyOrange ( 458961 ) on Friday June 29, 2001 @06:39AM (#120300) Journal
    If they manage to get their chip working properly, yes; if it keeps on malfunctioning and overheating, no.

  • Yeah, AND if Intel lowers their prices...

BLISS is ignorance.