AMD

Benchmark Program Rewritten to Favor Intel?

BrookHarty writes "Interesting article over at Van's Hardware claiming that BAPCo, the maker of the SysMark benchmarking program, has rewritten its SysMark 2002 benchmark to favor Intel's P4. AMD joined BAPCo in order to "correct" these "broken" results. AMD reports that BAPCo's SysMark 2002 (written by Intel engineers) is a collection of tasks meant to summarize "real world" performance. Interestingly, these tasks are selected to play to Intel's strengths, while certain tasks that favor AMD have been removed. Van's Hardware has additional information on BAPCo's shady history."
  • Clang! (Score:1, Redundant)

    by MaxVlast ( 103795 )
    That sucks. If you can't have the fastest processor, have the fastest benchmark program!
    • Re:Clang! (Score:2, Insightful)

      by Anonymous Coward
      Point 1: AMD is, MHz for MHz, dollar for dollar, pound for pound (the currency and the weight ;-), THE FASTEST x86 CPU out there.

      If I give you $300 and tell you to go buy the fastest x86 CPU you can for the money... it's an AMD CPU, period.

      Point 2: I own 2 AMD-based systems and about 7 Intel-based systems. I'm not biased; I have likes and dislikes for both companies. But CURRENTLY, I'm going to recommend AMD 90% of the time. What most Intel FANS (read: biased) DON'T UNDERSTAND is that while they are bashing AMD, if it were not for AMD they would be paying a lot more for their precious little Intel processors. Very likely around $800-$900 for a dinky 1 GHz P3 right about now. How can you read Slashdot, know how important competition is, and still be an Intel cheerleader? They must be ignorant or stupid.

      Point 3: It's simple. A competitor like AMD means that if you prefer AMD, you are getting a damn good processor with a LOT of value. If you prefer Intel, while not quite the value of AMD, you are still getting a very good CPU with good value. Because of AMD, we the consumers are enjoying greater value.

      Without AMD, I can guarantee you that the current situation would not exist.

      Intel would definitely be the equivalent of Microsoft... a monopoly, trying to control all aspects of the industry, charging ridiculous prices, trying to build their own $40 billion in liquidity.

  • Big deal (Score:1, Flamebait)

    by ObviousGuy ( 578567 )
    The P4 has that long pipeline that allows it to speed up quite a bit, so long as the branch prediction doesn't blow it. As compilers become tuned to exploit this, it's plausible that the Athlon's performance is going to lag quite a bit more than it already does. That there is some benchmark out there that is specifically designed to show off this strength of the P4 is no real surprise to anyone, is it?

    Heck, this is what Sun was doing a few years back. It's almost an industry standard to have your own "benchmark lab" on the payroll.

    Besides, AMD has always been the value chip company. You can't expect them to keep up with Intel forever.
    • Re:Big deal (Score:3, Informative)

      by GigsVT ( 208848 )
      I don't think there is much motivation on the part of compiler writers to optimize for this particular implementation of the x86-32 ISA. This isn't like previous chips, where new cache handling opcodes were added, which compilers could use if available. I've talked to people much better versed in compiler writing than myself, and they all seem to agree, when it comes to "optimizing for P4", their answer is going to be "don't hold your breath".
      • I don't think there is much motivation on the part of compiler writers to optimize for this particular implementation of the x86-32 ISA.

        Over 80% of new computers being shipped are P4-based. Therefore, applications will be optimized for the P4. Not optimizing your application for the P4 would be like a hardware vendor not releasing a Windows driver for their device.
    • The thing about the Athlon line of processors is the FPU, which blows the P4 away.

      The P4 is better on memory throughput, since it was designed with RAMBUS in mind; the more memory bandwidth you throw at it, the more it uses (SDR/DDR SDRAM is like putting a handbrake on it).

      With the new Thoroughbred Athlons, AMD regains the performance crown, and the Athlon still has the Barton revision to go before Hammer.

      AMD may be considered a value chip company, but that doesn't mean they cannot produce excellent CPUs.

    • by Anonymous Coward
      They're not just keeping up with Intel, they are significantly faster in most real-world applications. Over 90% of systems dedicated to rendering and scientific computing sold in the last 2 years were Athlons (they're actually replacing SGI and Sun in these markets because they are so much cheaper and actually offer better floating-point performance).

      All the P4 is good at is moving memory around. And for this it needs RDRAM (i.e., very fast memory). Replace that with DDR and the P4 turns into a very expensive snail.

      Even with RDRAM, the P4 is still slower than the Athlons on most real-world tasks. IMO, the only valid reason to buy a P4 is Quake.

      I have several Athlon XPs and P4s and the P4s just drag themselves compared to the XPs. In fact, even the PIIIs feel faster than the P4s. And I'm not even talking about the Athlon MPs (that simply wipe the floor with the P4).
    • If this is the most widely used benchmark, it sounds like a big deal to me.

      If someone says that they are independent, and they are not, then this is a big deal.

      Even if this is something that is expected because it is 'an industry standard', it doesn't make it right.

      Let's make a big deal out of this so that people that don't have the same expectations do not fall for these bs results.
    • Re:Big deal (Score:2, Informative)

      by njdj ( 458173 )
      it's plausible that the Athlon's performance is going to lag quite a bit more than it already does.

      It doesn't. The fastest X86 processor is currently made by AMD. See a non-BAPCo performance comparison.

      AMD has always been the value chip company. You can't expect them to keep up with Intel forever

      Are you an Intel employee? Intel isn't keeping up with AMD. The P4 is underperforming, as well as overpriced. Take a look at the web page referenced in the lead story.
    • Re:Big deal (Score:5, Insightful)

      by ergo98 ( 9391 ) on Saturday August 24, 2002 @09:27AM (#4132964) Homepage Journal
      Intel has used the "once compilers catch up..." scam for years, and every time people find themselves with a long-obsolete processor by the time the software that theoretically exploits it arrives.

      My general practice is to ignore any synthetic benchmarks because they represent no real-world value whatsoever: instead I look to application benchmarks, like compressing DivX movies or rendering 3D scenes, if that was the use I had in mind for my PC.
    • Re:Big deal (Score:2, Informative)

      For a year and a half AMD managed to blow Intel out of the water in the performance arena, from the time they both reached 1.0GHz with the P3 and Athlon, to the time Intel started releasing Northwood core P4s with the extra 256KB L2 cache. The longer pipeline of the P4 has nothing to do with performance. All it does is enable the processor to ramp in clock frequencies easily. A general rule of thumb is the longer the pipeline, the lower the IPC (instructions per clock/cycle) and the LOWER the actual number-crunching performance.
      • Re:Big deal (Score:2, Informative)

        Actually AMD had the performance lead for longer than that. They took the performance crown away from Intel the day the Athlon was first released, since it came out at 650MHz when the fastest PIII was only 600MHz, and the Athlon was, at that time, just slightly faster clock for clock than the PIII. AMD kept the clock speed lead, and increased their clock-for-clock advantage over the PIII, for the next few years. Intel only just started to catch up with the P4 2.0GHz (which was released just a little bit before the AthlonXP, i.e. when the fastest Athlon was only at 1.4GHz).

        Basically for the last 3 years (since the release of the Athlon), AMD has had the fastest x86 chips for about 2 years. Intel has had the fastest x86 chips for about 6 months, and for the remaining 6 months it's been too close to tell which was faster.

        As for the P4's long pipeline, I'd say that it WAS largely responsible for increasing the performance because it allowed Intel to clock the chips so damn high. They clocked the P4 up to 2.0GHz easily on a 180nm fab process. Compare this to the PIII, which they struggled to get up to 1.13GHz on the exact same 180nm process (and that took them until 1 year after their first attempt failed miserably and had to be recalled completely). AMD did slightly better with their Athlon design, but it still was only able to clock up to 1.73GHz on a 180nm process, and they had a more advanced process than Intel did in some ways (i.e. they were using copper interconnects).

        Long story short, performance is determined (in an overly simplified way) by IPC * clock speed. With the P4, Intel looked to sacrifice IPC slightly to dramatically increase the clock speed, with the goal of overall faster performance. When compared to the PIII at least, they definitely succeeded. Compare the fastest 180nm-process PIII (1.13GHz) to the fastest 180nm-process P4 (2.0GHz) and which do you think is faster?
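        As a toy illustration of that IPC * clock rule of thumb (the IPC values below are made-up placeholders, not measured figures for either chip):

          /* Toy sketch of performance ~ IPC * clock. The IPC numbers are
           * hypothetical, purely to show how a lower-IPC, higher-clock design
           * can still come out ahead overall. */
          #include <stdio.h>

          int main(void)
          {
              double piii_ipc = 1.0, piii_clock_ghz = 1.13;  /* fastest 180nm PIII */
              double p4_ipc   = 0.7, p4_clock_ghz   = 2.0;   /* fastest 180nm P4   */

              printf("PIII relative performance: %.2f\n", piii_ipc * piii_clock_ghz);
              printf("P4 relative performance:   %.2f\n", p4_ipc * p4_clock_ghz);
              return 0;
          }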
        • Hoser, the first Athlon was a 500 MHz Slot A version. I know because mine is still running sweetly after all this time. I do have a new Athlon 1400 though and it is mighty nice too.

          Heh, I see no reason to pay any premium to Intel for a processor with no immediate benefits.

          Anyway, nice post.
    • by Anonymous Coward on Saturday August 24, 2002 @09:47AM (#4133014)

      As compilers become tuned to exploit this, it's plausible that the Athlon's performance is going to lag quite a bit more than it already does. That there is some benchmark out there that is specifically designed to show off this strength of the P4 is no real surprise to anyone, is it?

      That's not the complaint at all. Read the linked article. The complaint is that Sysmark 2002 has been systematically altered relative to Sysmark 2001 so as to favour the P4 over Athlon.

      For example, the PhotoShop test in Sysmark 2001 had 13 filters, of which 8 run faster on the Athlon and 5 faster on P4. The Sysmark 2002 PhotoShop test has 6 filters, of which 3 are filters from Sysmark 2001 on which P4 wins and the other 3 are additions on which the P4 also wins. The 8 filters on which the Athlon does better have all been removed.

      There are several other examples in the article. Read the article.

      BTW, an interesting point is that this whole thing is basically an AMD publication that AMD have chosen to proxy via Van's. Van is at least open about it. The AMD presentation containing all the information in that article is linked at the end and is available here [vanshardware.com]

    • Re:Big deal (Score:3, Insightful)

      by Dr. Spork ( 142693 )
      I'm sorry, but I feel like I have to point out that your statement "Athlon's performance is going to lag quite a bit more than it already does" seems to imply that you actually think the P4 has the performance lead. You must not be reading slashdot much (check out 2600+ benchmarks and weep).

      Also, the story didn't imply this was a big deal. It only reminds us of all the dirty tricks Intel is forced to resort to when they try to maintain a market lead with a grossly inferior product. As long as people know this, benchmark-cooking is really no big issue.

      • The P4 still has quite a performance lead. I just took your advice and checked the 2600+ benchmarks (on Tom's Hardware). The Athlon 2600+ only beat the 2.8 GHz P4 in 3 of the 28 tests. The Athlon beat the P4 in barely 10% of the tests; that doesn't sound like the P4 is a "grossly inferior product" to me. And BTW, those benchmarks don't include the SysMark tests that are in question here.

        Intel's performance lead is even more impressive when you realize that the 2.8 GHz P4 is available to customers starting Monday, but the Athlon 2600+ won't be available to the OEMs for another month (and even longer before customers have it).
    • Re:Big deal (Score:5, Interesting)

      by Sivar ( 316343 ) <charlesnburns[@]gmail . c om> on Saturday August 24, 2002 @11:18AM (#4133275)
      Besides, AMD has always been the value chip company. You can't expect them to keep up with Intel forever.

      AMD has had a superior (in design) processor architecture to Intel since the K6 was released (though the K6 had mediocre FPU performance, the design was still more elegant--ask any x86 assembly programmer). The Athlon has given the P2, P3, AND P4 a run for its money, and early benchmarks of the hammer would seem to indicate that the expensive Itanium 2, which almost nobody actually uses, is going to be outrun as well.
      The Pentium IV's really looong pipeline does allow the P4 to run at higher clockspeeds, but the branch prediction you mentioned is instant death. Branch mispredictions happen VERY frequently in any CPU (note the K6 had the most sophisticated branch prediction unit up until the "XP" series of Athlons), but with the Pentium IV, a single branch misprediction requires up to 20 full clock cycles of work to be discarded.
      The Pentium IV has other questionable design decisions that hurt performance as well. It has 8K of L1 cache, the same amount found in the ancient 486 processor, whereas the Athlon has sixteen times as much (128K). Current P4s have more L2 cache, but L2 cache is less important and slower. (Note, though, that the P4's L2 cache is particularly fast as L2 caches go.)
      The P4 has buffers to remember a series of decoded x86 instructions so that it does not have to decode them again--these are almost required because of the terribly long pipeline--but it doesn't have enough to speed things up in server environments. Most servers execute a wide variety of instructions such that the buffered instructions get very little use before being replaced by new instructions. This is even more a problem on systems that run many different applications at once, but this problem can be demonstrated just with DB servers (which use plenty of instructions) as the P4 tends to not scale as well as the Athlon MP when a second or third task is added (such as mail serving, web serving, etc.)

      One disappointment that I had with the Athlon is that AMD never used the excellent EV6 bus to its fullest. Athlons are superior in multiprocessor capabilities because different processors needn't share access to the memory bus. On Intel SMP setups, even on P4 Xeons (which, IMO, are inferior to the P3 Tualatin chips by the same company), when one CPU accesses main memory, it locks main memory for the other CPUs. All other CPUs have to sit and twiddle their transistors while main memory is in use by only one CPU.
      On AMD SMP setups, ALL processors can access memory simultaneously, merely sharing the bandwidth. So, if one CPU is only using 100MB of memory bandwidth, the rest can be used by the other CPUs at that time.
      Unfortunately, this doesn't really matter much with only two CPUs, which is the largest AMD configuration you can get. You can, of course, see it in action with 8+ CPUs on EV6 Alpha setups (AMD licensed the bus from DEC's Alpha team) but Alpha setups are expensive as hell and are a dying breed.
      If AMD had created a quad or 8-way setup, we would see the true power of a good design.

      Fortunately, the Hammer has an even better design (one made by AMD no less) on an even better CPU. I fully expect the Hammer series to wipe the floor with all Xeons and possibly the Itanium 2 because of its design. An integrated memory controller that will tremendously drop memory latency, twice as many general-purpose registers of twice the size (Much less pushing and popping, for those that know some assembly) and, unlike the big vendor 64-bit processors, the ability to split half of the general purpose registers into chunks of 16 and 32 bits when huge numbers (2^64) are not needed. (On an Alpha/SPARC/R12000, if you want to store the number "42" you must use all of a register that can hold values up to 18,446,744,073,709,551,615. A bit wasteful)
      • Re:Big deal (Score:3, Insightful)

        by VAXman ( 96870 )
        The Pentium IV has other questionable design desisions that hurt performance as well. It has 8K of L1 cache, the same amount found in the ancient 486 processor, whereas the Athlon has that amount squared and doubled (128K).

        Obviously you flunked your freshman-level computer architecture course. The P4 8K L1's 2-cycle load-use latency is 50% better than Athlon 128k L1's 3-cycle load-use latency (not even accounting for P4's clock speed advantage). The difference in hit rate between 8k and 128k is only about 5% meaning that it is substantially faster to go with the small/fast cache than the big/slow cache. Do the math - even an infinitely large 3-cycle load-use cache is slower than an 8k 2-cycle load-use cache.

        Cache size comparisons are more meaningless than megahertz comparisons. Whenever somebody tries to justify a big cache size without looking at performance, just walk away. AMD is playing marketing games with their slow-as-molasses (but massive) L1 cache.

        I won't bother to address the rest of the technical errors in your post...
        • Re:Big deal (Score:5, Informative)

          by Sivar ( 316343 ) <charlesnburns[@]gmail . c om> on Saturday August 24, 2002 @12:24PM (#4133485)
          Obviously you flunked your freshman-level computer architecture course. The P4 8K L1's 2-cycle load-use latency is 50% better than Athlon 128k L1's 3-cycle load-use latency (not even accounting for P4's clock speed advantage).

          Obviously you are imagining things, as I never said that was not the case. Latency is important, but it doesn't matter if the cache size isn't large enough to fit enough code in to enjoy the low latency.
          The difference in hit rate between 8k and 128k is only about 5% meaning that it is substantially faster to go with the small/fast cache than the big/slow cache.
          Really? That's interesting, and here's me wondering why both AMD and, other than in the P4, Intel have wasted so much money adding more cache memory.

          Since you seem to be such an expert, why don't you go ahead and list a few common programs for me that have a working set of less than 8K--the size that will fit into the tiny L1 cache. Can't find any? Gee, I guess that makes the size of the cache pretty important then. When a program's working set has to be swapped in and out between L1 and L2 cache, suddenly that latency doesn't matter much. Of course, you may feel free to prove to me that the P4 can run addition loops faster. Those will fit into about 8k.

          Do the math - even an infinitely large 3-cycle load-use cache is slower than an 8k 2-cycle load-use cache.
          Who was it again who flunked their freshman computer architecture course? You're saying that if the Athlon had 512MB of L1 cache, the system would be slower than the P4 and its 8K of lower-latency cache?
          What math is it that I should do? Do you know what the working set of a program is?
          Having a tiny amount of cache is analogous to having a tiny amount of RAM. Put 32MB of low-latency RAM in your system. Overclock some DDR SDRAM to 200MHz (AKA "400MHz" by people that don't understand clock speeds) and set it to CAS2. Tell me how your system performs. Just as your system will have to swap just about all running code to disk, the Pentium IV will not be able to contain the core loops of the various running programs in L1 cache. The vast majority will have to be dropped to L2, which is significantly slower and higher latency, kinda defeating the purpose of that 8k of fast memory, no?
          Working sets that cannot fit into the P4's 256k or 512k of L2 will then be relegated to main memory and moved to L2, then L1, when the data is executed, and anything that won't fit in main memory (which very rarely includes the working set of a program) will be swapped to disk if the platform supports virtual memory.

          In closing, your comment was surprisingly brash and conceited, not to mention rude and totally inaccurate. Thank you.
          • Re:Big deal (Score:3, Informative)

            by VAXman ( 96870 )
            According to Hennessy & Patterson, 2nd Edition, page 391, the total miss rate (for SPEC92) of an 8k 4-way set-associative cache (like the P4's) is 2.9%. The miss rate of a 128k 4-way set-associative cache (like the Athlon's) is 0.6%.

            The hit time for P4 is 2 cycles, and for Athlon it's 3 cycles. The L2 hit / L1 miss is ~10 cycles for both. Everything further out is approximately the same so we can ignore it for simplicity.

            So, the average memory access time for P4 is (0.971 * 2) + (0.029 * 10) = 2.2 Cycles. The average memory access time for Athlon is (0.994 * 3) + (0.006 * 10) = a little over 3 cycles.

            Suppose Athlon had an infinite size L1 cache (or 512 MB if you like to use numbers). The highest hit rate it could ever achieve is 100% (actually slightly less, since you cannot eliminate compulsory misses). The average memory access time would then be 3 cycles - which is higher than P4's 2.2 cycles!

            BTW, Paul DeMone wrote a pretty good article [realworldtech.com] about P4's L1 cache.
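            A minimal sketch of that average-memory-access-time arithmetic, using the miss rates and latencies quoted above (SPEC92 figures; treat all the numbers as assumptions carried over from this thread):

              /* AMAT = hit_rate * L1_hit_time + miss_rate * (approx. L2 hit time),
               * exactly as computed in the post above. */
              #include <stdio.h>

              static double amat(double l1_hit_cycles, double l1_miss_rate, double l2_hit_cycles)
              {
                  return (1.0 - l1_miss_rate) * l1_hit_cycles + l1_miss_rate * l2_hit_cycles;
              }

              int main(void)
              {
                  printf("P4-style 8K, 2-cycle L1:       %.2f cycles\n", amat(2.0, 0.029, 10.0));
                  printf("Athlon-style 128K, 3-cycle L1: %.2f cycles\n", amat(3.0, 0.006, 10.0));
                  return 0;
              }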
            • Thank you, that was a much more informative post than your previous one, and probably more informative than mine as well. :)
              Regarding the comment about infinite cache, your message seemed to imply that latency was all that mattered, and that an Athlon with a huge (say, 512MB) cache would be slower than a P4 with its 8K of faster cache. Seeing as just about everything would run entirely from L1 cache on the Athlon, that seemed rather silly. Perhaps I misinterpreted what you were trying to say.
              Indeed, the L1 cache of the Pentium IV is extremely fast. The benchmarks of handling data sizes of 8K are astonishing, but size does matter as has been seen by increasing the L2 cache of the exact same Pentium IV.
              As I understand it, Intel chose an 8K cache both to reduce transistor count and to reduce L1 cache latency, which worked, but is of dubious real value. (it is difficult to tell for sure unless Intel makes a P4 with a larger, slower cache, which I doubt)

              The purpose of my original post wasn't really to focus on cache size vs. speed, but to highlight questionable design decisions made with the P4. There are other problems (what I consider problems) with the core as well, which I didn't highlight. For example, why is bit shifting so incredibly slow on the Pentium IV? It's faster on just about any other Intel or AMD processor, and shifting has long been recommended as an optimization for multiplying or dividing by numbers that fall neatly on power-of-two boundaries (256, 65,536, etc.), yet now this optimization can actually make code /slower/ on the P4? Eh?
              Looking at the execution time of various instructions, the P4 has taken quite a few other steps backwards as well.
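              A tiny example of the shift trick in question; whether it is actually a win depends on the CPU (the point above is that the P4's slow shifter can turn it into a pessimization):

                /* Multiplying or dividing by a power of two via shifts. */
                unsigned scale_up(unsigned x)   { return x << 8; }  /* same result as x * 256 */
                unsigned scale_down(unsigned x) { return x >> 8; }  /* same result as x / 256 */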

              I remember when people were criticizing AMD for calling the Athlon a 7th-generation processor. The rationale was that the P2 was about 30% faster per clock than the Pentium Classic, and the Pentium was about 30% faster (this all depends on the code, of course) than the 486, yet the Athlon was not 30% faster per clock than the Pentium 2 or K6.
              I wonder where all of these people are now that Intel's "7th generation part" is not only not 30% faster, but is actually 30% slower in most tasks, on average.
              Strange world we live in.
          • he's just trolling for his employer. He always does that.
      • corrections (Score:3, Informative)

        by RelliK ( 4466 )
        The Pentium IV's really looong pipeline does allow the P4 to run at higher clockspeeds, but the branch prediction you mentioned is instant death.... a single branch misprediction requires up to 20 full clock cycles of work to be discarded.

        The situation is not quite as dire due to P4's trace cache (you actually addressed that later in your post). Nevertheless, your point stands.

        On Intel SMP setups, even on P4 Xeons (which, IMO, are inferior to the P3 Tualatin chips by the same company), when one CPU accesses main memory, it locks main memory for the other CPUs. All other CPUs have to sit and twiddle their transistors while main memory is in use by only one CPU. On AMD SMP setups, ALL processors can access memory simultaneously, merely sharing the bandwidth. So, if one CPU is only using 100MB of memory bandwidth, the rest can be used by the other CPUs at that time.

        P4 Xeons (as well as P3s) have a shared memory bus. That is, multiple CPUs share the bandwidth of the 400MHz or 533MHz bus when accessing memory. However, Athlon has a point-to-point channel for each CPU. That is, each Athlon CPU has the full bandwidth of the 266MHz (soon to be 333MHz) memory bus, regardless of how many CPUs there are in the system. This means that beyond 2-way SMP systems, Athlon has a significant advantage in memory bandwidth over P4.
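        Rough back-of-the-envelope numbers for that comparison, taking the per-CPU-bus framing above at face value and assuming a 64-bit (8-byte) data path on both buses:

          /* Per-CPU bus bandwidth: a shared FSB divides its bandwidth among the CPUs,
           * while a point-to-point bus gives each CPU its own link. Illustrative only. */
          #include <stdio.h>

          int main(void)
          {
              double p4_fsb_shared = 400e6 * 8;   /* ~3.2 GB/s total, shared by all CPUs */
              double ev6_per_cpu   = 266e6 * 8;   /* ~2.1 GB/s per Athlon MP CPU         */
              int cpus = 2;

              printf("P4 Xeon (2-way), per CPU:   %.1f GB/s\n", p4_fsb_shared / cpus / 1e9);
              printf("Athlon MP (2-way), per CPU: %.1f GB/s\n", ev6_per_cpu / 1e9);
              return 0;
          }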

  • ...is AMD going to rewrite Sysmark to favor AMD?
  • hmmm.... (Score:1, Funny)

    by Anonymous Coward
    does it mean that SysMark's benchmarks are to Intel as Arthur Andersen's audits were to Enron?

    • does it mean that SysMark's benchmarks are to Intel as Arthur Andersen's audits were to Enron?


      It otherwise might, but this possibility may be somewhat mitigated by the reality that being 'in bed with finance executives' registers as just slightly less repugnant than the prospect of being 'in bed with benchmark coders.'
  • should be open. (Score:5, Insightful)

    by GoatPigSheep ( 525460 ) on Saturday August 24, 2002 @09:12AM (#4132920) Homepage Journal
    Obviously, the best bet for cpu benchmarks would be an open-source one compiled using a standard compiler. This is a case where open-source really shines.
    • Re:should be open. (Score:2, Interesting)

      by ObviousGuy ( 578567 )
      Wouldn't better CPU benchmarks be obtained by using the chipmakers' own compilers?
      • if you are measuring raw performance of the cpu, then you would want the program to not be optimized for any cpu.
        • Re:should be open. (Score:2, Insightful)

          by AvitarX ( 172628 )
          But you don't want to measure totally raw performance. You want to measure something approximating how it is going to do what you want it to do. So the benchmark should really be compiled however the vendors most often compile their software.
        • I think if you want real-world performance marks, it makes sense to use real-world apps. If you want to know how much power you can squeeze out of a chip, though, you ought to go to the source and get compilers that are designed to squeeze that power out.
        • But there is a chance, a good one, that a generic benchmark will not reflect real world performance due to random chance in the way the compiler works, or the test you are using.

          Suppose you decide to use Distributed.net as part of your benchmark suite. Certain processors (MIPS) do not have a hardware implementation of a certain operation (rotate left I think it is), which makes them seem extremely slow when you look at Dnet numbers.

          I don't think benchmarking will ever be boiled down to a simple solution, there are always little complexities which call any numbers into question.
          • Re:should be open. (Score:4, Interesting)

            by GoatPigSheep ( 525460 ) on Saturday August 24, 2002 @09:36AM (#4132986) Homepage Journal
            Well, in this case the comparison is between two x86 CPUs, the Athlon and the Pentium 4. Both support standard x86 instructions. If you want to measure how fast the CPU is, you would want the program to be unoptimized. Perhaps SSE would be fine, since both CPUs support it.

            Using optimizations wouldn't be fair unless you had a good idea of the percentage of programs that ARE optimized for one or both CPUs. Many new programs are optimized for both CPUs, such as Cubase SX, a software studio program. I suppose you could use one of those programs as a benchmark in addition to the raw, unoptimized open-source one, so you can get an idea of how well the CPU performs with or without its appropriate optimizations. Also, it makes a difference whether there is a free version of the optimized compiler, because if there isn't, there is a higher chance that programs made by individuals at home (who can't afford a $500 compiler) would not be optimized.
      • Comment removed based on user account deletion
      • No. (Score:4, Insightful)

        by FreeUser ( 11483 ) on Saturday August 24, 2002 @11:35AM (#4133318)
        Wouldn't a better CPU benchmarks be taken by using the chipmakers' own compilers?

        No.

        The chipmaker would simply then optimize their compiler for the benchmark(s) in question, rather than for code more generally. In other words, what you suggest would still allow the chipmaker to cheat.

        In order to have complete transparency in the benchmarking, both the benchmarks and the compiler should be open source (ideally free software, so that anyone can run and verify the benchmarks as well, allowing repeatable experimentation in the broadest scientific sense). If the chip maker wishes to submit optimizations to such a compiler they would be free to do so, since any such optimizations would in turn be open source (or free software) and subject to peer review.

        A good candidate would be gcc, which runs on numerous platforms, and on several operating systems on AMD and Intel hardware.

        Cheating would be much harder in this case, perhaps even impossible, something we need given the sordid history of benchmarking by all parties involved (except perhaps AMD? Can anyone recall an instance where AMD has cooked results? I ask because their current chip rating system is extremely conservative ... almost the antithesis of what Intel is trying to do. Has this been a longstanding strategy on AMD's part?).
        • I think it's not so easy, at least in the world that we live in.

          My naive idea about how chip-features are designed is that the hardware people meet the software people and a discussion goes like this:

          We could make a longer pipeline, or add more registers, or whatnot... for about the same money/silicon-real-estate/complexity. Where would our users and their applications benefit the most?

          So the chip is designed with a specific compiler in mind (or at least by people who will have a specific idea of optimizing for a compiler) which in turn is optimized for a specific workload.

          And then when you are asked about producing a "real world benchmark", you will most likely create an environment that meets the criteria that went into the design of the compiler, which in turn matches your CPU very well.

          So I would not rule out that Intel tried to cheat, or at least tried to have a benchmark that matches their CPU best but much of this comes from the way the processors are designed.

          [Of course, that's just my naive idea about the process, and I will accept a paid-for trip to Intel HQ with a lab visit and expensive dinner to get a more objective opinion about that... ;-) ]
        • gcc would be a shitty choice for your benchmark because so much more optimization goes on for particular CPUs than for others. It might be effective in benchmarking between intel-x86 and amd-x86 but when you got to sparc, say, it would be horrible.

          Application benchmarks use (supposedly) the same code as the applications which they mimic. When this is the case, they should use the same compiler which the application developer uses. Anything else is wankery. This is the only way to give an accurate projection of CPU use.

          CPU benchmarks should be written in assembler and optimized for each CPU. The idea is to show the ability of the CPU. Then they should ALSO contain a general-purpose benchmark written in C (or similar) which will show what most applications will actually get out of the CPU. For THAT portion, gcc would be a reasonable choice, but still not a good one. I don't know about today, but five years ago most commercial software for Solaris/Sparc was written using the SUNSpro compiler, not gcc, because the code it turned out was dramatically faster than that which gcc produced. On architectures on which gcc is weak, it doesn't make much sense to use gcc for a benchmark.

        • by BWS ( 104239 )
          Their PR Rating (back in the Pentium Days) is the biggest piece of BullShit there ever was..
          • Their PR Rating (back in the Pentium Days) is the biggest piece of BullShit there ever was..

            No, Cyrix's PR Rating was bullshit. AMD marked their CPUs in the normal way back then.
            • by BWS ( 104239 )
              You're wrong... it was Cyrix that started the PR rating, but AMD joined in.
              • You're wrong... it was Cyrix that started the PR rating, but AMD joined in.

                Dear obstinate fool: if that be the case, then where, pray tell, do I find the PR rating on this AMD K6-2 processor from 1998 I currently hold in my hand? There isn't one, you obtuse buffoon! In the days of the Pentium-clone processors, only Cyrix used the "Pentium equivalent" rating. AMD marked their CPUs based on the clock speed. The CPU I have here says: "AMD-K6-2/366AFR". Its clock speed is 366 MHz. No PR rating. Just clock speed. Since you seem to be so sure that AMD used the PR rating, please produce for me the markings for just such a CPU. Surely you can find a simple string of characters like that, yes? Ignoramus.
      • The source should be published, and compiled with a standard (GCC) compiler.

        The CPU vendor, or any other compiler vendor can compile the same code, and publish the benchmark as well.

        This then opens up the competitive market in compilers.

        What I'd like to see is the same code, compiled with cross-CPU options, i.e. what happens when the code is optimized for Intel but runs on AMD. How much does using the wrong optimizations penalize you?

        As long as the compiler, and options used are disclosed, I don't see a problem.
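        As a concrete sketch of "same code, cross-CPU options" (assuming a gcc 3.x-era compiler that accepts -march=pentium4 and -march=athlon-xp; the file and binary names are just placeholders):

          gcc -O2 -march=athlon-xp -o bench_athlon bench.c
          gcc -O2 -march=pentium4  -o bench_p4     bench.c

        Running both binaries on both CPUs and comparing the four timings would show how much the "wrong" optimizations cost on each chip.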
    • Perhaps you should keep browsing Van's page then. He has an idea about making an open source benchmark using a standard compiler [vanshardware.com]
    • Re:should be open. (Score:3, Interesting)

      by Shadow99_1 ( 86250 )
      This would seem to be where Van's COMPREHENSIVE OPEN SOURCE BENCHMARK INITIATIVE (COSBI) [vanshardware.com] would be useful... You could always get in touch with Van about helping out with the project...
  • Similar to the Quake/Quack debacle, I wonder if it would benchmark differently if the processor reported itself not as "Intel Pentium", but as "Incur Penalty", "Inept Penguin" or "Inane Penises". :)
  • AMD did the data-mining. Just like a poor fansite, Van's just posted the PR, complete with their own pretty graphs made from the numbers AMD ran.
  • Nothing seems to surprise me anymore in this world.

    accounting tricks, monopolies, dirty tricks in business.

    I guess we should have seen this kinda thing coming a long time ago.

    In a way, anything imaginable in this line of business is probably also likely to happen.

    What's next?
    The accused, Bill G., was caught red-handed manipulating government studies concerning TCO and alternative OSes?

    Oh wait......

    welcome to the real world.
  • Notice that the Via C3 does phenomenally better than the 1.7GHz P4-Celeron. Now consider how utterly weak the floating-point unit (FPU) of the Via C3 is, its use of SDRAM, and its lower-bandwidth memory bus compared to the Celeron. I won't call the P4-Celeron a performance processor, but VIA's C3 should *NOT* be able to beat it in ANY performance benchmark.

    What this says is that SysMark is really poorly coded, not "optimized" to favour Intel silicon. Incompetence isn't evil. It's just... incompetent. This would explain why most serious benchmark runs seem to lack SysMark these days...

  • Partially AMD's Fault (Score:2, Interesting)

    by mtthws ( 572660 )
    I hate to say this, I am a big AMD fan, but it is partially AMD's fault that SysMark favors Intel. They have refused to work with the BAPCo people in the past, knowing that the Intel people have. Is it any surprise that BAPCo ends up favoring the company that works with them over the one that ignores them? AMD is now supposedly working with them to make the next version do a fairer job of testing their processors, so hopefully this will be a non-issue in future releases. It would probably be most fair if AMD and Intel would both let the benchmarking programs be written without either of their interference, but if one is going to get involved, then they both really need to.
    • by Ninja Programmer ( 145252 ) on Saturday August 24, 2002 @09:47AM (#4133013) Homepage
      BAPCo's headquarters are on the Intel campus. It's been Intel-biased from day 1 (back when AMD was making K5s and thinking about making K6s), and AMD has known this.

      The fact is, prior to the release of the Athlon, nearly all benchmarks were biased towards Intel. AMD's strategy when they released the Athlon was to make a CPU so good it could beat Intel's CPUs even on these benchmarks. SysMark just happens to be the one benchmark where Intel exercises so much control that it could literally say whatever Intel wanted it to say.

      What you are seeing is AMD just starting to switch strategies from "let's just beat them on every benchmark under the sun regardless of bias" to "let's expose the bias where it is at its worst so people can know the truth".

      This is all just preparation for the K8 launch I think. If AMD can properly put Sysmark results into perspective, maybe everything that is left will show what a monster K8 is versus any Intel offering. It is indicative that the K8 may not be winning on Sysmark on internal testing, or may not be winning by a sufficient margin.
    • I think this speaks volumes: Intel are schmoozing the benchmarkers while AMD are designing kick-ass processors. I hope the stockholders are listening!
      • Stockholders don't give a crap about technology- they care about money.

        That said, considering the new Intel 2.8 GHz P4 beats the new Athlon 2600 in a majority of the benchmarks, I would say that both companies are designing kick-ass processors. The difference is that Intel markets itself tons better.
  • by mustprotectdata ( 585131 ) on Saturday August 24, 2002 @09:40AM (#4132995)
    Coming from the Unix world, I'm used to comparing machines based on their SPECint and SPECfp performance...

    In general the SPEC people have done a better job being platform agnostic than some of the "miscellaneous" PC benchmarks.

    Current benchmarks for Intel http://www.spec.org/osg/cpu2000/results/res2002q2/cpu2000-20020506-01357.html

    and AMD http://www.spec.org/osg/cpu2000/results/res2002q3/cpu2000-20020701-01441.html

    Keep in mind that results for more recent AMD CPUs are not shown. If you compare the AMD 2200 with a 2.2G P4 you'll have 734 v's 784, which gives some credence to AMD's claimed rating.

    html4me!
    • That's actually AMD = 764 v's Intel = 784 so it's even closer than stated above, i.e. within 3%.

      Like anyone would be able to tell :-).

      And my poor little Sunblade 100 is only 174. No wonder Solaris seems slower than linux.

      html 3
  • by Cutriss ( 262920 ) on Saturday August 24, 2002 @09:49AM (#4133019) Homepage
    Here's Kyle's 4th Edition post from yesterday. Excerpts from Van's comments are in italics.

    VansHardware & AMD: There is a report on VansHardware this morning [vanshardware.com] that visits the differences between BAPCo's SysMark 2001 and SysMark 2002. The report's basic theme is that SysMark 2002 is skewed towards making the Intel Pentium 4 results look better than the AMD CPU results could have looked. It basically shows examples of things that were changed in SysMark 2002 that cherry pick areas in certain programs that the Pentium 4 excels at. While the article might seem to be work done by VansHardware there is something you need to know. All of the data shown in that article has been put together by AMD and not VansHardware. Take note of this one statement in the article.

    However, AMD has been able to "pick the lock" on SysMark to gain a much keener understanding into the internal workings of these tests.

    VansHardware is not the one with the "keener understanding", AMD is.

    The original PDF document from AMD is linked for download [vanshardware.com] so the fact that this data is not Van's is not exactly hidden either.

    Also their opening paragraphs state this.

    At this moment we will pause from the long march through our benchmark results to revisit the significant issues regarding BAPCo's SysMark 2002 brought up by AMD during our recent meeting with representatives from that chipmaker.

    We must state up front that despite the condemning information divulged to us, the AMD spokesmen repeatedly expressed support and guarded optimism for the reformation of BAPCo.


    The "significant issues" and "condemming information" shown were not harvested by VansHardware, actually all they do is interject a little bit of commentary.

    AMD has verified to me this morning that all of the graphed and tabled data shown on the VansHardware report is data that has been mined by AMD. Does this make the data inaccurate? Of course not, but I am sure that it hardly shows both sides of the story. AMD is not going to supply VansHardware with information that makes Intel look good. VansHardware represents to me nothing more than an AMD fansite that takes shots at Intel every chance they get. I think they are far from what anyone could consider objective journalists and reporters. Them doing a cut-and-paste job with AMD's data goes to show that as true, in my opinion. Websites get fed information all the time, trust us, we know. It is our job to go back and prove data and claims in our labs on our own time, not to repost corporate data that can be considered far from objective. Independent sites in our hardware community should not be reposting PR spin in such a way as this. There is a fine line here but I think this is stepping across it.

    VansHardware does not exactly hide the fact that the data shown is not theirs but rather AMD's, but they certainly did not seem to represent that in an upfront manner so the reader sees the information for being exactly what it is...data released by the AMD PR machine.

    I am a huge AMD fan but I just don't like big companies being able to pump their corporate data into our community when it is not presented as such. I think AMD should have the balls to post information like this on their own website and not try and "slip it in" through a back door. In fact, I would consider the information to be much more credible if it were posted on AMD's own website as AMD research.

    I know Van has gotten upset here recently with his past employer removing his name from articles he has written. It seems to me that Van has done little to deserve his name being on this article and it should show authored by AMD.

    (ED NOTE - This is referring to some allegedly plagiarised articles that Tom's Hardware published after removing Van's name from them)

    Also worthy of mentioning is that AMD is now fully working with BAPCo, which they have not done in the past. AMD has had the ability to work with BAPCo for a long time now to make sure their products get represented properly and we are certainly happy to finally see AMD join the party to give the boat a more even keel.

    Lastly, another tidbit worth throwing into the mix is that Van Smith, owner of VansHardware, possibly either works for or is contracted to VIA as a CPU validation tester. We are working on a confirmation of this from VIA now. Do we need hardware websites that do work for the companies they end up reporting on? Just another thing to consider when objectivity is in question.
    • I'm surprised you got moderated as "overrated". What Kyle said is actually rather insightful (and something that I've believed about Van for a loooong time).

      Van is just a fanboy with a website. He doesn't do his own research, conduct any tests, or even say anything moderately interesting. It's like giving the pulpit to a slashdot poster during a congressional hearing on the constitutionality of the DMCA. "IT SUXORS!!! DOWN WIT DCMA!!!!111"
      The article was clear at Van's Hardware: he wrote an article using AMD's information... Van Smith should have written the article with a little more distance from AMD, but that doesn't alter the facts from AMD.

      I didn't see that article over at HardOCP when I posted the news last night. But after reading the HardOCP comments, you can see Kyle is really pissed off at Van Smith. Kyle even links to another site, Real World Tech [realworldtech.com], where people are saying they're glad someone released the information... Could it be HardOCP is getting ready to release a major article, and Van's Hardware took the spotlight?

      There is a hint of back-room dealings going on. I picked up a new magazine, "CPU", that has people from various places. It will be interesting to see what happens with the major fansites over the next year... Here's a list of authors for "CPU" magazine: Rob "CmdrTaco" Malda, Anand Lal Shimpi, Kyle Bennett from HardOCP, Joan Wood co-founder of Sharky Extreme, Alex "Sharky" Ross, Alex St. John (founder of DirectX at Microsoft), Chris Pirillo (creator of LockerGnome/host on TechTV), Pete Loshin (former editor of BYTE Magazine, runs Internet-standard.com), Lisa Lopuck (author of Web Design for Dummies).

  • When I dig through reviews of the latest CPU and/or mainboard, I initially groan at the increasing number of benchmarks folks put out. But it's more than just increasing click-through rates (well, maybe not for some, but...) - it lets me see applications that I use. Synthetic benchmarks and politicians' promises garner the same level of trust from me.

    Anyhow, I game and code, and use games to judge where my cash goes. When the P4 came out, I saw it did a great job with Quake and I started to get excited about the CPU. Then I saw the benchmarks on the games I actually play - UT, CS, and a few others - and it was not black and white. After the ATI fiasco, Quake is up there with synthetic benchmarks IMHO. As for Photoshop, you can pick what platform you want to 'win' by tuning the filters. Apple does it, and their dually box wipes out the competition; the others do it and the tables are turned.

    There are great graphs out there that show benchmarks using different sizes of data. It's like comparing a small turbocharged engine to a larger normally aspirated one - so what RPM were you at when you ran your test? BMW's M5 feels slower than an Audi S4 at the start, but get the RPMs up there and it is a different story. Even pickup trucks can beat a Ferrari if you tune the test to take advantage of a sweet spot.

    I've done my homework, and my personal cluster is mostly AMD today. I still have one Celeron 566@800 as a CS server, but my workstation (an Intel Xeon box) was replaced by AMD MP chips. Secondary boxes are all XP chips, but they used to be PIIs and PIIIs back when Citrix and the K5 sucked. They run Oracle, WebLogic, LDAP, and other stuff quite well when I'm working, and one swap of a hard drive later I'm getting some solid fragging in on the same box. In another year or so, if Intel really holds the crown, the price is right, and my boxes are 'only fast enough for web browsing and email', I'll choose them.
  • A couple of days after some lawyers get together for a class-action suit alleging that Pentium IVs are slower than the AMD competition, new BAPCo tests 'prove' that the Pentium IV was quicker all along.
    Nice one, Intel. At the very least, this should muddy the waters enough to make which one is quicker a matter of opinion.
    I use a 7-watt Via C3 as opposed to one of the 60-watt P4/Athlons and do not really care either way.
  • ... How well do they do with Photoshop batch processing?

    I'm actually partly serious here, I think wider publication of more 'real-world' performance figures is in order. The people who frequent sites like Tom's Hardware and Anand are the only ones who really care about raw benchmark numbers. The rest of the world is more interested in getting their work done more quickly.

    • That's what SysMark is supposed to be: they measure "real-world" performance figures - they run a slew of Photoshop filters, and time it, and other crap.

      Unfortunately, SysMark's testing strategy is really terrible. I'm even a bit confused how it works: they say that they scale each test based on how long it takes to complete: but is the scaling from a "reference system" or from each system? If it's from a reference system, then it's biased against whatever that reference system is good at (since the difficult bits get weighted more). If it's from each system on the fly, then it's really meaningless, as one poorly-chosen benchmark can skew the whole thing.

      Worse yet: in SysMark 2002, AMD claims that BAPCo uses the same benchmark, multiple times: this is just plain bad, because not only does it magnify the importance of this benchmark, it shrinks the importance of all of the other ones. It's just plain idiotic. Take 3 tests, run them 4 times each, and use the results from all of the runs? It's a very very obvious bias - the only reason you would do that is if you wanted to cheat for one specific processor, and you knew which filters it was good at.
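      A toy illustration of why counting one test several times matters (this is NOT SysMark's actual scoring formula, just a simple average of made-up per-test ratios):

        /* Averaging per-test speed ratios; ratios > 1.0 mean "chip B wins".
         * Counting one favorable test four times drags the aggregate toward it. */
        #include <stdio.h>

        static double average(const double *r, int n)
        {
            double sum = 0.0;
            for (int i = 0; i < n; i++)
                sum += r[i];
            return sum / n;
        }

        int main(void)
        {
            double fair[]   = { 0.8, 0.9, 1.2 };                /* three distinct tests  */
            double padded[] = { 0.8, 0.9, 1.2, 1.2, 1.2, 1.2 }; /* one winner counted 4x */

            printf("fair suite:   %.2f\n", average(fair, 3));   /* ~0.97 */
            printf("padded suite: %.2f\n", average(padded, 6)); /* ~1.08 */
            return 0;
        }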
  • by DeadBugs ( 546475 ) on Saturday August 24, 2002 @10:26AM (#4133134) Homepage
    HardOCP [hardocp.com] notes that Vans got their info from AMD so it may be a bit biased. a quote from HardOCP:

    " AMD has verified to me this morning that all of the graphed and tabled data shown on the VansHardware report is data that has been mined by AMD"

    "AMD is not going to supply VansHardware with information that makes Intel look good. VansHardware represents to me, nothing more than an AMD fansite that takes shots at Intel every chance they get. I think they are far from what anyone could consider objective journalist and reporters."
    • If they are telling the truth, it doesn't matter how biased they might be. They could believe in black helicopters and Elvis sightings for all it would matter. The question is: is the information they put forth accurate? If so, then Intel is indeed yanking people's chains with benchmarks (as a Mac dude I can't repress a 'oh, THAT'S a surprise' reaction) and the bias is in how the site draws conclusions from this, and how loudly they remind people of stuff like the class action suit over misleading performance claims for the P4.

      Which, surprise surprise, they do indeed remind people of! And if this is true, they'd be right that it was a smoking gun w.r.t. that lawsuit, too.

      Let them go on being an AMD fanboy site. I don't see INTEL fanboy sites breaking this story.

  • However, AMD has been able to "pick the lock" on SysMark to gain a much keener understanding into the internal workings of these tests.

    Isn't that a violation of the DMCA?
  • Besides SPEC Benchmarks I don't trust any other ones.

    They need a SPECquake.
  • All benchmarks favor whoever requested they be written. So it's a crap shoot: choose the one you want to believe in, then go do your own tests. Your own testing is the only testing that matters.
  • I've read a lot of architecture bashing. "P4's 8K L1 is too small to be useful." "AMD's huge L2 is too slow to be useful."

    Both of these companies spend *billions* of dollars on producing these processors. Both companies run lots of simulations to determine what design choices best fit with the rest of their design. When you're spending that much time and money developing these CPUs, you can't afford NOT to consider every option.

    When it comes down to it, both AMD and Intel have really good engineers, and both companies listen to them when figuring out how to build cpus.

    So consider that the P4's 8KB L1 cache is so small because that's as big as they could make it while keeping the latency down to 2 cycles-- something that was critical to keeping their double-pumped ALUs busy (and thus their IPC up as much as they can)-- and that they could compensate by working a bit harder on a fast L2.

    Perhaps AMD decided that they could live with an extra cycle of latency in the L1 because they have enough instructions in flight that blocking on a cache access wouldn't hurt them as much as a low hit rate would.

    Or, perhaps there are multiple sweet spots in size/hitrate, especially when you factor in die size and cost. Honestly, I don't know the reasons why they made these decisions, and I'd love to find out why-- but I have 100% confidence that all the options were carefully considered.

    When it comes down to it, both architectures are performing really well! And for YEARS, they have been competitive with each other. So while you may have your favorite (I, for example, think the P4 SMT and trace cache stuff is pretty neat), you've got to realize that zealously promoting one over another just makes you look silly.

    Cheers!

    -Ed

  • I never have and I never will trust benchmarks, just like I will never trust any specs that a company puts out until I try the machine for myself. It's one of the reasons I use Apple computers. As I type this, I'm on a custom-built PC with an AMD Athlon XP 2000 processor. It has 512 MB of memory, an Abit KR7A-133 mobo, and plenty of other bells and whistles. Now, theoretically speaking, this computer should be a hell of a lot faster than my 300MHz iBook. Yet for everything except playing UT, this computer seems either equivalent to or sometimes (especially right after startup) slower than the iBook. My point is, don't trust the numbers; use it yourself and decide for yourself. You're the one using the computer, and 90% of using a computer is how it feels to you.
  • I've been saying that as software gets optimized for the P4, it'll start seeming to be the faster processor - look at Lightwave 7, 3dsmax 4.2sp1, and now this. :)

    if(cputype==AMD) {
    sleep(1);
    }
  • I bought 3 athlonXP motherboards and an athlonXP1800 last spring. I threw out every single motherboard and the cpu is in my closet. I have downgraded back to my pentiumIII. Why?

    I had major stability problems and the Athlon burns too hot. First I realized my old power supply did not have the power to even boot these babies (300 W), so I had to buy a new power supply, then new RAM, then a new case. After this my first motherboard would boot and then crash right after it was done POSTing. I returned it for an Abit which was extremely buggy: my Netgear NICs wouldn't work, it blue-screened multiple times whenever I used my GeForce (a well-known bug), and the APIC controller did weird things in Linux and sometimes would not shut down properly. It then died totally 2 weeks later. It was exchanged for an MSI board that surprisingly worked as expected, well, at first. The system would not shut down properly under Linux, but I didn't care that much. I then noticed the CPU kept reaching above 45C, which could damage the chip over time. A third problem was that whenever I enabled the nvidia OpenGL patch the system would crash quite often. The problem went away if I disabled it. I built the drivers from scratch, so it was the right version for my kernel compiled from source. I then spent $60 for a top-of-the-line CPU cooler. I had to use some force to get it to close on the CPU, and my screwdriver flew off the cooler and damaged a chip on the board!

    The guys at the computer shop said they would only exchange my MSI for another of the exact same model, and they were becoming aggravated at me for obvious reasons. I was so angry I just said f*ck this sh*t and didn't bother to replace my motherboard. Even if I get the board replaced, I am faced with the same problems and bugs! I admit I broke the last motherboard and it was totally my fault and not AMD's, but after a month and $750 later, I did not care. DDR RAM only worked with Athlons at the time, and I now had two cases and power supplies which I did not need. I felt like a sucker who had just flushed $750 down the toilet when I downgraded back. Why should I have to put up with that crap? Anyway, the first board was probably defective and the second should never have left the manufacturing plant. I did look up my bugs online and there were many pissed-off consumers who had the same problems with the same exact sets of hardware, so the Abit one is a piece of sh*t. Many early chipsets for the Athlon processors are buggy, especially VIA's. There is even a well-known Linux/nvidia/AMD bug that can crash your system if you do any OpenGL, which plagued my MSI system.

    Are there stable, bug-free, wonderful Athlon boards out there? Yes, I am sure there are. I am not trying to start a flamewar here, but rather just share my experience with them. I am thinking about finally getting rid of my Pentium III. This time with a genuine Intel processor.

    Is it the fastest or cheapest? No. Do I care? No. I want something close to the fastest that will work with my case, work with my power supply, work with all of my peripherals, and be reliable. All the benchmarking websites do is show how fast the chip is. I want to know how reliable it is. I can find some vendor references to overall reliability, but the boards vary from chipset to chipset. If I put down big bucks to upgrade my system, it had better work and keep working for a long time! There is a similar argument about buying an expensive Sun box over a cheap Lintel one. You get what you pay for. This is why many businesses choose Intel overwhelmingly over AMD.

    Most of the reliability problems which plagued the first generations of Athlon processors are gone, and I admit the first Intel 810 chipsets were terrible, but there are fewer bugs with Intel chipsets overall. I am willing to spend $250 more this time, feel at ease, and look forward to using it for a long time.

  • Which was the compiler company that wrote into its compiler the ability to recognize a common benchmark that didn't require output, and just converted it to NOOPs? Wow, did that computer ever chomp on those NOOPs fast...

    Benchmarks measure speed on benchmark code. It's like horsepower in a car - a car with 300 horsepower isn't necessarily faster than one with 280, or even 200. It just depends, man.
  • Yawn (Score:2, Flamebait)

    by Sebastopol ( 189276 )
    Is Intel forcing anyone to use BapCo benchmarks? Aren't there dozens of websites out there with their own? Just look at any chip review on Tom's Hardware or Anandtech, Sysmark is only one of many benchmarks used.

    Is this a surprise? Are people actually "outraged"? Please. Are you trying to tell me AMD has never hand-picked their own benchmarks?

    Any CPU manufacturer will pick the benchmark that makes them look best but try to play it down, and then use that score extensively as propaganda.

    Just look at when Apple used BYTEmark to claim a 400% performance boost over Intel chips -- of course, they used the 486-compiled version on a P6 core, but they didn't let that little nugget of disinformation get in their way.

    Intel, AMD, Apple -- all whores.

  • There have been some more developments around Van Smith's review of the BAPCo benchmarks. Tom Pabst, the owner of tomshardware.com, has written an editorial condemning the journalistic integrity of Smith and of Kyle Bennett of hardocp.com.

    Here are part 1 [tomshardware.com], part 2 [tomshardware.com], part 3 [tomshardware.com], part 4 [tomshardware.com] and part 5 [tomshardware.com]. Pabst's accusation is that Smith and Bennett have both written articles where they claimed to have discovered flaws in benchmarks that make one manufacturer's product look good, when they were really being coached by that manufacturer's rivals.

    Here is Smith's rebuttal [vanshardware.com].

    Van Smith used to work for Tom Pabst. In my opinion the quality and utility of tomshardware.com has gone down since Van Smith departed.

    And, about this fight, I would say that Dr Pabst (he is an MD) hasn't learned the value of civility. In my opinion, in a fight like this one, people can't really follow the details, so they base their assessment of who is right, by looking to see who remains more civil.

"In my opinion, Richard Stallman wouldn't recognise terrorism if it came up and bit him on his Internet." -- Ross M. Greenberg

Working...