AMD

First Benchmarks of AMD Hammer Prototype 497

porciletto writes "As seen on Ace's Hardware, this article features Quake 3 benchmarks comparing an 800 MHz ClawHammer sample to Athlon MPs at 800 MHz and 1667 MHz, as well as a Willamette Pentium 4 (256 KB L2, 400 MHz FSB) at 800 MHz and 1600 MHz. The benchmark results indicate a 40% performance increase over an Athlon MP for the ClawHammer. Additionally, the 800 MHz ClawHammer manages to tie (actually outperform by 1 FPS) the 1667 MHz Willamette Pentium 4."
This discussion has been archived. No new comments can be posted.

  • by xiox ( 66483 ) on Friday June 07, 2002 @08:36AM (#3658770) Homepage
    They tested some software which had been compiled for 64-bit mode. With the large number of 64-bit registers the Hammer has, there should be some significant speed improvement.
  • by Anonymous Coward on Friday June 07, 2002 @08:39AM (#3658790)
    I've been saying this for many months, and I'll say it again: By far the biggest problem on Intel's horizon is the AMD Hammer series of chips. In the IA64, Intel decided to make a clean break and go to a new architecture, incurring a performance hit when running IA32 code. AMD instead blew out the IA32 architecture to 64 bits.

    Expect a massive FUD attack from Intel in the coming months as they try to convince the world that their chips aren't really inferior to those from AMD.

    • by jtshaw ( 398319 ) on Friday June 07, 2002 @08:58AM (#3658867) Homepage
      Not to take the side of Intel... but as an Electrical Engineer with a good amount of interest in microprocessor design I have to say I like Intel's move away from x86. x86 is definitely not even close to the best computer architecture out there.

      It makes more sense for AMD to spend their time building a 64-bit x86 processor than a completely new architecture atm. But that doesn't mean we wouldn't all benefit greatly from dropping x86. Of course this can't be an overnight change, but it does need to happen.

      Eventually you have to break backwards compatibility to move forward without making things ugly. x86 is old, it is overly complex, it is inefficient in many respects, it is time to say goodbye. There is a reason the original designers only expected it to be a 3-5 year temporary solution.
      • by dpilot ( 134227 ) on Friday June 07, 2002 @09:24AM (#3658970) Homepage Journal
        Not to take either side...

        But if Intel was going to supersede a messy architecture like x86, I wish they'd done something better than IA64. While the jury is still out on the merits of IA64, it has some of the marks of Internal Politics on it. It sounds like a VLIW camp inside Intel sold some management on a renamed version of the basic approach, and the project gathered Corporate Inertia.

        At the same time, it doesn't sound as if all of the VLIW problems have been solved on the compiler side, so it's not clear that IA64 is doing any more than a clean, modern architecture capable of OOO execution could have done.

        Out of the Hammer series, I'm reminded/hoping for the phenomenon described in "Soul of a New Machine", where they managed to clean and extend the old architecture at the same time. By the time they were done, the old architecture was an ugly wart on the side of a new clean one. The fear was the new being an uglier wart on the side of an already ugly one, and they avoided it.

        I don't know enough about Hammer to know what the case is. I have the documents, but haven't made time to read them. I've also heard some rumblings that some of the performance improvements to IA64 involve de-purifying its VLIW to pick up OOO techniques. I've heard that VLIW was an attempt to sidestep OOO because those problems were feared, but in the meantime the industry has learned how to do OOO pretty well.
        • It sounds like a VLIW camp inside Intel

          *sigh*

          I wish we'd just start calling these data types what they are - int16, int32, int64, float64, etc. It could save us all so much confusion. I mean, what are they going to call it when chips move to 512-bit? Uber Turbo Fantastically-Amazing Super Very Long Instruction Word? :/
        • AMD's 64-bit "long mode" IS cleaner than ia32 protected mode:
          -- No segmentation
          -- Can address bottom 8 bits of every GP register (in other words, GP registers are truly general purpose now)
          -- Some stupid instructions removed (e.g., BCD ops)
          -- Recommend using SSE2 instead of the x87 horror

          In addition you get the nice extensions of long mode:
          -- 16 GP registers
          -- 16 SSE2 registers
          -- 64-bit ALU and memory ops
          -- IP-relative addressing mode

          If you look at long mode and ask, "what's really horrible about this?" I would only say instruction encoding and a large number of remaining wacko instructions. But together these give the x86 a performance advantage it has always had over other designs --- small code size and therefore better memory system performance for the instruction stream.
      • by BitMan ( 15055 ) on Friday June 07, 2002 @11:51AM (#3659943)

        As a fellow ECE, I'll give Intel a mark in the "innovative" column on IA-64. But the concepts of predication, EPIC and compile-time optimizations were NOT good enough to even make the new architecture competitive when not considering x86 compatibility. And Intel needs to be smacked for all those stupid extensions -- it's funny to see AMD accommodating them with less effort than Intel.

        Alpha has always been the "64-bit RISC of RISCs" and they had binary translation technology c/o FX!32, so Linux/x86, NT/x86 and VMS/VAX apps could run on Linux/Alpha, NT/Alpha and VMS/Alpha, respectively. It was not only original, but binary translation on the same OS, different architecture, works far better for software compatibility than general (any OS) architectural compatibility in hardware/microcode! An Alpha 364 at 0.13um would be kicking IA-64 butt. I mean, 3-year old Alpha 264 0.25um processors beat IA-64 at the same clock speeds!

        Anyhoo, as a fellow EE/ECE, please read this post I made a few weeks ago and let me know what you think. It is entitled "How AMD and its partners are putting x86 back on the right track ... " [matrixlist.com]. IA-64 was an ideal and novel concept, but one that doesn't hold up in reality, where good branch prediction is better than predication, and run-time optimization is just as important as compile-time. The Alpha 364 team predicted the "problems" with IA-64, which came true.

      • Eventually you have to break backwards compatibility to move forward without making things ugly. x86 is old, it is overly complex, it is inefficient in many respects, it is time to say good bye.

        Everybody has been saying that for twenty years.

        Twenty years. It is far too late for x86ers to worry about "making things ugly". That sacrifice was made in the early 80s. And it paid off.

        The reason Intel is still in business is because they knew what drove the market. Superior (in performance, power use, and just plain elegance) alternatives were around all along, but x86 still got all the sales. The reason for this is that the strongest market force is the need for good compatibility with The Legacy. Against this force, all other considerations are irrelevant.

        That's why Intel survived (flourished) in the 90s, and why AMD is about to kick their ass. AMD's embarrassing toadying to this principle in the Hammer design shows that they understand. Intel's attempt to raise the bar shows that they have forgotten. Intel's chip is going to be the next 68k or PPC or SPARC. It'll be a niche, where everyone says how neato it is, and yet few actually use it. And in the meantime, AMD will be selling gazillions of Hammers.

    • by Jay Carlson ( 28733 ) on Friday June 07, 2002 @09:26AM (#3658990)
      Yeah. But IA64 made a lot of sense for Intel, given their market position when the effort started.

      Think back to Rambus. (Back?) Intel got a lot of options on Rambus stock, provided that Intel could ship n percent of systems using Rambus memory. If Intel had no significant chipset competition, this would be easy. But it turned out there was enough competition to give people a choice of chipsets, and hence memory technologies.

      Still, the P4 seems consciously designed to play to Rambus strengths. It chews memory bandwidth like candy through prefetching, which helps cover the higher Rambus latencies. I think Intel took a performance hit relative to AMD when the market preferred DDR SDRAM.

      Anyway, it's a great story for Intel if they could control the future of PC technology. Rambus gets rich, Intel gets rich, you pay more. Three cheers for AMD for breaking this.

      IA64 now looks similar. If it wasn't for the aura of inevitability associated with the Itanic, nobody would be particularly thrilled with it. The initial SPECint numbers where it barely kept up with a SPARC were the first practical warning---if you don't count the schedule slips.

      If IA64 was inevitable, everybody would have to pay up to transition to it. If it was the banner Win64 platform, a lot of places would be buying them regardless of relative price/performance. But because it looks like AMD will eat IA64 from the low end, and with POWER4 staring down from the high end, there's no longer an obvious niche where IA64 dominance is inevitable.

      Four cheers for AMD.

    • In the IA64, Intel decided to make a clean break and go to a new architecture [...] AMD instead blew out the IA32 architecture to 64 bits.

      Right, and what's interesting is that from a pure geek perspective, Intel did the right thing - AMD did not.

      People have been griping about CISC and Intel's grotesque manifestations of x86 for years now. So they finally do the right thing and sit down with HP to spend a couple of years hammering out a brand new design. And what do they get from the geeks? Nothing but boos and hisses. You guys should be ashamed of yourselves. Did you really want a Pentium V, VI, etc.?

      I'm glad Intel finally quit x86 cold turkey. AMD may have bought themselves a little time with the Opteron, but the sooner we're all off x86 the better.

      Oh, and don't think that IA64 won't be looking MUCH better once we start seeing properly optimized software and later iterations of it. Intel is just like Microsoft, the first implementations invariably suck, but they always get better from there.
      • They haven't quit the x86 cold turkey. That's part of the problem: it can STILL run unmodified binaries built for an 8086. IA64 has x86 grafted onto it. That's a big reason why it's been delayed for so long and why the performance sucks. There is a lot of hardware on the chip to convert x86 instructions to IA64 instructions. Time that could have been spent making the rest of the chip better has been spent verifying x86 conversion circuitry. Intel will never drop IA32. They learned that with the iAPX-432, i860, and the 8080. No one wants a chip unless it's x86.
  • by warmcat ( 3545 ) on Friday June 07, 2002 @08:42AM (#3658799)
    Using Q3, accelerated by the graphics hardware, to benchmark CPU performance. Result: within 1 FPS for clockspeed/2.

    Are you testing to see whether I am an android or a lesbian, Mr Deckard?
  • I can't think of a good reason to justify an upgrade to 64-bit. It's killing me, not to have a reason to get one... or four.
    • I really doubt the home user would ever need anything more than programs that use 36-bit (color, data size, etc). Even Doom III wouldn't need 64-bit power. So why don't we just add 4 bits to the current chips and be done with it?
      • by Cyno ( 85911 )
        Yeah, and I doubt the home user would ever need anything more than 640K. The facts are I'm a home user. I have files I'd like to use larger than 4GB. I need 64-bit.
  • Yeah, but (Score:2, Funny)

    by Anonymous Coward
    I don't take any notice till i see the notepad.exe benchmark.
  • by blankmange ( 571591 ) on Friday June 07, 2002 @08:45AM (#3658809)
    I have been purchasing and building computers for over 10 years now and have yet to use an Intel CPU -- and have not missed out on anything. I cannot foresee abandoning the AMD platform anytime soon either -- bring on the Hammer (or Opteron or whatever they are calling it this month...).

    ps -- where is the obligatory Beowulf cluster commentary on this??? I am shocked and appalled at this apparent oversight by my fellow /.'ers...

  • by tomstdenis ( 446163 ) <tomstdenis@NOsPAM.gmail.com> on Friday June 07, 2002 @08:47AM (#3658817) Homepage
    Why is Quake the benchmark of a good processor? Maybe computers can do something other than cache intense graphics?

    Gah.

    Tom
    • Why is Quake the benchmark of a good processor?

      Because Quake is what will be used by people who believe 'reviews' and 'benchmarks' from sites like aceshardware.
      • Right. Because the guys at Ace's only know about, and use, Quake benchmarking. Oh yes. If you were to show them, for instance, some SPECmarks, they wouldn't understand anything. So, I wonder who built this [aceshardware.com] and stuck it on their site?! What a hack! Perhaps these benchmarks, which do not originate with Ace's are from Quake because that was what was available to run? It's not as if the Hammer is out in all that many reviewer's hands, yet...
      • You're probably thinking of [someoneelse]'s Hardware (cough*tom*cough)...

        To judge real-world performance, Quake is at least as good as any synthetic benchmark. Personally, I'd like to see benchmarks for 3DS MAX, TMPGEnc or Photoshop (because those are some of the programs I use daily). But between Quake and WhateverMark2002, I prefer Quake (and I don't even play Quake).

        RMN
        ~~~
    • Why is Quake the benchmark of a good processor? Maybe computers can do something other than cache intense graphics?

      You are right, and in fact Quake is not even a good benchmark for gaming in general. However, it is very memory intensive and was generally the P4's strong point.

      Saying that the Opteron will smoke a P4 at Quake is saying that it smokes the P4 at its own game.

      The test is a good indicator that if ...if... AMD can deliver at somewhere near the promised clockspeeds, Intel is going to have to ramp the P4 very high to compete.
    • It has a standardized timedemo, is more CPU-intensive than some of the newer games, has been around forever, and outputs easy to understand, real results. Benchmarking-only programs, like 3dmark 2001, output more abstract numbers. Games, being the only reason many people upgrade their hardware, are the only programs that are used by many to tax their computers to the limits. Although, starting up Mozilla taxes my hardware to the limits rather nicely. :)
  • From www.aceshardware.com [aceshardware.com]
    ", however, it's important to keep in mind that these are unauthorized tests of an early revision CPU and platform, and that there could be significant differences in performance from final shipping versions depending upon the state of the test hardware used here."


    Get false results by having a chip that isn't the same one that is going to be released and Intel will overcompensate by dropping prices (again) and coming out with a "better" chip.

    • This is true, but much better than Intel releasing "estimated" results from their Itanium 2 that are based upon sheer hype/vapor.

      Speaking of Itanium 2, my initial question still remains: "What the hell happened to Itanium?" I still haven't seen this chip anywhere yet....

  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Friday June 07, 2002 @08:58AM (#3658865) Journal
    If you manage to get through the slashdotting, the story in the tecChannel web pages is amazing. The prototype ClawHammer, while limited to 800 MHz, performed shockingly well on the few, but varied, benchmarks they subjected it to. It's interesting that both Intel and AMD teach the same lesson, that MHz doesn't determine performance. Unfortunately for Intel, they demonstrate it by the P4 not running as fast as the MHz would imply, whereas the AMD chips run far faster than the MHz would imply.

    I can't wait for these chips to get out there.

    thad
    • I think the release of the Clawhammer shows the great divide between Intel and AMD's philosophies widening. Mind you, Intel's strategy isn't entirely bad, although it seems highly inefficient at first glance. Intel will happily fire back when the Clawhammer is released. What will they do? Quickly ramp up the clock speed towards 3.4-4GHz. I wouldn't be surprised if they also enable hyperthreading on "consumer" P4s. And, they'll increase the memory bandwidth of the P4 platform by releasing dual-channel DDR chipsets. As for AMD, this looks like one great chip. If AMD plays its cards right, I think it would REALLY make a splash in the server/enterprise market. Whereas, Intel can stay neck and neck with AMD on the consumer end, we've seen how great AMD's SMP platform is. Imagine a 4-way AMD hammer computer:-)
  • Yes, but (Score:5, Funny)

    by Xcrap ( 583883 ) <linux@sTIGERtarmail.co.za minus cat> on Friday June 07, 2002 @09:04AM (#3658894) Homepage Journal
    Can that hammer smash a block of itanium without breaking?
  • by nahtanoj ( 96808 ) on Friday June 07, 2002 @09:20AM (#3658953)

    As has been said, Quake is only relevant to the chips concerned in that it only tests the 32-bit compatibility of the Opteron. I would have liked to see some tests that demonstrated the advantage of 64-bit processors over 32-bit processors. Granted, the reviewers only wanted to show benchmarks that the populace was familiar with, and they were pressed for time. Let's give them a break for that.

    Nahtanoj

  • 1667? (Score:2, Interesting)

    by InnereNacht ( 529021 )
    No, it outperforms the 1600 by 1FPS. Still quite the feat. If this thing releases at even 1200mhz, you're looking at something comparable to a P4 2.4ghz. The site does say they stated they were aiming for 1.6ghz (nice!), but we'll see if that actually happens.

    It's nice to see that the industry isn't playing too much of the "more is faster" game, at least as much as they used to. When an 800mhz part is comparable to a 1600mhz, you've got to wonder what intel isn't doing to optimize.
    • Re:1667? (Score:4, Interesting)

      by Jugalator ( 259273 ) on Friday June 07, 2002 @09:36AM (#3659041) Journal
      you've got to wonder what intel isn't doing to optimize.

      FYI, we had a teacher in a processor architecture course that worked with optimizing algorithms and had worked for Intel. He left and started working for AMD instead. He openly said that Intel sucked. Guess what PR that gives when it's from the mouth of an insightful teacher. :)

      So they must do something wrong over there. :) At least in the eyes of some optimizing guys. heh
      • Re:1667? (Score:3, Insightful)

        by Jugalator ( 259273 )
        Forgot one thing... He elaborated a bit about P4's and said that "Intel has an interesting super long pipeline in the P4's - it's gonna be interesting to see what clock speeds it requires to fill so it can be of use to 100%". :)

        I guess we have an explanation of the diff in AMD/Intel clock frequencies right there...
    • by jani ( 4530 )
      When an 800mhz part is comparable to a 1600mhz


      But is it, really? We don't know yet. These were, as others have explained, unauthorized tests made on a pre-pre-release system.

      That does not mean that the release system, running at 1600 MHz is faster than a 3200 MHz Pentium 4.

      It might just as well mean that it is slower, and 3 GHz-ish is what the Pentium 4 will be at around release time for Opteron.

      It's still going to be a race.

      On a more historical (heh) note, there have been processors before running at lower clock frequencies outperforming others at double or higher frequency. MIPS R10K and descendants, for instance.

      Also, it is completely up in the air whether Opteron will be any good in multi-CPU configurations, compared to offers from Intel and other chip makers.
    • It's nice to see that the industry isn't playing too much of the "more is faster" game, at least as much as they used to. When an 800mhz part is comparable to a 1600mhz, you've got to wonder what intel isn't doing to optimize.

      AFAIK, the FPU of the P4 is crap (not that the IA32 FPU stack isn't crap in general). Intel's strategy is to get software designers to switch calculations that would normally involve the FPU over to their SSE2 instruction set. This really does improve P4 performance considerably.

      This is happening, but adoption is slow. Also, the Opteron core has SSE2 support.

      I would have liked to see an SSE2 heavy benchmark run on both machines.
    • Doing a bit of maths.

      If Hammer= 1.4 x Athlon
      & Athlon = 1.2 x P4 (non Northwood)

      then if the word of Hammer starting at 3400+ and 4000+ is true, it would result in Hammer having a 2.0 GHz to 2.4 GHz clock (assuming AMD does nothing regarding 64-bit being faster)

      Given the small size of the die, this sounds quite feasible; indeed 5000+ sounds possible with 0.13 and SOI

      OK, ok, so it's very iffy to extrapolate like that, but Intel may well have some significant problems if AMD can roll out Hammer fast enough to beat the P4 0.09 die shrink.

  • by zardie ( 111478 ) on Friday June 07, 2002 @09:40AM (#3659065) Homepage
    It's just a sample. AMD released the ClawHammer processor to manufacturers for demonstrations and testing, so they could develop the platform, on the condition that, get this, benchmark results would not be released. Let's face it - who in their RIGHT MIND would benchmark an 800 MHz CPU against the latest and greatest processors?

    Obviously, these guys did. AMD will NOT be happy about this.

    Also remember that the Opteron will be running at MUCH higher clock speeds upon release. I'd guess above the 2GHz range for sure, but AMD doesn't want anybody to know that. This also suggests that this lil' 800MHz sample could be very overclockable.

    This is AMD's weapon that can really take a LOT of market share. Microsoft already have a Windows XP build ported to the Opteron/x86-64 platform. The Opteron runs cooler, as well.

    One thing that disappoints me - I have not seen ONE PCI64 slot on any of these test boards!! I hope that this'll be worked out before release.
  • by Anonymous Coward on Friday June 07, 2002 @10:29AM (#3659377)
    I'll start this by saying YES, I work for Intel. Hate me...whatever.

    But it's SOOOOO disheartening to see my fellow nerds and /.ers so ignorant on something like the computer scene. I'm talking about all the AMD LOVE and Intel hatred posts that always follow a news article about CPUs.

    I can understand the love for Linux. A group of people programming for free, fighting a giant like Microsoft. But why should AMD garner the same sort of love and respect? AMD is a giant corporation itself, willing to screw you over. They'd charge you $2000 per processor if Intel wasn't around (and yes Intel would do the same).

    Last week Intel dropped the prices of its processors. AMD was forced to follow suit, dropping their prices about 2 days later. Did the Slashdot community cheer Intel?

    So along comes this news... AMD Opteron 800 MHz beats a Pentium 4 1.6 GHz by one frame per second. I guess I fail to see why everyone is so excited?

    I'll wager ANYTHING that when it ships, an 800 MHz Opteron will sell for at LEAST twice the price of a Pentium 4 1.6 GHz.

    Why do I even bother.
    • Try looking at it one of these ways: (A) This isn't news, it's entertainment. Everyone's cheering for the underdog AMD, partly 'cause it's easier for little ol' us to identify with than giant Intel and partly because the fun will end if Intel kills AMD. (B) We don't want to pay either of 'em $2000, so we shift our support to whoever's the underdog in order to prolong the price/performance war.
    • In spite of the above post appearing to be a raving troll, I will respond with:

      Let's see..... why do we like AMD

      1 Dollar for dollar they kick Intel's hiney for performance. I.e. you get more bang for your buck.

      2 An 800 MHz AMD is as fast as a 1600 MHz Intel. That is just plain cool.... It has geek cool factor all over it.

      3 As to the last statement about an Opteron 800 being twice what a P4 1600 costs.... somehow I doubt it. AMD has consistently underpriced Intel for the same level of performance. If it is more expensive it will only be at initial release, and soon it will be cheaper than a comparable Intel.

      4 As geeks we get tired of a market dominated by inferior products... i.e. Windows is the dominant operating system, and Intel is the dominant chip. Sometimes we just like to root for the underdog. If AMD can beat Intel at their own game, more power to them.
    • why i cheer amd (Score:5, Interesting)

      by Indy1 ( 99447 ) <spamtrap@fuckedregime.com> on Friday June 07, 2002 @02:50PM (#3661160) Homepage
      Quick background:


      I am a long time system designer / upgrader / hardware IT geek. I've been working on AMD / Intel boxes since the 386 days. One reason why I cheer for AMD is that in the past few years, Intel seems bent on dragging all of us back into the 286 days of hardware being proprietary. Slot 1, RDRAM memory interfaces, etc. AMD seems to have more of a commitment to sticking to industry standards, like (at the time) socket 7, SDRAM, DDR, etc.


      Another reason why I tend to prefer AMD is the cynical marketing processor known as the P4. The vast majority of benchmarks show that unless you're running software that's heavily SSE2-optimized, the Athlons spank the P4. Yet the P4s are much more $$$$ due to all those wonderful Intel commercials with dancing morons in bunny suits, or some schmucks painted up like a martian with a bad head cold. Instead of wasting all that money on marketing, use it to improve your designs! AMD spends virtually nothing on marketing, and yet whenever they have a good design, their products sell extremely well. And don't get me started on Intel's late DDR support, or the early 845 chipsets that were SDRAM-only, which had PATHETIC performance.



      I guess the point of my whole rant is...... I use Intel or AMD, or whoever, as long as they give me a good value for my (or my customer's) dollar. Give me a nice industry standard design. Don't foist some new marketing-driven proprietary design on me. If it's gotta be proprietary, it better be for one of two reasons: considerably cheaper, or considerably faster. Intel in the past few years has NOT focused on giving the customer value. AMD has. Give me a thousand dollars, and I can build either an Intel box, or an AMD box that's 20% faster than the Intel box and just as stable. (I don't buy the "AMD isn't stable" argument; it all comes down to knowing your hardware and how to configure it properly for stable operation.)


      When Intel returns to delivering a product that is worth the price Intel charges for it, I'll use Intel again. Until then, I'll continue to laugh at ridiculous marketing schemes and do my research on which product is the fastest for the least money.

    • Why the love? Perhaps because AMD has brought something to the table that was long lacking in the hardware scene: competition for Intel.

      Seems to me that Intel used to spend months, even YEARS between significant speed increases of their processors. How long to go from a 486/33Mhz to a DX2/50? How long from the 486 to the Pentium? The Pentium Pro? Before AMD was on the scene Intel would milk every processor for a long, long time. People would pay through the nose for Intel chips. Intel's profit margins were grossly higher than anyone else's in the industry.

      Now comes AMD, bringing similar (sometimes GREATER) performance than Intel chips at a FRACTION of Intel's price. A quick check of Pricewatch shows an Athlon 2100+ going for $177, while Intel's 2.2GHz P4 (the likeliest competitor) is going for $238. The situation was even more out of whack last week until Intel lowered pricing. Do you think for one minute Intel lowered prices out of the goodness of their hearts? Of course they didn't. They did it because Athlons had been grossly undercutting them in price and performing every bit as well as Intel's finest.

      Your predictions on the pricing of the Opteron are not valid as there will BE no 800Mhz Opteron. The Opteron is most likely going to debut around 1.5Ghz, give or take a couple of hundred Mhz. It will most likely cost twice what a 1.6Ghz P4 is costing right now, but that'll be just fine as it will most likely OUTPERFORM that 1.6Ghz P4 by about two to one. Things will be much closer with the Northwood B chips, but no matter what, AMD will almost certainly undercut Intel in pricing while delivering the same (within 10%) performance.

      Face it: Intel is used to high margins and is unwilling to cut their pricing far enough to put AMD in the coffin. They are running on brand name and little else right now. If the situations were reversed and AMD had the household name and Intel was the relative unknown, does anyone for one moment think that anyone in their right mind would pay the lofty prices Intel is commanding right now? Of course not.
  • more regulation!!! (Score:2, Informative)

    by meis31337 ( 574142 )
    A fake newspaper reports:

    Senator F. Bar R-51st state announced the drafting of a new technology bill. It requires that all CPU chips conform to a regulated speed quantifier. This will allow all chips to be easily compared with one another, to end industry confusion. The unit, abbreviated IHz (Intel Hertz), was developed by the Intel Corporation. They have lobbied to get this standard, which will be controlled and policed by a board of independent persons funded by Intel, adopted into Federal law....

    ughh..
  • by dutky ( 20510 ) on Friday June 07, 2002 @11:16AM (#3659704) Homepage Journal
    <YAWN> wake me up when someone does a useful benchmark on these systems. I don't trust proprietary micro-benchmarks and I have no use for Quake III fps numbers. I'd prefer a SPECint/fp score set, but will settle for kernel/gcc/ddd compile times and a stream [virginia.edu] run. (I don't do enough FP work to propose a poor-man's substitute for SPECfp, and the entire question of DB/transaction benchmarking is a tougher nut than I'm willing to crack).

    Still, I'm eagerly awaiting the ClawHammer release. Every x86 box I've built for the last 5 years has been pure AMD, and I've been quite happy with them.

  • i just find it weird for the community to really compare the new hammer with the p4 product line of intel. if the main reason behind the hammer is to directly compete in the server line, then it should be the hammer vs itanium2 vs sparc vs pa-risc vs alpha vs powerpc. if you are going to compare it with the p4, a professional will not even take you seriously.

    why use some low-form benchmark? although i understand that the current systems are prototypes, the benchmark should reflect something of the server world, including but not limited to tpc, spec, etc. i would really love seeing the performance of hammer in an oracle/sql/db2 or other database benchmark. i would love seeing the hammer handling ssl transactions and others.

    with regards to amd using x86 with compatibility to 32-bit, would it be dumb to run some non-native applications? this means that amd anticipates that companies will not optimize their software to run on a pure 64-bit platform. this may be an indication that the initial design is not intended for the server product line. running 64-bit does not make you compete in the server arena!!!!! the server market is a very different ball game compared to the consumer - cpu is not the prime reason.

    and x86 is obsolete. it is not the most efficient out there, so it is time for a major change in the hardware world.

"Mr. Spock succumbs to a powerful mating urge and nearly kills Captain Kirk." -- TV Guide, describing the Star Trek episode _Amok_Time_

Working...