AMD Talks About Internal Benchmarks for Opterons

ggruschow writes "AMD's CTO says their 2.0-GHz Opteron (aka Hammer) beat a 2.8-GHz Xeon (P4) on both SPECint2000 and SPECfp2000 tests, but was mixed against an Intel 1-GHz Itanium 2 (details at ExtremeTech). IBM predicted "conservative" 1.8-GHz PowerPC 970 scores, which fall in the middle of the pack (sweet for OS X). It's probably not a coincidence that AMD's news comes so soon after Gartner said x86-64 would fail. Even if Intel loses the performance crown again, their upcoming mobile processor is looking pretty spiff with its recently announced 1MB of cache. Sounds like next year might finally bring a worthy upgrade for my 486dx4-160."
  • *sigh* (Score:2, Informative)

    by chefren ( 17219 )
    Who cares what processor is slightly slower or faster than others? You need at least a 10% difference in overall system performance to notice anyway.

    Darn, I missed fp by thinking...
    • Re:*sigh* (Score:5, Insightful)

      by tcdk ( 173945 ) on Thursday October 17, 2002 @04:28AM (#4467542) Homepage Journal
      For straight CPU intensive tasks it matters.

      But for 99% of normal people's tasks, 10% won't matter.

      But it's the edge and it has to be somewhere and it has to move.

      My rule is that I upgrade when I can get a CPU that is twice as fast as my old one for about 1000 DKK (~$130).

      That's possible right now (I've got an 850-MHz Celeron), but I need a new motherboard, which kind of changes the rules.
      • But for 99% of normal people's tasks, 10% won't matter.

        "Doctors say that Nordberg has a 10 percent chance of living, though there's only a 50 percent chance of that. "

      • Re:*sigh* (Score:5, Informative)

        by tconnors ( 91126 ) on Thursday October 17, 2002 @04:59AM (#4467635) Homepage Journal
        For straight CPU intensive tasks it matters.

        But for 99% of normal people's tasks, 10% won't matter.


        10% never matters. We regularly run simulations here [swin.edu.au] that take a month. What is 10% on top of a month? 3 days. If you have already been waiting 30 days, what does another 3 matter? It probably corresponds to the weekend anyway.....
        • Re:*sigh* (Score:5, Insightful)

          by tcdk ( 173945 ) on Thursday October 17, 2002 @05:04AM (#4467647) Homepage Journal
          How many people are "we"?

          If you are ten people, one of them could be fired, by your argument, without anybody noticing.

          Let me turn it around - how many percent do you need before it matters? 12? 15?

          But I agree, one can't upgrade every time there's a 10% speed increase. One has to do the cost/benefit thing carefully first (and then ignore the c/b and just spend, spend, spend - the only way to get the economy back on track ;)

          • Let me turn it around - how many percent do you need before it matters? 12? 15?

            Reminds me of the criterion I heard for how much of a pay increase is needed to induce people to leave their existing job for a new one.

            IIRC, 10% wasn't enough. People need 15-20% increases to motivate the trouble of switching.

            Not that computer speed and pay are really comparable...I think Bill G. is the only one whose pay has kept up with Moore's Law. Mine hasn't.

        • Re:*sigh* (Score:3, Insightful)

          by sql*kitten ( 1359 )
          10% never matters.

          On the contrary, if you can get by with spending 10% less on equipment (the other way of looking at this), then that can make the difference between being a solvent, viable company and everyone being out of work.

          You're at a university, so you are under no commercial pressure to deliver. I mean, once you're past undergrad assignment deadlines, research gets written when it gets written, right? You can't rush science, maaaan, pass the bong. But in the real world, there are real consequences, and 10% could make a real difference to computation-intensive jobs.
    • by Kjella ( 173770 ) on Thursday October 17, 2002 @08:09AM (#4468115) Homepage
      I don't pretend to feel the difference between 2.0GHz and 2.1GHz. I don't "feel the difference" when going from a HD with 3x20GB platters to 2x30GB platters. I don't feel the difference between PC3200 and PC2700.

      But I do feel it when I upgrade from an outdated system to a new one. And to know what kind of performance I could get for a reasonable* (*as defined by me ;) ) price, I do need to know what the state of the art is.

      Maybe that isn't relevant to you, maybe your 486 / Pentium / Duron / Space heater does what you want it to when you check your email and type up your word document, but not for all of us. I know a few tasks where I'd like 4gb+ of memory, solid-state SATA drive and a multi-GHz proc+, or a dual, for that matter.

      Large strides are best made one small step at a time. This is just another one of them.

      Kjella
    • I don't think speed is so much the issue for Intel.

      What they really want to do is to come out with a new architecture that no one can copy.

      AMD is still making use of old licensing deals with Intel that go back to the 80s and basically allow them to use x86 microcode etc.

      If Intel can get Itanium adopted, AMD is SOL... Itanium will be a bitch to reverse engineer, and is not covered under any of those pesky old licensing deals.

      Sure, Intel is trying to advance the architecture, but the reason they're willing to spend whatever it takes to get Itanium accepted is because it removes all direct competition.

      As usual, the business world is more cynically motivated than it seems...
  • 486dx4-160? (Score:5, Funny)

    by acehole ( 174372 ) on Thursday October 17, 2002 @04:24AM (#4467529) Homepage
    You're weak my friend ;)

    You've got no holding power... hell, I've still got my Commodore 64 with acoustic coupler modem, and I'll hold onto it until I see something worth spending money on...

    • Damn, I tried to mod "funny" and it entered it as "overrated". Stupid wheel mice.
    • C64 ? (Score:5, Funny)

      by stud9920 ( 236753 ) on Thursday October 17, 2002 @04:40AM (#4467583)
      hell, I've still got my Commodore 64 with acoustic coupler modem
      You overcomfortable rich kid! A C64 is just a toy with loads of eye candy. I am still doing it with my Difference Engine 2.0 and IPoAC (IP over avian carriers). More would just be superfluous luxury. Besides, shouldn't you have typed your message in all caps?
      • Re:C64 ? (Score:4, Funny)

        by roundand ( 145497 ) on Thursday October 17, 2002 @05:37AM (#4467731) Homepage
        IPoAC (IP over avian carriers)

        That would be RFC1149 [ietf.org], right?
      • Well, you've got the most expensive networking equipment I've seen then.

        For IP over avian carriers to work, you need: a printer, preferably to microfilm, a scanner, preferably from microfilm, OCR software, and lots of avian carriers. Seems to me it would be far beyond the capabilities of the difference engine. What computer do you use to feed your difference engine the IP-protocol messages?

      • Ooh where did you get that prototype? AFAIK the product line was discontinued :p
    • Bah!!

      You youngins with your newfangled machines. I'll never give up my UNIVAC.

      Got to go and replace some valves now. See ya.

    • by XNormal ( 8617 ) on Thursday October 17, 2002 @06:50AM (#4467887) Homepage
      At work I've got a 49000 line Microsoft Visual C++ project that compiles in 5.5 minutes on a 1700 MHz Pentium 4. That's right, about 150 lines per second.

      Turbo Pascal used to compile at thousands of lines per second on machines with a clock nearly two orders of magnitude slower that took several cycles per instruction instead of running several instructions per cycle.

      Before you say something like "hey, but modern compilers have optimizations yadda yadda" perhaps I should mention that this compilation time was with no optimizations and with features like updating browser files disabled. With optimization it's even slower.

      We're talking about four orders of magnitude difference in efficiency here. It's not all the compiler's fault, of course. The libraries and code use complex templates and multiple levels of definitions that make the compiler work much harder.
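A back-of-the-envelope check of that four-orders-of-magnitude claim (the Turbo Pascal throughput and clock figures below are illustrative assumptions, not measurements; the Visual C++ numbers are from the comment above):

```python
# Compare lines compiled per clock cycle: Visual C++ figures from the
# comment above, Turbo Pascal figures are rough assumptions (thousands
# of lines/sec on a machine with a clock ~two orders of magnitude slower).
vc_lines_per_sec = 49_000 / (5.5 * 60)   # ~148 lines/s on the P4
vc_clock_hz = 1_700e6                    # 1700 MHz Pentium 4

tp_lines_per_sec = 5_000                 # assumed Turbo Pascal throughput
tp_clock_hz = 25e6                       # assumed ~25 MHz 386-class machine

vc_lines_per_cycle = vc_lines_per_sec / vc_clock_hz
tp_lines_per_cycle = tp_lines_per_sec / tp_clock_hz

ratio = tp_lines_per_cycle / vc_lines_per_cycle
print(f"Turbo Pascal compiled roughly {ratio:,.0f}x more lines per cycle")
```

With these assumed inputs the ratio lands in the low thousands; fold in the several-cycles-per-instruction vs. several-instructions-per-cycle correction the poster mentions and you do end up in the ballpark of four orders of magnitude.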

      At each one of these layers someone probably said "It's OK if this is 10 times slower. It's easier to write and maintain, I'm more productive (or lazy) and the CPU is fast enough". Each one of these decisions may be justified *in itself* but they add up (or rather multiply up) to a 1/10000 difference in efficiency. Slowing the edit/compile/debug cycle reduces programmer productivity and code quality. Reduced code quality leads to more code bloat and an even slower edit/compile/debug cycle, and so on.

      Damn, it's depressing.

      • Ehhh... (Score:3, Insightful)

        by fireboy1919 ( 257783 )
        Back when Pascal was prevalent...wait that never happened.

        Anyway, twenty years ago people didn't write things modularly like they do today, so recompiles covered a bigger piece of the project.

        Now we use modularity, so code is broken up into much smaller pieces. A recompile need only be the file you're working on - the other 50 of them can just stay compiled as they are. Obviously 'make' was developed specifically to optimize the decision of what needs to be recompiled.

        Sure, it is much, much slower. But linking takes very little time, and compile time has been cut way down by previous compiles - almost enough to make up the difference (although, I admit, not quite). Still, your comparison is not the best - Pascal hardly has the power of a bigger programming language, and since it's only been academic, not as much effort has been placed in making the compiler really smart (and therefore slower). Perhaps you should talk about Fortran '77?
      • "Slowing the edit/compile/debug cycle reduces programmer productivity and code quality."

        I guess, if you're compiling to figure out where your missing semi-colons are. Try working on a project where you can't tell whether your code works until there's a full build, and a full build takes 24 hours. You write quality code at that point, because you have to work top-down. No more write-compile-debug-write loops.
      • by Junks Jerzey ( 54586 ) on Thursday October 17, 2002 @09:34AM (#4468627)
        Turbo Pascal used to compile at thousands of lines per second on machines with a clock nearly two orders of magnitude slower that took several cycles per instruction instead of running several instructions per cycle.

        Object Pascal (Delphi) still compiles that fast, only now it does include optimization (maybe not as hardcore as some C compilers, but still pretty good). Borland used to advertise speeds of 800,000 lines per minute, back in the day when a 266MHz Pentium II was a hot machine. For most projects, the compilation speed is *zero*. For medium sized projects, it's in the "barely perceptible" range (as in maybe 1/30 second). Very, very impressive.

        Why is it so fast? There are a variety of reasons, in rough order of importance:

        1. There are no header files. All exported identifiers are in the "interface" section of the main source file.
        2. Interface information is always precompiled into a lean format, so there's no need to #include giant files (kind of like having all headers always be precompiled).
        3. There's no preprocessor.
        4. "Object" files are stored in a lean "almost linked" intermediate format, rather than traditional, bulky object formats. This makes the linker a very simple and fast affair, whereas linking can be the slowest part of building a C++ project.
        5. The compiler, linker, and build manager are all in one executable, so there's no loading programs during compilation (typically for C++, make is loaded first, the compiler is loaded for each source file, then the linker is loaded at the end; yes, disk caching helps here).
        6. Object Pascal is generally a cleaner language than C and C++, so parsing and optimization are easier.
        • 1. According to Borland [borland.com], the name of the language is now simply "Delphi." This changed as of the release of Delphi 7 [borland.com].
          2. Borland C++ and Delphi use the same machine code generator engine, so the optimizations are largely the same. The performance is largely the same. As you said, Delphi is single pass, and parses a good bit faster.
          3. For those of you out there saying "huh? Pascal??? No one uses THAT??!?!" Guess again. It is used a lot more than you might think, typically by small, lean shops with insane deadlines like mine.
      • At work I've got a 49000 line Microsoft Visual C++ project that compiles in 5.5 minutes on a 1700 MHz Pentium 4. That's right, about 150 lines per second.

        Something must be seriously ate-up on your machine. I have a ~20000-line MFC project in VC++6. On a dual Athlon MP 1900+, I get three EXEs and two DLLs each in debug and release builds in about 50 seconds. On an Athlon XP 1600+, the compile time increases a little bit to 65 seconds. I know the P4 is a slower processor than the Athlons I'm running, but it shouldn't be that much slower. (If I had my old 1.0-GHz Athlon set up, I'd benchmark the build on that for sh*ts and grins.)

  • by Gldm ( 600518 ) on Thursday October 17, 2002 @04:25AM (#4467531)
    Benchmarks are nice and all, but I'm getting kinda tired of hearing how great a CPU benches for about 6 months before I could even buy one with a sack full of money.

    Not that I'm not excited about 64bit CPUs on the desktop, I could really find a use for one (I've got something interesting that likes to malloc more than 4GB sometimes).
  • boring... (Score:2, Insightful)

    by hatchet ( 528688 )
    No one cares about a few % performance gain anymore. And even if the Opteron were much faster, people wouldn't care much simply because you can't buy it. Pentium4 is better because you can get it *NOW*

    If you need new computer, buy it (NOW!), otherwise don't buy anything until you need it.
    • by Anonymous Coward
      Pentium 4s have no shared cache; uni-processor designs only.

      If you want DUAL CPUs, or more, you have to go Mac or AMD to get speed per dollar.

      And Macs are twice as fast as the fastest AMD for RC5 benchmarks.

      A Pentium 4 is a heat-wasting joke once you start using 2 or more CPUs.

      Apple is only selling dual CPU machines now. And when the dual-core Power4 ships in 8 months or less, they might be offering 4 CPUs economically as a stock product; even if they do not, many 3rd party dual CPU board suppliers for Macs exist, such as Sonnet Technologies.
  • by Deton8 ( 522248 ) on Thursday October 17, 2002 @04:40AM (#4467578)
    Obviously there is a market for super-fast processors for those of us on /., but aren't we at a point where currently available processors are fast enough for more and more user segments? What I mean is, people who do Word and Excel were happy along about 800 MHz, and ordinary CAD people like me don't need more than about 2 gig. There are only two guys in my organization (running VHDL simulations day in and day out) who have any need for faster processors. Will we soon get to a point where the total market size of gamers and /. people will not pay for another processor spin?
    • by Duds ( 100634 )
      But people were saying this back when that guy's 160 wasn't laughably slow.

      He can browse with it, why does the home user need more? That with Linux or WinNT and memory would do everything average Joe wants.

      The answer is A) marketing, B) keeping up with the Joneses, and C) because there IS always something new for people to do.

      You won't stop CPU dev, there's always someone who could use it or some Redmond based multinational doing something to make it needed.

      No-one NEEDs more than a P100 tops. They CAN find a use for it though, and that'll never change. The reason can be summarized thusly.

      "Hey Ma, look at what this fancy computer can do!"
      • by ottffssent ( 18387 ) on Thursday October 17, 2002 @08:36AM (#4468255)
        No-one NEEDs more than a P100 tops


        Yeah, but only in the way that no-one NEEDs modern medicine, central heating, or citrus fruit during the winter.

        On the other hand, I NEED faster than a Duron/600 for:
        sending messages in ICQ (yup, sending a message is O(n) or O(n^2) - not sure which) with n the number of messages in your scrollback
        Encoding MP3s - I spent over 2 hours this afternoon switching CDs every 10-15 minutes.
        Recording TV - I can only record to divx at quarter VGA or less
        Using Mozilla the way I want (with 20-50 tabs open at a time and 128M of RAM cache)
        Using an encrypted filesystem (unless win2k's implementation is just horribly inefficient)
        Opening / manipulating 500M images

        Sure, I could plop an XP2200+ in here, but I spent $50 on the original CPU and I'm unwilling to spend more on another until Hammer comes out. A dual Clawhammer should be about 10-20x as fast as my current machine depending on app - a most satisfying upgrade.
    • You're forgetting one important factor in the computer-upgrade cycle:

      microsoft.

      Sure, current computers will run the Word of 2 years' time without (m)any worries, BUT, "innovation" has bumped up the required specs for every single Windows/Office release.

      Of course it's not just Microsoft which bumps up required specs, but they're the driving force behind most hardware upgrades.

      As processors get faster, software gets both lazier and "smarter"...
      lazier 'cause there's less optimization, and "smarter" 'cause, for example, 15 years ago no one would have ever implemented some of the stuff that's present in today's computers (e.g. image thumbnails in Explorer)
      • Nah. Is free software any better? I sure wouldn't want to run Gnome, KDE, OpenOffice or Mozilla on an old PC. (My old Cyrix 586 with 48MB RAM is completely unusable for webbrowsing.) But I can still use old wordprocessors, older versions of gcc, older games, etc. without problems. Of course, the web is the only thing I can't get an old version of.
  • Benchmarks... (Score:5, Interesting)

    by e8johan ( 605347 ) on Thursday October 17, 2002 @04:43AM (#4467588) Homepage Journal
    Benchmarks are as bad as statistics. They measure nothing but how much you can tweak your CPU and compiler to fit that specific benchmark.


    I would say that AMD may have an advantage for being more backwards compatible than Itanium, but I also feel that it is time for a change!


    All major CPU manufacturers make proper RISC CPUs already, so why don't we find them in our ordinary computers? It is because the Windows codebase cannot simply be recompiled for a new target but has to be ported function by function (a painful assignment, to say the least). Perhaps they can reuse 3/4 of the code, but still, there is a whole lot of rewriting and verification to do.

    I have worked in a Tru64 environment (running Alpha CPUs) and I was surprised at how easy it was to get 95% of the Linux apps to properly compile and run. I didn't try to get Linux itself running, but I had gcc running and that was enough.

    What I'm trying to say is that the open source movement has proven that one can write portable code successfully and that it is time to make a hardware change. The serial ATA and AGP solutions from the PC are good enough, so is the PCI bus (lots of peripherals available), so I wouldn't change that, but simply make the standard computer run multiple RISC CPUs and a proper multi-threaded OS that can take advantage of that, and then you'll have a performance boost that would make the P4 look like a bicycle compared to an F1 car (ok, perhaps a Porsche, but still, an F1 does 0-200kph in
    While I'm on the subject: as we have bochs, it would still be possible to run Windows in a VM, no matter what platform we use, so all M$ users could be happy, or do as Acorn did (does): have a PC as an extension card, i.e. run a PC natively in a window, and just use the *fast* RISC CPU for any real work.
    • by Anonymous Coward
      WRONG! RISC "ordinary computers" exist!

      You wrote "why don't we find them in our ordinary computers"!

      In fact I am using one as I type this. It was built in 1996 (yes, nineteen ninety-six) and has an 800 MHz G4 accelerator in it from Sonnet.

      It's my "internet" machine; I use other RISC machines for programming, not wired to any external networks.

      It runs a wonderful version of Microsoft Office at full speed (RISC) and launches MS Word in 2 seconds cold (yes, two seconds to flashing cursor).

      No Intel emulation needed.

      It's called a Macintosh.

      Millions of Macs exist, and millions of Macs use one or more RISC processors, and almost no Mac people I know ever want to emulate a PC running Windows, EVER, if they can help it.

      RC5 and other benchmarks are twice as fast on standard Macs as on AMD, and Pentium 4s have no multi-CPU board designs...

      If you want to run thousands of high-end commercial shrink-wrapped products in RISC you can, but only on Macintosh. And they run very well in the new Jaguar 10.2 (though faster in 8.6).


    • RISC is no panacea; there's no real reason why a RISC box is inherently faster (in real world use) than a CISC one - they're just different architectures.

      The real reason wintel is still CISC is not Windows itself (NT4 for example is already ported to Alpha) but all the third party apps - people want to be able to run the xyz app they bought 5 years ago on their new box. This is why Intel is having fun trying to get their new non-backwards compatible architecture accepted widely.

      Oh and gcc isn't a Linux app, of course it was easy to recompile for other platforms. That's kind of the point of GNU.
      • I know that gcc isn't a Linux app; what I was trying to say is that applications *developed* in a Linux environment easily port to other platforms, even though the endianness and variable sizes may differ. This is due to good coding. Keep that up!
    • I won't argue about a change from X86 being desirable either but....

      IMHO Itanium just isn't the way to go. By some measure if X86 is warty, then Itanium most closely resembles Ben Grimm in his best orange. By other measures perhaps IA64 is a cleaner architecture, but it's proving to be a sonofagun to write compilers for. To me that portends a somewhat moribund future with a highly complex compiler on a highly complex architecture. Even incremental improvements, other than clock speed and cache size ramping will be difficult.
    • Ok, maybe they aren't quite the same thing yet, but the lines between the two have REALLY blurred.

      Just take a look at any modern RISC processor. Chances are it has several hundred instructions, i.e. they sure haven't "reduced" that instruction set by any significant amount. Then if you look at any modern CISC processor, you'll find that they just decode instructions into RISC-like ops internally. End result? The difference between RISC and CISC is REAL small these days.

      If you read about the design of the Power4 vs. the Athlon, you'll see that essentially ALL of the basic building blocks are the same; it's mainly just a matter of how many of those blocks there are and how they all fit together. If anyone thinks that the Power4 is so fast clock for clock vs. the Athlon because of its instruction set, they probably just haven't looked to see that this chip has tons of execution units, HUGE cache and a shitload of bandwidth. All things that could potentially be added to a chip like the Athlon if the economics of such would fit.

      Now, this isn't to say that x86 is without its flaws, but most of those flaws are rather minor and have been worked around in compilers for years. The two biggest problems are the small number of registers and the stack-based floating point unit. Well, Intel's SSE2 can now mostly replace the old floating point unit for the majority of tasks (though it typically isn't used as such yet), and AMD's upcoming Hammer/Opteron will double the number of registers available.
      • I'd always prefer a RISC CPU since the instruction set is more general. In RISCs there are usually only general purpose registers (i.e. no cx for loops, etc.), which yields less complexity both in the hardware and in the compilers.

        Since x86s nowadays are RISCs with a CISC shell, why not simply remove that extra layer of complexity and simply introduce a plain RISC architecture?

        If you want to know how *bad* the x86 is, simply try to boot off a floppy and enter protected mode. You enter the CPU in 16-bit mode, have to fiddle with some special register, make sure to take a jump, and then you're in.
      • As for registers, I was very interested to find that a modern P4 maps the 8 x86 registers to 128 internal registers. Compare this to a G4 which only has 48 internal registers (32 visible, 16 rename).
      All major CPU manufacturers make proper RISC CPUs already, so why don't we find them in our ordinary computers?

      Someone already pointed out that Macs use RISC CPUs, but in fact all modern x86 chips are really RISC cores with a translation layer from x86->RISC. Also most compilers optimize to a very RISC-like subset of x86. So you see, x86 has managed to evolve so it has most of the advantages of RISC plus the all-important legacy support. This sort of thing is how x86 has managed to survive so long and why that's not necessarily a bad thing.
  • Clawhammer (Score:5, Informative)

    by Perdo ( 151843 ) on Thursday October 17, 2002 @04:43AM (#4467591) Homepage Journal
    Clawhammer (Athlon) has a single 16-bit-wide HyperTransport bus.

    The workstation Sledgehammer (Opteron) has two 16-bit busses.

    The server Sledgehammer (Opteron) has three 16-bit busses.

    The SPEC results are as follows:

    SPECint2000

    PIII 1 GHz: 426
    G4 1 GHz: 306
    G5 (IBM PowerPC 970): 937
    P4 2.8 GHz: 1010
    XP 2800: 933
    Itanium 1 GHz: 810
    Power4 1.3 GHz: 804
    Clawhammer 2.0 GHz: 1202

    SPECfp2000

    PIII 1 GHz: 426
    G4 1 GHz: 187
    P4 2.8 GHz: 947
    XP 2800: 782
    Itanium 1 GHz: 1356
    Power4 1.3 GHz: 1169
    Clawhammer 2.0 GHz: 1170

    Opteron? Presumably higher than Clawhammer, considering the multiple HyperTransport busses, 1/2 MB L2 (compared to Clawhammer's 256/512 KB L2), and dual on-chip DDR memory controllers compared to Clawhammer's single memory controller.

    Bootleg Powerpoint Presentation:

    http://130.236.229.26/download/misc/AMD-Opteron.ppt

    and

    http://a26.lambo.student.liu.se/download/misc/AMD-Opteron.ppt

    Read the Show notes! AMD failed to edit them out

    Filename is AMD-Opteron.ppt; Google search it.

    Includes a system that is an Opteron workstation dualed with a Clawhammer that still presents itself as a single proc system. The Clawhammer acts as a math co-processor :)
  • by Anonymous Coward on Thursday October 17, 2002 @04:45AM (#4467599)
    I hope THIS mask rev of Opteron (Hammer) chip will be faster than January 2002 PowerPC G4 chips.

    Currently, according to the RC5 benchmarks, AMD is far slower than dual CPU Macintoshes (half as fast). (Source available for core RC5 loops for most processors.) RC5 was silently completed in June or so, but a bug went unnoticed for a couple of months; in any case, the contest is over. They measured performance in units of "Mac PowerBooks" in their press releases.

    The dual 1 GHz G4 Mac is faster than all existing dual AMD motherboards in the RC5 benchmark by almost 100%.

    21,129,654 RC5 keyrate for a dual 1 GHz G4 system! And now Apple sells dual 1.25 GHz stock, which would be even faster.

    A dual 1800+ AMD MP gets only HALF as many as a Mac: 10,807,034 RC5 keys!

    Funny "MHz myth" there showing itself, I guess... Apple now is selling even FASTER machines but with smaller caches and less fast read-write RAM (it now uses DDR on the newest boxes).

    And the Macs are using low-power G4 chips meant for microcontroller usage, with very little predictive branching and a simple 7-stage RISC pipeline. (Macs complete many, many instructions per cycle though, unlike Pentiums.)

    The Mac I mentioned uses a 2 MB L3 cache, and no AMD MP dual CPU boards I know about have any L3 cache at all, so maybe that is why some common Macs are over twice as fast; it's not just AltiVec's meager tweaks to RC5. AMDs have similar, but less amazing, vector ops.

    Another reason the Mac might be over twice as fast as an AMD dual MP board is not just the 2 MB L3 cache but the fact that the Mac can read and write to a cold page of memory simultaneously FASTER than any AMD MP designs, which are biased for linear access and streaming. Many memory scatter benchmarks show this too. Apple's newest DDR-RAM machines might not offer this feature though.

    So basically, will the new Hammer systems be able to get close in speed to the RISC-based PowerPCs for RC5 and other crypto tasks?

    I really want to know. And I am so sad to see Slashdot reduced to fanboys modding down anything discussing tech subjects like this as "flames" all the damned time. This post is all informative and factual, and my reason for asking is genuine.

    http://www.research.ibm.com/journal/rd46-1.html has 5 LARGE technical articles on how the POWER4 chip was designed... in PDF form too. Even if you do not appreciate the Power4 (which Apple will be using a dual-core version of in many months) you might want to read these PDFs because they are all about chip design.

    They put the floating point units on the corners of the chip die to help spread heat, etc. Hundreds of interesting facts and pictures at that site.

    Top500.org lists Power3 dominating the cluster speeds of the top 500 computer clusters for memory+float speed. Power4 will soon start appearing in that list, as will the "lite" version with only 2 MB of cache instead of 4, 6, and 16 MB.

    Plus the new chip Apple will start using, announced yesterday, will have SIMD "VMX" or Velocity Engine added (Moto calls theirs "AltiVec")... only 90% of AltiVec's hundreds of opcodes will be offered though.

    With Pricewatch showing the cheapest 800 MHz Itanium bare CPU at almost 8 THOUSAND dollars, and 3.5 thousand for the old 700 MHz Itanium, it does not take a financial genius to see why Apple's workstations are selling so well nowadays.

    • "only 90% of AltiVec's hundreds of opcodes will be offered though."

      Source?

      Altivec is 162 instructions, and the Microprocessor forum brief on the GPUL stated "over 160 instructions"
    • I remember this IRC log [slashnet.org] from a while back. In a nutshell, they said that the PowerPC architecture (namely AltiVec) is well suited for RC5 since it has nice hardware bit rotates, and RC5 uses rotate A LOT.
      [acidblood] More registers available (32 in the PowerPC versus 8 in MMX and SSE2), plus 128-bit wide registers (MMX is only 64-bit wide), and the existence of a hardware vector rotate instruction in Altivec, which isn't available in MMX and SSE2.
      Is RC5 a useful benchmark if it mainly tests bit rotate performance? Does Intel/AMD really care if their RC5 keyrate is low? Are you going to decide which CPU to get next based on bit rotate performance?
    • I hope THIS mask rev of Opteron (Hammer) chip will be faster than January 2002 PowerPC G4 chips.

      [...] The Mac Dual 1 Ghz g4 is faster than all existing dual AMD motherboards in RC5 benchmark by almost 100%.

      [..] Funny "Mhz myth" there showing itself I guess... Apple now is selling even FASTER machines [...]

      I can see the new "Switch" ad now (white background, jerky cuts):

      "I'm a network administrator and so are my friends" "We steal computer power from our employers, at school, wherever we can find it, to run this Are See Five thing"

      "Peace, love, and strong crypto"

      "So I noticed the Apple computers were pretty fast at kicking out keyblocks" "I had to have one"

      "Say it with me: Brute-force known-plaintext attacks" "That's what makes a computer cool"

      "If I'm going to spend a few thousand dollars on a computer, it's gotta be the best at at least one thing"

      "Hi, I'm Anonymous Coward. I'm a crack user."

      [Apple logo]

      Cmon. The estimated SPECint numbers are wonderful news. They're a lot closer to reflecting what most of us do with these machines than key-agile stream ciphers. Beating up x86 weenies with the RC5 key rates will just make them buy a couple of $400 Athlons to stick in the closet and gloat about price/key/sec performance. (That's counting electricity too.)

    • Well here is a benchmark [distributed.net] of the RC5 speeds for various processors. Yes the PowerPC does kick some major arse. Why is the question, and here [distributed.net] is the answer. Anyway, long story short I heard there is a nice barrel shifter in the PowerPC that makes them excellent candidates for the RC5 client. So as they said in the second link the RC5 contest is not a good benchmark for performance. Although, it is sweet how fast the PowerPCs cores are!

      JOhn
    • by acidblood ( 247709 ) <decio@@@decpp...net> on Thursday October 17, 2002 @11:19AM (#4469546) Homepage
      I suggest you read the distributed.net Slashnet forum [slashnet.org], where I explain why the G4 performs faster than x86 processors. Summarizing:
      • RC5 is completely parallelizable, so you could theoretically do as many simultaneous operations as you have execution units on your processor, as long as there's enough registers to mask memory load latency. Obviously, there's many more registers on PowerPC architectures than on x86.
      • The distributed.net core uses the AltiVec SIMD extension on the G4, which has a rotate instruction that serves absolutely no purpose I know of other than RC5 encryption. So I see Intel's point in not including a rotate instruction in SSE2: bit rotation is a completely useless operation except for RC5. Did I make my point clear enough? That omission, however, makes it difficult to use SSE2, given the limited number of registers available, coupled with the need to emulate a rotate instruction by means of shifts, ORs and an additional temporary register.

      It must be clear that, if Intel had included an SSE2 rotate op, the P4 would easily beat a G4, not at the same clock speed, but given that a G4 can't scale as well as a P4 it wouldn't matter anyway.

      Hammer can't get any better on RC5 without an instruction set overhaul. Athlons already do pipelined scalar integer rotates in 1 clock cycle; it's impossible to beat that.

      Also, please do not generalize the G4's distributed.net RC5 speed to a ``PowerPC superiority in crypto tasks,'' because it makes me want to laugh really hard at your cluelessness. SIMD is completely useless in real-world crypto applications: when you use a cypher in Output Feedback mode, which is how stuff is done in the real world when you're encrypting data instead of trying to break keys, you need the output of the last crypto operation to mix into the next operation. It should be obvious that you can't do operations in parallel then, so SIMD becomes useless and the Athlon goes back to being faster than the G4 at the same clock rate, and of course much faster at commercially available clock speeds.

      Oh, and the larger cache you mentioned has absolutely ZERO effect over RC5 performance. RC5 memory usage for each key being encrypted/decrypted is:
      • number of bits in key rounded to the next 32-bit multiple (64 bits in RC5-64, 96 bits in RC5-72)
      • number of cypher rounds plus one, times 8 bytes (12 rounds in the RSA Secret Key challenge equals 104 bytes)
      • 8 bytes for two temporary variables, which hold the plaintext before encryption and the cyphertext after encryption, or the cyphertext before decryption and the plaintext after decryption.

      As you can see, even if you take into account loop control variables and whatever else, it boils down to less than 150 bytes per key. You could probably fit a 60-wide superscalar core on the P4's measly 8 KB L1 cache.
      • I've used bit rotate operations for bitmaps and multiplications/divisions by powers of two. They can also be used in some cases for serial transmission of data. They're not completely useless.
      • The distributed.net core uses the Altivec SIMD extension on the G4, which has a useless rotate instruction, which serves absolutely no purpose that I know of on anything other than RC5 encryption.

        I'll admit I don't know AltiVec too well. But I can pretty much guarantee you that a SIMD rotate instruction would be fairly handy in a reasonable number of crypto algorithms (RC6 and MARS come immediately to mind). Assuming it's doing what I figure it's doing based on your statement.

        BTW, SIMD is useful in some crypto algorithms. In particular, I'm thinking of UMAC16, which was designed to be used with MMX or AltiVec. Yes, in most situations it's hard or impossible to run the high-level operations in parallel (though you can with Counter mode and when decrypting CBC -- they can both be done infinitely in parallel). And some algorithms do have operations internally that can be implemented with SIMD (mostly by design).
  • Windows XP (Score:5, Informative)

    by droyad ( 412569 ) on Thursday October 17, 2002 @05:19AM (#4467686)
    I hear that people are saying it would be difficult to port Windows XP to RISC chips (and a new 64-bit arch). This in fact is not true. In the Windows NT family there are 2 features that make it easy:

    1) It's mostly written in C/C++
    2) The HAL (Hardware Abstraction Layer) contains most of the platform-specific code. As I understand it, the kernel does not actually handle the hardware directly

    Of course I can see it going like this:
    1) Apple, Intel, AMD and Motorola put forward new chip designs
    2) They ask MS to support it with their OS
    3) MS picks Intel

    --

    $vi any_article_on_iraq
    :s/iraq/microsoft/gi
    :s/Weapons of mass destruction/Windows/gi
    :s/Axis of evil/Redmond/gi
    :s/In this post september 11 climate/Service Pack 1/gi
    :s/Bush/Linux/gi
    :wq

  • 486dx4-160 (Score:5, Funny)

    by clinko ( 232501 ) on Thursday October 17, 2002 @05:53AM (#4467764) Journal
    486dx4-160? No wonder you crazy linux folks hate windows. You haven't bought a computer since 1995.
  • by Ocelaris ( 448953 ) on Thursday October 17, 2002 @06:11AM (#4467805)
    I think the point of getting more powerful processors is not just for everyday use, but increasing the overall computing power in the world. Imagine getting back the results from Folding@Home in a week, rather than a couple years... sequencing genomes etc... There are very valid purposes for computationally powerful machines, just because WE don't know of any (in our daily lives), doesn't mean that there aren't any (hehe, agnostic argument).

    If someone were to say to me that the number of kids on computers today, doing the things they do, was not directly related to computational power, I wouldn't believe them. The more power, the further the abstraction from what computers really are underneath, and hence the broader the user base.

    If my old computer that my mom uses were 100x as powerful, it would be smart enough to go look online as to why it's having errors printing, and I'd never have to venture out of my cave in the basement :-) Good enough reason for me.
  • Gartner says a lot of things. Didn't they say Linux would fail a couple years ago? Then didn't they recently publish something else saying Linux would make great strides this/(last?) year?

    It's just mind boggling that people take them seriously...

  • by Neil Watson ( 60859 ) on Thursday October 17, 2002 @08:44AM (#4468305) Homepage
    I think the industry has to stop being blinded by clock speed. Before you can improve the speed of the chip there are still bottle necks on the motherboards (e.g. PCI bus, Disk controllers). Also, there is the problem of power consumption and heat.

    I think a better approach for the future are smaller less power hungry modular CPUs. We've all seen the evidence of the clusters that makeup super computers. What if all standard computers came with 4 CPUs that used the same power as the P4 today? What if, instead of buying a newer faster computer, you could add CPUs like expansion cards but, at a reasonable price?

    • Today, SMP usually requires code written to take advantage of multiple CPUs. There are compilers out there that can do some automated threading (and have been for a while), but many threaded applications are threaded by hand. Basically, we'd need better compilers and OSs to go along with those computers than we have now -- compilers that can make runtime decisions on how many threads to fork/etc. and OSs that can report system resources accurately to the programs.

      That being said, your term "power" is heavily overloaded here... I'm sure you can put 4 G4 processors into a box and the total (electrical) power usage of the 4 G4s would be comparable to (or less than) a P4's. If you are talking about four processors that each have 1/4 of the computational power of a P4 (so four of them equal a P4), some applications will still need higher 'power' so that they can finish in times comparable to today. To paraphrase an old saying, a process is only as fast as its slowest thread =)
    • If people were doing more threading or planning to actively run more processes at once, then SMP would be more attractive. Unfortunately too few applications make use of multiple processors, and too few operating systems provide relocatable threads.

      P4 hyperthreading will hopefully get people into threading. Athlon will have slick four way and eight way multiprocessing with hammer when it finally rolls out. Halfway to 2003. I'm a student so I won't be buying until it comes out... That's what you get for delaying to add palladium you bastards.

  • paradigm shift... (Score:4, Informative)

    by john_uy ( 187459 ) on Thursday October 17, 2002 @09:30AM (#4468597)
    i think the new release of the hammer line will be very difficult for amd. intel is one step ahead. look right now: they are already announcing next-generation product lines on all fronts, like banias in cpus, ultra-low-voltage and integrated chips for small devices, and extremely high-speed chips for network devices.

    i believe intel has shifted its focus in the battle of the desktop cpus. while amd is just playing catch-up, intel is now looking at what consumers will actually benefit from. maybe intel has realized that today's speeds are overkill for the majority of needs, and they are just speeding up their chips to keep pace with moore's law.

    but look at their products, right now, they are focusing on making things smaller, lightweight, ultra low power consumption, low heat devices, integration. the future is not on desktop computers requiring very high speed cpu but mobile devices such as phones, pda, tablets, etc. intel will be a clear winner (if only i have humongous money so i can buy intel stocks at discount.)

    they have good engineers that produce good results. right now, they are already producing better chipsets for their server product lines, maybe a few years, they will no longer rely on broadcom's serverworks.

    they are also picking up on their storage chips. from all the raid controllers in the market, i hardly see a card that does not have an intel 960 i2o processor or their new ixp processors.

    their network and communication group is very dynamic, like introducing 10gigabit products today (even with the downturn of telecoms.) enabling encryption and decryption at 10gb/s is no joke. maybe a few years from now, we will see intel chips in the network gear from cisco, et al.

    they are now focusing on wireless integration. few years from now, capacitors and resistors will be in a silicon chip. it is the future, and they are very lucky to realize that. when the economy recovers, intel will clearly be a winner.

    and for the server, i want to say this: i believe amd will produce a good cpu. but that is just half of the story. amd is not emphasizing any good chipsets/systems to go with it, including support for pci-x at 133mhz with hotplug slots, interleaved memory with chipkill(tm), good server management, and good integration.

    (as one who decides what to purchase in a server,) amd must make a lot of effort before i will take them seriously. their cpu is not enough for me to get their system, yet.

    let's just wait and see, but i see that intel will always be a step ahead. now for amd, the challenge is to be at par or even be ahead of intel.

  • Sounds like next year might finally bring a worthy upgrade for my 486dx4-160

    I love it when people who never used pre-Pentium systems try to talk like they did... Everyone knows that a DX4 ran at 100MHz.
  • by Namarrgon ( 105036 ) on Thursday October 17, 2002 @11:13AM (#4469478) Homepage
    Tech Report [tech-report.com] are reporting a story [theinquirer.net] at the Inquirer which quotes AMD indicating it has "changed its roadmap schedule".

    They're saying that Barton will be here 1Q03, Sledgehammer is due 1H03, but now ClawHammer may be delayed until 2H03!

    Arghh. I thought the point was to do a 64 bit CPU without requiring an Itanium schedule...

  • The ultimate mobile processor should have a power saving mode that runs slower and won't burn your lap. My main prob with laptops is that you can no longer use them on your lap. They run too hot. This is of course due to the CPU, RAM and hard drive (maybe cdrom if spinning). But the CPU is on the most and runs the hottest of all those. They only put 4200 or 5400 rpm HDs in those machines so the HD can't get as hot as the CPU seems to get.
    Course it should also have a mode that burns through the case, but gets you those extra fragging frames on Q3 :)
  • by be-fan ( 61476 ) on Thursday October 17, 2002 @12:55PM (#4470458)
    I hate it whenever Mac-heads point to PPC and show how it's such a great example of RISC that runs "all your programs 2x as fast as the fastest Pentium 4!" In all reality, the PowerPC line (not necessarily the POWER line) is very unimpressive. These days, a 1.25 GHz Alpha can still hold its own against a 2.5 GHz P4 in terms of floating point power. Yes, the same Alpha that has been neglected for the last half-decade, whose design has stagnated since the 21264, and whose process technology is antique compared to AMD's and Intel's. But the Alpha still keeps kicking x86 in the head. Yet the PowerPC, running at the same 1.25 GHz, backed by the dual giants Motorola and IBM, built with leading-edge copper fab technology, the second most common desktop RISC architecture (after x86 :), shipping in every single Apple computer, isn't even competitive with the P4. Damn you DEC! Damn you to all hell!
  • What I want to see is how it handles memory intensive benchmarks. I think this may be where it will shine, with the DDR interface built directly into the processor, thus eliminating latency and bottlenecks imposed by the north bridge.

    The other big advantage most people seem to forget is the amount of memory addressing capability. Where I work, we have racks of Linux X86 servers with 6GB of memory each. While there are hacks to go beyond 4GB, it gets kind of ugly. With Opteron, addressing 6GB or more of memory is not a problem.

    Also, with their Hypertransport bus and supporting multiple processors, the amount of memory scales with the number of CPUs.

    -Aaron
