
AMD Talks About Internal Benchmarks for Opterons 295
ggruschow writes "AMD's CTO says their 2.0-Ghz Opteron (aka Hammer) beat a 2.8-Ghz Xeon (P4) on both SPECint2000 and SPECfp2000 tests, but was mixed against an Intel 1-Ghz Itanium 2 (details at
ExtremeTech). IBM predicted "conservative" 1.8-Ghz PowerPC 970 scores, which fall in the middle of the pack (sweet for OS X). It's probably not a coincidence that AMD's news comes so soon after Gartner said x86-64 would fail. Even if Intel loses the performance crown again, their upcoming mobile processor is looking pretty spiff with its recently announced 1MB of cache. Sounds like next year might finally bring a worthy upgrade for my 486dx4-160."
*sigh* (Score:2, Informative)
Darn, I missed fp by thinking...
Re:*sigh* (Score:5, Insightful)
But for 99% of normal peoples taskes 10% whont matter.
But it's the edge and it has to be somewhere and it has to move.
My rule is that I upgrade when I can get a cpu that is twice as fast as my old one for about 1000dkr (130$/).
Thats possible right now (I've a 850Mh celeron), but I need a new motherboard, which kind of changes the rules.
Re:*sigh* (Score:2)
"Doctors say that Nordberg has a 10 percent chance of living, though there's only a 50 percent chance of that. "
Re:*sigh* (Score:5, Informative)
But for 99% of normal peoples taskes 10% whont matter.
10% never matters. We regularly run simulations here [swin.edu.au] that take a month. What is 10% on top of a month? 3 days. If you have already been waiting 30 days, what does another 3 matter? It probably corresponds to the weekend anyway.....
Re:*sigh* (Score:5, Insightful)
If you are ten people, one of them could be fired, by your argument, without anybody noticing.
Let me turn it around - how many procent do you need before it matters? 12? 15?
But I agree, one can't upgrade everytime theres a 10% speed increase. One has to do the cost/benefit thing carefully first (and then ignore the c/b and just spend, spend, spend - the only way to get the economy back on track
Re:*sigh* (Score:2)
Let me turn it around - how many procent do you need before it matters? 12? 15?
Reminds me of the criterion I heard for how much of a pay increase is needed to induce people to leave their existing job for a new one.
IIRC, 10% wasn't enough. People need 15-20% increases to motivate the trouble of switching.
Not that computer speed and pay are really comparable...I think Bill G. is the only one whose pay has kept up with Moore's Law. Mine hasn't.
Re:*sigh* (Score:2, Insightful)
Basically, I find that I sometimes get into situations analogous to going to a bus stop. If you can get there in 8 minutes instead of 10, usually it makes no difference, but sometimes you can catch the bus when otherwise you would have to wait for another one.
What this means in my work, is that I might miss out on an open slot in the LFS batch queue. Or for a job that lasts several days, a few hours can make the difference between being able to present results at the meeting this week instead of having to wait until next week.
So I am glad to get 10% additional performance... but if I'm spending my own money on it, I'm probably not willing to spend any more than 10% additional dollars to get it.
Re:*sigh* (Score:3, Insightful)
On the contrary, if you can get by with spending 10% less on equipment (the other way of looking at this) than that can make the difference between being a solvent, viable company and everyone being out of work.
You're at a university, so you are under no commercial pressure to deliver. I mean, once you're past undergrad assignment deadlines, research gets written when it gets written, right? You can't rush science, maaaan, pass the bong. But in the real world, there are real consequences, and 10% could make a real difference to computation-intensive jobs.
Re:*sigh* (Score:2, Informative)
So it's fairly close.
Re:*sigh* (Score:2)
Re:*sigh* (Score:2)
No, AMD just barely hit 2.0 GHz with the Athlon 2400+ (the fastest Athlon available now), and the much hyped release of the 2700+ and 2800+ that were announced about 2 months ago has been delayed until the end of November.
And another ten, and another ten... (Score:5, Interesting)
But I do feel it when I upgrade from an outdated system to a new one. And to know what kind of performance I could get for a reasonable* (*as defined by me
Maybe that isn't relevant to you, maybe your 486 / Pentium / Duron / Space heater does what you want it to when you check your email and type up your word document, but not for all of us. I know a few tasks where I'd like 4gb+ of memory, solid-state SATA drive and a multi-GHz proc+, or a dual, for that matter.
Large strides are best made one small step at a time. This is just another one of them.
Kjella
Re:And another ten, and another ten... (Score:2)
Although these days I don't feel the difference between 1.0GHz and 2.0GHz, and I'm a software developer. I think that riles up some of the hardware fanboys, but it's true.
Re:*sigh* (Score:2)
What they really want to do is to come out with a new architecture that no one can copy.
AMD is still making use of old licensing deals with Intel that go back to the 80s and basically allow them to use x86 microcode etc.
If Intel can get Itanium adopted, AMD is SOL... Itaniam will be a bitch to reverse engineer, and is not covered under any of those old pesky licensing deals.
Sure, Intel is trying to advance the architecture, but the reason they're willing to spend whatever it takes to get Itanium accepted is because it removes all direct competition.
As usual, the business world is more cynically motivated than it seems...
486dx4-160? (Score:5, Funny)
You've got no holding power... hell i've still got my Commodore 64 with accoustic coupler modem, and i'll hold onto it until I see something worth spending money on...
MOD PARENT UP!! (Score:2)
C64 ? (Score:5, Funny)
Re:C64 ? (Score:4, Funny)
That would be RFC1149 [ietf.org], right?
Re:C64 ? (Score:2)
For IP over avian carriers to work, you need: a printer, preferably to microfilm, a scanner, preferably from microfilm, OCR software, and lots of avian carriers. Seems to me it would be far beyond the capabilities of the difference engine. What computer do you use to feed your difference engine the IP-protocol messages?
Re:C64 ? (Score:2)
Re:kids nowadays... (Score:3, Informative)
Re:486dx4-160? (Score:2, Funny)
You youngins with your new fangled machines. I'll never give up my UNIVAC.
Goto go and replace some valves now. See ya.
Old hardware, old software and efficiency (Score:5, Insightful)
Turbo Pascal used to compile at thousands of lines per second on machines with a clock nearly two orders of magnitude slower that tool several cycles per instruction instead of running several instructions per cycle.
Before you say something like "hey, but moderns compilers have optimizations yadda yadda" perhaps I should mention that this compilation time was with no optimizations and features like updating browser files disabled. With optimization it's even slower.
We're talking about four orders of magnitude difference in efficiency here. It's not all the compiler's fault, of course. The libraries and code use complex templates and multiple levels of definitions that make the compiler work much harder.
At each one of these layers someone probably said "It's OK if this is 10 times slower. It's easier to write and maintain, I'm more productive (or lazy) and the CPU is fast enough". Each one of these decisions may be justified *in itself* but they add up (or rather multiply up) to a 1/10000 difference in efficiency. Slowing the edit/compile/debug cycle reduces programmer productivity and code quality. Reduced code quality to more code bloat and even slower edit/compile/debug cycle and so on.
Damn, it's depressing.
Ehhh... (Score:3, Insightful)
Anyway, twenty years ago people didn't write thing modularly like they do today so recompiles were of a bigger piece of the project.
Now we use modularity, so code is broken up into much smaller pieces. A recompile need only be the file you're working on - the other 50 of them can just stay compiled as they are. Obviously 'make' was developed specifically to optimize the decision of what needs to be recompiled.
Sure, it is much, much slower. But linking takes very little time, and compile time has been cut way down by previous compiles - almost enough to make up the difference (although, I admit, not quite). Still, you're comparison is not the best - Pascal hardly has the powers available to a bigger programming language, and since its only been academic, not as much effort has been placed in making the compiler really smart (and therefore slower). Perhaps you should talk about Fortran '77?
Re:Old hardware, old software and efficiency (Score:2)
I guess, if you're compiling to figure out where your missing semi-colons are. Try working on a project where you can't tell whether your code works until there's a full build, and a full build takes 24 hours. You write quality code at that point, because you have to work top-down. No more write-compile-debug-write loops.
Re:Old hardware, old software and efficiency (Score:5, Insightful)
Object Pascal (Delphi) still compiles that fast, only now it does include optimization (maybe not as hardcore as some C compilers, but still pretty good). Borland used to advertise speeds of 800,000 lines per minute, back in the day when a 266MHz Pentium II was a hot machine. For most projects, the compilation speed is *zero*. For medium sized projects, it's in the "barely perceptible" range (as in maybe 1/30 second). Very, very impressive.
Why is it so fast? There are a variety of reasons, in rough order of importance:
1. There are no header files. All exported identifiers are in the "interface" section of the main source file.
2. Interface information is always precompiled into a lean format, so there's no need to #include giant files (kind of like having all headers always be precompiled).
3. There's no preprocessor.
4. "Object" files are stored in a lean "almost linked" intermediate format, rather than traditional, bulky object formats. This makes the linker a very simple and fast affair, but linking can be the slowest part of building a C++ project.
5. The compiler, linker, and build manager are all in one executable, so there's no loading programs during compilation (typically for C++, make is loaded first, the compiler is loaded for each source file, then the linker is loaded at the end; yes, disk caching helps here).
6. Object Pascal is generally a cleaner language than C and C++, so parsing and optimization are easier.
Re:Old hardware, old software and efficiency (Score:3, Interesting)
2. Borland C++ and Delphi use the same machine code generator engine, so the optimizations are largely the same. The performance is largely the same. As you said, Delphi is single pass, and parses a good bit faster.
3. For those of you out there saying "huh? Pascal??? No one uses THAT??!?!" Guess again. It is used a lot more than you might think, typically by small, lean shops with insane deadlines like mine.
Re:Old hardware, old software and efficiency (Score:2)
Something must be seriously ate-up on your machine. I have a ~20000-line MFC project in VC++6. On a dual Athlon MP 1900+, I get three EXEs and two DLLs each in debug and release builds in about 50 seconds. On an Athlon XP 1600+, the compile time increases a little bit to 65 seconds. I know the P4 is a slower processor than the Athlons I'm running, but it shouldn't be that much slower. (If I had my old 1.0-GHz Athlon set up, I'd benchmark the build on that for sh*ts and grins.)
Yawn, wake me when it ships. (Score:4, Insightful)
Not that I'm not excited about 64bit CPUs on the desktop, I could really find a use for one (I've got something interesting that likes to malloc more than 4GB sometimes).
Re:Yawn, wake me when it ships. (Score:2, Funny)
Woah, you do open a lot of porn with mplayer!
Re:Yawn, wake me when it ships. (Score:3, Funny)
Mozilla?
Re:Yawn, wake me when it ships. (Score:2)
>Mozilla?
No, he said interesting.
(Disclosure: This was posted using Mozilla.)
boring... (Score:2, Insightful)
If you need new computer, buy it (NOW!), otherwise don't buy anything until you need it.
Pentium 4s have no shared cache. uni-processor (Score:2, Informative)
If you want DUAL cpus, or more, you have to go mac or AMD to get speed per dollar.
and macs are twice as fast as the fastest AMD for rc5 benchmarks.
a pentium 4 is a heatwasting joke once you start using 2 or more cpus.
Apple is only selling dual cpu machines now. And when the dual core Power4 ships in 8 months or less, they mught be offereing 4 cpus economically as a stock product, even if they do not, many 3rd party dual cpu board suppliers for macs exist, such as Sonnet Technologies.
Re:Pentium 4s have no shared cache. uni-processor (Score:2, Informative)
2) Macs won't be shipped with POWER4's in them, they'll _probably_ be shipped with PowerPC 970s (which are effective single core Power4's + VMX)
How much is adequate? (Score:4, Interesting)
Re:How much is adequate? (Score:3, Insightful)
He can browse with it, why does the home user need more? That with linux or winNT and memory would do everything average Joe wants.
The answer is A)marketing B)keeping up with the Jones' and C)Because there IS always something new for people to do.
You won't stop CPU dev, there's always someone who could use it or some Redmond based multinational doing something to make it needed.
No-one NEEDs more than a P100 tops. They CAN find a use for it though and that'll never changed. The reason can be summeried thusly.
"Hey Ma, look at what this fancy computer can do!"
Re:How much is adequate? (Score:5, Insightful)
Yeah, but only in the way than no-one NEEDs modern medicine, central heating, or citrus fruit during the winter.
On the other hand, I NEED faster than a Duron/600 for:
sending messages in ICQ (yup, sending a message is O(n) or O(n^2) - not sure which) with n the number of messages in your scrollback
Encoding MP3s - I spent over 2 hours this afternoon switching CDs every 10-15 minutes.
Recording TV - I can only record to divx at quarter VGA or less
Using Mozilla the way I want (with 20-50 tabs open at a time and 128M of RAM cache)
Using an encrypted filesystem (unless win2k's implementation is just horribly inefficient)
Opening / manipulating 500M images
Sure, I could plop an XP2200+ in here, but I spent $50 on the original CPU and I'm unwilling to spend more on another until Hammer comes out. A dual Clawhammer should be about 10-20x as fast as my current machine depending on app - a most satisfying upgrade.
Re:How much is adequate? (Score:2)
=P
Re:How much is adequate? (Score:3, Insightful)
microsoft.
Sure, current computers will run word of 2 years time without (m)any worries, BUT, "innovation" has bumped up the required specs for every single windows/office release
Of course its not just microsoft which bumps up required specs, but their the driving force behind most hardware upgrades
As processors get faster, software gets both lazier and "smarter"...
lazier 'cause theres less optimization and "smarter" 'cause, for example, 15 years ago no one would have ever implemented some of the stuff thats present in todays computers (fex image thumbnails in explorer)
Re:How much is adequate? (Score:2)
Benchmarks... (Score:5, Interesting)
I would say that AMD may have an advantage for being more backwards compatible than Itanium, but I also feel that it is time for a change!
All major CPU manufacturers make proper RISC CPU already so why don't we find them in our ordinary computers? It is because the Windows codebase cannot simply be recompiled for a new target but has to be ported function by function (painful assignment, to say the least). Perhaps they can reuse 3/4 of the code, but still, there is a whole lot or rewriting and verification to do.
I have worked in a Tru64 environment (running Alpha CPUs) and I was surprised of how easy it was to get 95% of the Linux apps to properly compile and run. I didn't try to get Linux it self running but I had gcc running and that was enough.
What I'm trying to say is that the open source movement has proven that one can write portable code successfully and that it is time to make a hardware change. The serial ATA and AGP solutions from the PC are good enough, so is the PCI bus (lots of peripihals available) so I wouldn't change that, but simply make the standard computer run multiple RISC CPUs and a proper multi-threaded OS that can take advantage of that and then you'll have a performance boost that would make P4 look like a bicycle compared to a F1 car (ok, perhaps a Porche, but still, an F1 does 0-200kph in
While I'm at the subject. As we have bochs, it would still be possible to run Windows in a VM, no matter what platform we use, so all M$ users could be happy, or do as ACorn did (does), have a PC as a extension card, i.e. run a PC natively in a window, just used the *fast* RISC CPU for any real work.
WRONG! RISC "ordinary computers" exist! (Score:3, Informative)
You wrote "why don't we find them in our ordinary computers"!
In fact I am using one as I type this. It was built in 1996 (yes nineteen ninety six) and has a 800 Mhz G4 accelerator in it from Sonnet.
Its my "internet" machine, I use other RISC machines for programming not wired to any external networks.
It runs a wonderful version of Microsoft Office at full speed (RISC) and launches MS word in 2 seconds cold. (yes two seconds to flashing cursor).
no intel emulation needed.
its called a Macintosh
millions of macs exist and millions of macs use one or more risc processors and almost no mac people I know ever wnat to emulate a pc running windows EVER if they can help it.
RC5 and other benchmarks are twice as fast on standard macs than AMD, and Pentium 4s have no multi-cpu board designs...
If you want to run thousands of high end commercial shrink wrapped products in RISC you can, but only on macintosh. And they run very well in the new Jaguar 10.2 (though faster in 8.6).
Re:WRONG! RISC "ordinary computers" exist! (Score:2, Informative)
I personally own an iBook, and a comparable Dell was really about the same price. I agree that the dual G4's are a bit pricy but look at the prices of a nice Dual Proc Dell workstation fully equipped and then we'll talk again. Oh, and then don't forget that Macs last longer.
Always compare prices of Apple computers to Dells, Compaq's, etc. Don't start with the idea: "I can build something better cheaper", I know that, you know that, but it's a different market.
Re:WRONG! RISC "ordinary computers" exist! (Score:2)
* I know what PC means, but I used in such a way that I though that it was clear what I was saying. Please do not use spelling misstakes and such as an argument (it often happens at
* The BSD license lets them take code and do what ever they want with it, but that does not make it a *good thing*.
* Generic computers now - yes, but I said that they had a history of doing things their way (which generally yeilds more expensive hardware).
* High pricing - YES. I'd say what you show is expensive, even compared to a Dell or a Compaq. I do not see apperance as a reason for buying a computer (I have mine in a closet). Concerning Macs lasting longer, could it be because the development of new models is slower?
Re:WRONG! RISC "ordinary computers" exist! (Score:2, Informative)
Perhaps model development is slower, but I don't think it is that much an issue.
Re:WRONG! RISC "ordinary computers" exist! (Score:2)
I still don't want an IBM stationary. I can use their laptops as laptops are bound to have quirks anyway. I'm just worried since (to my knowledge) there are no custom Macs (i.e. just one manufacturer of boards).
I live in Sweden. I just poped by www.komplett.se and picked one of their standard computers:
Box: AVANTECH Medium Tower - Skruvlöst Kabinett m/300W
Processor: AMD Athlon XP2100+ 1.733 GHz 266 MHz bus - Socket A (Palomino) processor
RAM: DDR-DIMM PC2100 256MB DDR
Motherboard: MSI KT3 ULTRA2B Moderkort Socket A VIAKT333, ATA/133, ljud, ATX, USB2.0
HDD: IBM Deskstar 80GB IDE 7200RPM - ATA/100 120GXP
Graphics card: Asus V8420 GeForce4 Ti4200 64MB DDR. - AGP, (V8420/TD) DVI, Tv-Out, Retail.
CD-Burner/CD-reader: Asus CD-brännare IDE 40x/12x/48x CRW-4012A, Intern (FlextraLink)
DVD-Player: Asus DVD -spelare IDE 16x/48x (DVD-E616)
Soundcard: Soundblaster compatible
Speakers: Creative Högtalare SBS250 2 active speakers, White box
Network card: CNet Kort 10/100 Mbps PCI - TP only Davicom Chipset
FDD: Nec 1,44MB
Screen: Hansol 19" CRT 920P TCO-99
Keyboard, Mouse & Mousepad
Microsoft Windows XP Home (Svensk)
3 years warranty and free telephonesupport
This for only 12999SEK (around 1275Euro). This gives me a few hundreds to play with to get the Movie Studio and Works.
As for selecting extras for the PC to make it as good as the Mac. I had the same discussion with an Atari owner a few years back (I too am an Atari owner and user). He claimed that it was cheap as an extra MIDI interface would cost so and so much for a competing brand.
Re:WRONG! RISC "ordinary computers" exist! (Score:2)
I like the new macs, but I feel that they are more expensive that PCs^H^H^Hx86s.
What I wanted to say was that the mainstream computer aught to be a RISC machine. If the platform is a Mac, that is OK. The problem is that finder is propetary and that there are no (or few) producers of hardware except Apple.
Re:WRONG! RISC "ordinary computers" exist! (Score:2)
Why not? Nobody with any intelligence buys name-brand desktops. Would you pay more money to get inferior components? I wouldn't. Want a dual Athlon MP 1900+ with decent amounts of memory & disk, a decent video card, and Win2K for somewhere around $1600? Dude, you're not getting a Dell!
Re:WRONG! RISC "ordinary computers" exist! (Score:2)
Re:WRONG! RISC "ordinary computers" exist! (Score:2)
Additionally, they also work on GPL code (gcc being the most known) and of course also give back things there (even if they didn't want to, they'd have to). The result is that gcc 3.x has much better PowerPC code generation, from which Linux and *BSD on PPC machines can benefit greatly as well.
Finally, they also opensourced several things they wrote from scratch, like their CSDA and ZeroConf (RendezVous) implementations.
Re:WRONG! RISC "ordinary computers" exist! (Score:2)
Where are the dual Socket-478 motherboards? They don't exist...if you want Intel MP, you get to fork over the big bucks for Xeons. Hope you brought some Vaseline...you'll need it.
Re:Benchmarks... (Score:2)
RISC is no panacea, there's no real reason why a RISC box is inheriently faster (in real world use) than a CISC one - they're just different architectures.
The real reason wintel is still CISC is not Windows itself (NT4 for example is already ported to Alpha) but all the third party apps - people want to be able to run the xyz app they bought 5 years ago on their new box. This is why Intel is having fun trying to get their new non-backwards compatible architecture accepted widely.
Oh and gcc isn't a Linux app, of course it was easy to recompile for other platforms. That's kind of the point of gnu.
Re:Benchmarks... (Score:2)
more backwards compatible than Itanium, but (Score:2)
IMHO Itanium just isn't the way to go. By some measure if X86 is warty, then Itanium most closely resembles Ben Grimm in his best orange. By other measures perhaps IA64 is a cleaner architecture, but it's proving to be a sonofagun to write compilers for. To me that portends a somewhat moribund future with a highly complex compiler on a highly complex architecture. Even incremental improvements, other than clock speed and cache size ramping will be difficult.
RISC and CISC now the same thing (Score:2)
Just take a look at any modern RISC processor. Chances are it has several hundred instructions, ie they sure haven't "reduced" that instruction set by any significant amount. Than if you look at any modern CISC processor, you'll find that they just decode instructions into RISC-like ops internally. End result? The difference between RISC and CISC is REAL small these days.
If you read about the design of the Power4 vs. the Athlon, you'll see that essentially ALL of the basic building blocks are the same, it's mainly just a matter of how many of those blocks there are and how they all fit together. If anyone thinks that the Power4 is so fast clock for clock vs. the Athlon is because of it's instruction set, they probably just haven't looked to see that this chip has tons of execution units, HUGE cache and a shitload of bandwidth. All things that could potentially be added to a chip like the Athlon if the economics of such would fit.
Now, this isn't to say that x86 isn't without it's flaws, but most of those flaws are rather minor and have been worked around in compilers for years. The two biggest problems are the small number of registers and the stack-based floating point units. Well, Intel's SSE2 can now mostly replace the old floating point unit for the majority of tasks (though it typically isn't used as such yet), and AMD's upcoming Hammer/Operaton will double the number of registers available.
Re:RISC and CISC now the same thing (Score:2)
Since x86s now days are RISCs with a CISC shell, why not simply remove that extra layer of complexity and simply introduce a plain RISC architecture.
If you want to know how *bad* the x86 is, simply try too boot of a floppy and enter protected mode. You enter the CPU in 16 bits mode, have to fiddle with some special reigster, make sure to take a jump and then you're in.
Re:RISC and CISC now the same thing (Score:2)
Re:Benchmarks... (Score:2)
Someone already pointed out that Macs use RISC CPUs but in fact all modern x86 chips are really RISC cores with a translation layer from x86->RISC. Also most compilers optimize to a very RISC-like subset of x86. So you see, x86 has managed to evovle so it has most of the advantages of RISC plus the all important legacy support. This sort of thing is how x86 has managed to survive so long and why that's not nececarily a bad thing.
Re:Benchmarks... (Score:2)
WinXP even has a compatibility mode to run older apps since the system has been so badly designed from the start.
As for the pricing. The prices would drop if major suppliers started supporting them. This is what I want: the major players (i.e. "the computer industry") should realize that it is time to make a platform switch before we dig our selves even deeper into this pit of horror (i.e. x86 architecture).
Re:Benchmarks... (Score:2)
And on the other side of the pond, Linux was in te beginning not intended to run on anything besides x86. It has turned out otherwise, and Linux now runs on quite a lot of platforms.
Re:Benchmarks... (Score:2)
If you can point me to a link for PPC or MIPS motherboards with PCI busses, which use AT or ATX power supplies, I'd be very happy.
You're just going to have to pay four times as much. x86 systems are cheap because millions of people buy them ...
There's some truth in that. I've found motherboards for Alpha's, but they cost over $1000, so were a little hard to justify. The problem was that the manufacturer didn't want his motherboards competing with his assembled machines, I think.
Alpha, MIPS and x86 I know about. PPC I hadn't heard about. Was that for IBM's RS6000 workstations? I don't think PPC support was still there by NT4.something.
Clawhammer (Score:5, Informative)
The workstation Sledgehammer (Opteron) has two 16 bit busses
The server Sledgehammer (Opteron) has three 16 bit busses
The spec results are as follows:
Spec_int
PIII1G 426
G4 1ghz 306
G5 937 (IBM PowerPC 970)
2.8Ghz p4 1010
XP 2800 933
Itanium 1Ghz 810
Power4 1300 804
Clawhammer 2.0 Ghz 1202
Spec_fp
PII 1Ghz 426
G4 1Ghz 187
2.8 Ghz p4 947
XP 2800 782
Itanium 1Ghz 1356
Power4 1300 1169
Clawhammer 2.0Ghz 1170
Opteron??? Higher than clawhammer considering the multiple hyper transport busses 1/2 mb L2 (compared to clawhammer's 256/512 l2) and dual on chip DDR memory controllers compared to Clawhammers single memory controller
Bootleg Powerpoint Presentation:
http://130.236.229.26/download/misc/AMD-Opteron
and
http://a26.lambo.student.liu.se/download/misc/A
Read the Show notes! AMD failed to edit them out
Filename is AMD-Opteron.ppt google search it.
Includes a system that is an Opteron workstation dualed with a clawhammer that still presents itself as a single proc system. The clawhammer acts as a math co-processer
Re:Clawhammer (Score:2)
Couple that with the fact that large parts of Mac OS X are AltiVec optimized (lots of functions from the standard C library like memcpy, the OpenGL framework, the CoreAudio framework,
I hope Hammer will fix the rc5 crippled speed!! (Score:5, Informative)
Currently, according to the RC5 benchmarks AMD is far slower than dual cpu macintoshes (half as fast). (source available for cor rc5 loops for most processors). RC5 was silently completed in June or so but a bug went unnoticed for a couple months, but the contest is over. They measured performance in units of "Mac poerbooks" in their press releases.
The Mac Dual 1 Ghz g4 is faster than all existing dual AMD motherboards in RC5 benchmark by almost 100%.
21,129,654 RC5 keyrate for dual 1 Ghz g4 system ! And Now apple sells dual 1.25 Ghz stock which would be even faster.
A dual 1800+ AMD MP gets only HALF as many as a Mac! 10,807,034 rc5 keys !
Funny "Mhz myth" there showing itself I guess... Apple now is selling even FASTER machines but with smaller caches and less fast read-write ram (it now uses DDR on newest boxes).
And the macs are using low power g4 chips meant for microcontroller usages with very little predictive branching and a simple 7 stage RISC pipeline depth. (macs complete many many instructions per cycle though, unlike Pentiums).
The mac I mentioned uses a 2 MB L3 cache and no AMD MP dual cpu boards I know about have any L3 cache at all, so maybe that is whay some common macs are over twice as fast, its not just altivec meager tweaks to rc5. AMS have similar , but less mazing vector ops.
Another reason the mac might be over twice as fast as an amd dual mp board is not just the 2MB l3 cache but the fact that mac can read and write to a cold page of memory simulatneously FASTER than any AMD MP designs which are biased for linear access and streaming. Many memory scatter benchmarks show this too. Appels newest DDR-RAM machines might not offer this feature though.
So basically, will the new Hammer systems be able to get close to speed for RC5 and other crypto tasks as the RISC based Powerpcs?
I really want to know. And I am so sad to see Slashdot reduced to fanboys modding down anything discussing tech subjects like this as "flames" all the damned time. This post is all informatinve and factual and my reason for asking is genuine.
http://www.research.ibm.com/journal/rd46-1.html has 5 LARGE technical articles on how the POWER4 chip was designed... in PDF form too. Even if you do not appreciate the Power4 (which apple is using a dual-core version of in many months) you might want to read these PDFs because they are all about chip design.
They put the floating point on the corners of the chip die to help spread heat, etc. Hundreds of interesting facts and pictures on at that site.
Top500.org lists Power3 dominating the cluster speeds of the top 500 computer clusters for memory+float speed. Power4 will soon start appearing in that list as well as the "lite" version with only 2 MB of cache instead of 4,6, and 16 MB.
Plus the new chip apple will start using announced yesterday, will have SIMD "VMX" or Velocity Engine added (Moto calls theirs"altivec").... only 90% of altivecs hundreds of opcodes will be offerred though.
With Pricewatch showing cheapest 800Mhz Itanium bare cpu at almost 8 THOUSAND dollars, and 3.5 thousand for the old itanium 700 Mhz, it does not take a financial genius to see why apple's workstations are selling so well nowadays.
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:2)
Source?
AltiVec is 162 instructions, and the Microprocessor Forum brief on the GPUL stated "over 160 instructions."
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:3, Insightful)
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:4, Funny)
On the PC, all of my work was so slow. That repeated multiplication and division by powers of two took forever. That's why I got a Mac, which has great shift left / shift right performance! Now I have more time to ogle the secretaries from the water cooler.
I'm Colin Bayer, and I'm an accountant at Arthur Andersen.
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:2)
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:3, Funny)
[...] The Mac Dual 1 Ghz g4 is faster than all existing dual AMD motherboards in RC5 benchmark by almost 100%.
[..] Funny "Mhz myth" there showing itself I guess... Apple now is selling even FASTER machines [...]
I can see the new "Switch" ad now (white background, jerky cuts):
"I'm a network administrator and so are my friends" "We steal computer power from our employers, at school, wherever we can find it, to run this Are See Five thing"
"Peace, love, and strong crypto"
"So I noticed the Apple computers were pretty fast at kicking out keyblocks" "I had to have one"
"Say it with me: Brute-force known-plaintext attacks" "That's what makes a computer cool"
"If I'm going to spend a few thousand dollars on a computer, it's gotta be the best at at least one thing"
"Hi, I'm Anonymous Coward. I'm a crack user."
[Apple logo]
C'mon. The estimated SPECint numbers are wonderful news. They're a lot closer to reflecting what most of us do with these machines than key-agile stream ciphers are. Beating up x86 weenies with RC5 key rates will just make them buy a couple of $400 Athlons to stick in the closet and gloat about price/key/sec performance. (That's counting electricity, too.)
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:2)
JOhn
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:4, Informative)
To be clear: if Intel had included an SSE2 rotate op, the P4 would easily beat a G4. Maybe not at the same clock speed, but given that a G4 can't scale its clock as well as a P4, that wouldn't matter anyway.
Hammer can't get any better at RC5 without an instruction-set overhaul. Athlons already do pipelined scalar integer rotates in 1 clock cycle; it's impossible to beat that.
Also, please do not generalize the G4's distributed.net RC5 speed into a ``PowerPC superiority in crypto tasks,'' because it makes me want to laugh really hard at your cluelessness. SIMD is completely useless in real-world crypto applications: when you use a cipher in Output Feedback mode, which is how things are done in the real world when you are encrypting data rather than trying to break keys, you need the output of the previous crypto operation to mix into the next one. Obviously you can't run those operations in parallel, so SIMD becomes useless, and the Athlon goes back to being faster than the G4 at the same clock rate, and of course much faster at commercially available clock speeds.
Oh, and the larger cache you mentioned has absolutely ZERO effect on RC5 performance. RC5 memory usage for each key being encrypted/decrypted is tiny: even counting loop-control variables and whatever else, it boils down to less than 150 bytes per key. You could probably keep a 60-wide superscalar core fed out of the P4's measly 8 KB L1 cache.
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:2)
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:3, Informative)
I'll admit I don't know AltiVec too well, but I can pretty much guarantee you that a SIMD rotate instruction would be fairly handy in a reasonable number of crypto algorithms (RC6 and MARS come immediately to mind), assuming it does what I figure it does based on your statement.
BTW, SIMD is useful in some crypto algorithms. In particular, I'm thinking of UMAC16, which was designed to be used with MMX or AltiVec. Yes, in most situations it's hard or impossible to run the high-level operations in parallel (though you can with Counter mode, and when decrypting CBC -- both can be parallelized arbitrarily wide). And some algorithms do have internal operations that can be implemented with SIMD (mostly by design).
Re:I hope Hammer will fix the rc5 crippled speed!! (Score:2)
Anyways, RC5 is an extremely poor benchmark, since it only tests which processor runs the RC5 algorithm best, which is quite limited by nature anyway.
Windows XP (Score:5, Informative)
1) It's mostly written in C/C++.
2) The HAL (Hardware Abstraction Layer) contains most of the platform-specific code. As I understand it, the kernel does not actually touch the hardware directly.
Of course, I can see it going like this:
1) Apple, Intel, AMD, and Motorola put forward new chip designs
2) They ask MS to support it with their OS
3) MS picks Intel
--
$vi any_article_on_iraq
:s/iraq/microsoft/gi
:s/Weapons of mass destruction/Windows/gi
:s/Axis of evil/Redmond/gi
:s/In this post september 11 climate/Service Pack 1/gi
:s/Bush/Linux/gi
:wq
Re:Windows XP (Score:2, Informative)
*Way* longer than that.
In late 1999, MS shipped a crippled 64-bit compiler in their Platform SDK for syntax/portability verification. They began shipping a functional compiler and libraries six to nine months later. My then-employer (a network card manufacturer) used to get weekly or fortnightly pre-release builds of Win2k, and I'm fairly sure they had Itanium builds up to November 1999 or so, when they just stopped. We didn't have Itanium hardware anyhow.
486dx4-160 (Score:5, Funny)
Computational Power (Score:3, Insightful)
If someone told me that the number of kids on computers today, doing the things they do, was not directly related to computational power, I wouldn't believe them. The more power, the further the abstraction from what computers really are underneath, and hence the broader the user base.
If the old computer my mom uses were 100x as powerful, it would be smart enough to go look online for why it's having printing errors, and I'd never have to venture out of my cave in the basement.
Gartner (Score:2)
It's just mind boggling that people take them seriously...
Everyone, look AWAY from the clock speed. (Score:4, Interesting)
I think a better approach for the future is smaller, less power-hungry, modular CPUs. We've all seen the evidence from the clusters that make up supercomputers. What if all standard computers came with 4 CPUs that together used the same power as today's P4? What if, instead of buying a newer, faster computer, you could add CPUs like expansion cards, at a reasonable price?
Re:Everyone, look AWAY from the clock speed. (Score:2, Interesting)
That being said, your term "power" is heavily overloaded here... I'm sure you could put four G4 processors in a box and the total (electrical) power usage of the four G4s would be comparable to (or less than) a P4's. But if you are talking about four processors each with 1/4 the computational power of a P4 (so four of them equal one P4), some applications will still need higher single-thread 'power' to finish in times comparable to today's. To paraphrase an old saying, a process is only as fast as its slowest thread =)
Re:Everyone, look AWAY from the clock speed. (Score:3, Informative)
P4 hyperthreading will hopefully get people into threading. The Athlon line will get slick four-way and eight-way multiprocessing with Hammer when it finally rolls out, halfway into 2003. I'm a student, so I won't be buying until it comes out... That's what you get for delaying it to add Palladium, you bastards.
paradigm shift... (Score:4, Informative)
I believe Intel has shifted its focus in the battle for the desktop CPU. While AMD is just playing catch-up, Intel is already looking at what consumers will benefit from next. Maybe Intel has realized that today's speeds are overkill for the majority of today's needs, and that they are just speeding up their chips to keep pace with Moore's law.
But look at their products right now: they are focusing on making things smaller, lighter, ultra-low-power, low-heat, and integrated. The future is not desktop computers requiring very high-speed CPUs but mobile devices such as phones, PDAs, tablets, etc. Intel will be a clear winner (if only I had humongous money so I could buy Intel stock at a discount).
They have good engineers who produce good results. They are already producing better chipsets for their server product lines; in a few years, maybe they will no longer rely on Broadcom's ServerWorks.
They are also picking up on their storage chips. Of all the RAID controllers on the market, I hardly see a card that does not have an Intel i960 I2O processor or one of their new IXP processors.
Their network and communications business is very dynamic, introducing 10-gigabit products today (even with the downturn in telecom). Enabling encryption and decryption at 10 Gb/s is no joke. Maybe a few years from now we will see Intel chips in the network gear from Cisco, et al.
They are now focusing on wireless integration. A few years from now, capacitors and resistors will be in a silicon chip. It is the future, and they are very lucky to realize that. When the economy recovers, Intel will clearly be a winner.
And for the server side, I would say this: I believe AMD will produce a good CPU. But that is just half the story; AMD is not putting any emphasis on good chipsets/systems to go with it, including support for PCI-X at 133 MHz with hot-plug slots, interleaved memory with Chipkill(tm), good server management, and good integration.
(As one who decides what to purchase in a server:) AMD must make a lot of effort before I will take them seriously. Their CPU alone is not enough to get me to buy their systems, yet.
Let's just wait and see, but I expect Intel will always be a step ahead. For AMD, the challenge now is to be at par with, or even ahead of, Intel.
heh.. (Score:2)
I love it when people who never used pre-Pentium systems try to talk as though they did. Everyone knows a DX4 ran at 100 MHz.
dx4-160 explained (Score:2)
Hammer delayed further? (Score:3, Interesting)
They're saying Barton will be here in 1Q03 and Sledgehammer is due in 1H03, but now ClawHammer may be delayed until 2H03!
Arghh. I thought the point was to do a 64-bit CPU without requiring an Itanium-style schedule...
Mobile needs to run cool. (Score:2)
'Course, it should also have a mode that burns through the case but gets you those extra fragging frames in Q3.
PPC is not a great example of RISC (Score:4, Informative)
Memory intensive benchmarks (Score:2)
The other big advantage most people seem to forget is memory addressing capability. Where I work, we have racks of x86 Linux servers with 6 GB of memory each. There are hacks to go beyond 4 GB, but it gets kind of ugly. With Opteron, addressing 6 GB or more of memory is not a problem.
Also, with their HyperTransport bus and support for multiple processors, the amount of memory scales with the number of CPUs.
-Aaron
Re:AMD sucks! (Score:2, Interesting)
I want lots of cache and extreme memory bandwidth. As CPUs get faster and faster, both the lack of cache and slow memory access are seriously limiting the performance of current PC architectures. Yet not even Intel seems interested in improving those areas; in fact, with the P4, Intel actually cut the amount of cache.
Re:AMD sucks! (Score:2, Insightful)
Re:AMD sucks! (Score:2)
Intel bought DEC? :)
Re:486 160 mhz? (History lane) (Score:5, Informative)
In terms of performance, the fastest chip that fit in a Socket 3 was the Cyrix 5x86 120 MHz [pcguide.com], which (again speaking of integer performance) was the equivalent of a P100.
Re:486 160 mhz? (History lane) (Score:2)
Re:486 160 mhz? (History lane) (Score:2)
I still have one of those kicking around on a Biostar 8433UUD...it's not currently installed in anything, though. For $350 (processor & motherboard) in late 1995/early 1996 (?), it was a deal. It outran a P5-133 Packard Bell at work (not too surprising, since the Packard Bell had no L2 cache and sh*tty onboard video vs. the #9 Motion 531 I had at home). The only downside was that the 40-MHz FSB of the 5x86-120 meant that the PCI bus had to be underclocked (to 26.67 MHz) to keep things stable. I suppose I could've tried running the processor at 133 MHz (4x33) instead of 120 (3x40), but slower access to the L2 cache would probably have made performance about the same.
Re:486 160 mhz? (History lane) (Score:2)
I had one for many years. I'd say it would compare favorably to a stock P75.
Re:Browsing (Score:2)
I did.
Re:None will be successful in MY house (Score:2)
Early models will be able to deactivate Pd, anyway. When it becomes hardwired, that's the day I start looking at Apple and ARM.
Re:486DX4-160 (Score:2)