Intel

Intel Itanium 2 Benchmarks

Pablo writes "Over at VR-Zone we saw some interesting benchmarks of the upcoming Intel Itanium 2 processor, codenamed McKinley, which is on schedule to be launched during the second half of this year. With a faster 3MB on-die L3 cache, 6 instructions/cycle and 6.4GB/s of bandwidth, it is poised to perform at 1.5-2x of the current Itanium processor. There is an overview of how the Intel Itanium 2 at a 1GHz clock frequency will perform against the current 800MHz Itanium and Sun's UltraSPARC III RISC processor."
  • No benchmarks (Score:4, Informative)

    by KingKire64 ( 321470 ) on Wednesday May 29, 2002 @09:08AM (#3601394) Homepage Journal
    This is just a marketing piece put out by Intel. All the "benchmarks" are proposed estimates. And why would a dinky website get hold of something this "big"? Don't know, just questions.


    Mod Me down Please
    • Re:No benchmarks (Score:4, Insightful)

      by Merlin42 ( 148225 ) on Wednesday May 29, 2002 @09:31AM (#3601490)
      Personally I like how each page in the PowerPoint presentation used a slightly different USIII, ranging from 800 to 1050MHz. Hmm, smells like marketing picked out the best from a bunch of (simulated?) benchmarks. Everything was labeled 'simulated' or 'estimated'.

      Nothing to see here folks please move along.

      Well other than the slow death of competing high end architectures.

      Let's see, here we have:
      SPARC ... still competitive I think
      Itanium ... taking over
      Alpha ... going the way of the dodo (I'm really sad about this)
      PA-RISC ... transitioning to Itanium
      MIPS ... never really liked them for big compute stuff, let's hope SGI can turn things around.
      Power4 ... still competitive in performance but AFAIK to get a high end system you need to give your first born to IBM. And, IMO not really designed for the HPC kind of stuff I'm interested in.
      Cray ... ? I've heard of some really cool stuff being developed, i'll believe it when I see it.

      What else is out there? I haven't really been in the market for a high end system in a while, but it feels like the market is shrinking and soon Itanium will be "the choice," unless legacy support is a concern.
      • Re:No benchmarks (Score:2, Interesting)

        by Anonymous Coward
        The reason they compare to different SPARCs is that not all benchmarks are available for the newest one (Sun is playing the game too and only publishes benchmarks where it serves marketing).

        Note that Sun has been cheating on SPEC; that's the only way they can make recent SPARCs look competitive. It's a shame that it will force every other vendor to cheat in the same way.
      • Consider that every PlayStation and PlayStation 2 in the world uses it. They all use the MIPS III ISA (I know the PS2 does; not sure about the PS1, which might be MIPS II). I don't know if they are going to use it again for the PS3, but I would guess so. Something like 100 million PlayStations (PSX + PS2) all use it. Crazy to think, but true.
        • The embedded market seems to still be competitive. Consoles would definitely fall into this category. We do seem to be settling on a few ISAs in that arena, i.e. MIPS, ARM, x86, and PowerPC. But my understanding is that there are several implementations of each of these, so things are still interesting. Others in this category are not completely crushed, and probably won't be any time soon. What I fear (and maybe this is only FUD) is that Intel will suck the high end workstation market into its fold and kill competitive innovation there.

          Even though Intel controls the desktop market pretty tightly, they do not control the high end and low end. IMO this is because intelligent people can take their time to make decisions, either when designing an embedded device or when spending tens-of-thousand$ or million$.
      • Power4 ... still competitive in performance but AFAIK to get a high end system you need to give your first born to IBM. And, IMO not really designed for the HPC kind of stuff I'm interested in

        Hmm, what makes the Power4 not suitable for your HPC needs? IBM seems to think it's more than fine, since it's used in ASCI White (and anything else they sell for that purpose).
        • Ok, guess I'm a little out of the loop. In my rather limited experience I have only ever seen Power4s used for database/mainframe kind of stuff. For example, at a previous job we had an AIX box running the backup/tape robot machine and used Alphas for compute tasks. Most of the marketroid crap I have heard relating to Power4 has had to do with reliability at the hardware level. IIRC they have multiple identical CPUs that execute the same instruction stream and check the results, or is that only in the BIG mainframes?
          • "to power4 has had to do with reliability at the hardware level. IIRC they have multiple identical cpus that execute the same instruction stream and check the results, or is that only in the BIG mainframes?"

            Yes, that only applies to the really big iron, i.e. S/390 mainframes. The other boxen use redundant caches a la Sun's SPARC processors. Keep in mind that running N redundant processors is extremely expensive: not only the extra CPUs themselves, but the 'magic' hardware required to compare every instruction executed by them.
      • No, IMO PA-RISC is not transitioning into Itanium; HP did try that, but the performance loss was not acceptable. What I think might happen is that PA-RISC is abruptly killed and Itanium replaces it directly.

        MIPS is dead on the workstation/server scene, SGI went the Itanium way...MIPS is today almost only for embedded devices.

        Sun is building SMT into a variant of the next US generation, used for quad machines and bigger.
        But remember, the strength of a Sun SPARC machine is not the processors, which haven't been cutting edge for many years, but the big picture (overall performance) - the machines are extremely well built.

        But lately I've heard rumours about Compaq reconsidering the death of the Alpha because of a possible Itanium 2 flop; even internal forces in the HP part of Compaq are reconsidering the death plans for PA-RISC.
        • I sure hope you are right about Alpha or PA-RISC.

          Has SGI really gone Itanium? They have waffled on a LOT of things for years now. They kind of went wintel then backed out, then kind of went x86 linux then backed out [sgi.com]. Are they planning any new ia-64 products? The 750 [sgi.com] is a legacy product and the Pro64 [sgi.com] compiler seems to be gone.
        • Re:No benchmarks (Score:1, Interesting)

          by Anonymous Coward
          "MIPS is dead on the workstation/server scene, SGI went the Itanium way...MIPS is today almost only for embedded devices."

          I'd love to know where people hear these kinds of things! I'd like to find the source and plug it good.

          MIPS is SGI's primary platform for their workstation and server product lines. They will shortly be releasing Itanium based servers running Linux, but they have stated again and again that MIPS/IRIX and ITANIUM/LINUX are separate product lines. Some of SGI's troubles stem from the fact that Intel is 2+ years late bringing Itanium to market; they bet the farm on someone else's vaporware instead of their own (H1 & H2).
          • It was an official statement I came across a few years back; the plan is/was to migrate to Itanium.... The problem for MIPS is that SGI is almost the only customer buying these wonderful processors.
            Sales of MIPS processors for embedded devices are quite good.... AMD Alchemy is also MIPS based (the current parts are MIPS32, the next generation will be MIPS64).


      • Power4 ... still competitive in performance but AFAIK to get a high end system you need to give your first born to IBM. And, IMO not really designed for the HPC kind of stuff I'm interested in.

        Huh??? What are you smoking? :) The HPC market is one of the primary markets for the Power4, the other being big enterprise systems. In fact, CSC, the Finnish national supercomputer center, is currently installing their new toy, a Power4 machine which will have 512 processors when it's finished in September. FYI, that's 256 Power4 chips, as one chip has 2 CPU cores. Anyway, the design consists of 16 fairly standard 32-CPU pSeries 690 refrigerator-sized boxes. Currently I think they are connected with Gigabit Ethernet or something like that, but during the summer a proprietary IBM high speed interconnect will be installed. Total performance is estimated to be about 2.2 teraflops, more than 4 times faster than the old 540-CPU Cray T3E, and placing the computer among the fastest in Europe. Currently I think 6 nodes are operational...

        BRAG MODE ON
        And I have an account on that baby!!! *Drool* Wonder how many fps quake would get? ;-)
        BRAG MODE OFF
        No, seriously, they naturally have a strict policy on what you are allowed to run on it. You have to fill out forms requesting CPU hours with project descriptions, etc. Anyway, my plan is to run ab initio calculations on it. Hopefully, that is. They're having some serious problems, related to MPI, I think... which means everyone is submitting the big jobs they planned to run on it to the old T3E, which is rapidly getting overloaded... :(
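
        For anyone wondering what "problems related to MPI" actually touch: the ab initio codes sit on top of a message-passing layer, and a minimal MPI program looks roughly like the sketch below. This is a made-up toy (the partial-sum loop and names are mine, not from any real package), just to show the MPI_Init / MPI_Reduce plumbing such jobs depend on.

          #include <stdio.h>
          #include <mpi.h>

          int main(int argc, char **argv)
          {
              int rank, size;
              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which CPU am I?           */
              MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many CPUs in the job? */

              /* Toy work split: each rank sums every size-th integer below 1e6. */
              double local = 0.0, total = 0.0;
              for (int i = rank; i < 1000000; i += size)
                  local += i;

              /* Collect the partial sums on rank 0 -- the step that exercises the
                 interconnect (and the part that hurts when MPI is misbehaving). */
              MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
              if (rank == 0)
                  printf("sum = %.0f across %d ranks\n", total, size);

              MPI_Finalize();
              return 0;
          }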
    • Re:No benchmarks (Score:4, Insightful)

      by T-Punkt ( 90023 ) on Wednesday May 29, 2002 @09:44AM (#3601554)
      Even the graph has been done by a marketing guy.
      They sorted the benchmark results in ascending order and then connected the data points of completely different and independent benchmarks with a line!

      What is the line supposed to tell you? The faker the benchmark, the better the results? Or
      "This is a line graph that doesn't make sense at all. But look: it shows an increase, increase is good, so Itanium 2 is good!"
    • Re:No benchmarks (Score:4, Informative)

      by pmz ( 462998 ) on Wednesday May 29, 2002 @09:58AM (#3601644) Homepage
      Absolutely. From one of the slides "All projections based on Intel estimates[emphasis mine]...using...workload testing at Intel[emphasis mine]."

      And, absolutely none of the benchmarks are substantiated with real data!!!

      Only a fool would accept any of this presentation as fact. An even bigger fool would use this presentation in a decision whether to buy Sun or Intel.
  • Re: (Score:2, Insightful)

    Comment removed based on user account deletion
    • Re:On schedule??? (Score:1, Interesting)

      by Anonymous Coward
      SledgeHammer won't compete with McKinley, but with McKinley's successor, Madison.

      Oh, good luck to AMD...
  • Yamhill? (Score:3, Insightful)

    by ultrabot ( 200914 ) on Wednesday May 29, 2002 @09:12AM (#3601411)
    But what's with all the stuff regarding MS urging Intel to use AMD's x86-64? Isn't the future of IA-64 rather bleak right now? Even HP apparently says that "market will decide" whether PA-RISC or IA-64 will be their future Unix platform... Which would not be the case if IA-64 was obviously superior.

    Well, this can only mean good for Linux...
    • X86-64 (Score:3, Insightful)

      by OS24Ever ( 245667 )
      x86-64 may be more of a desktop migration point, but there are still plenty of IA64 type applications waiting in the wings from Microsoft, IBM and others.

      There is always Linux64.

      Not to mention the fact that many a Beowulf supercomputer would like to be designed around an Itanium 2. There is one at NCSA from IBM with 800-some IA64 chips. They're just waiting for the Itanium 2.

      • Here is an honest question, because I don't know the answer and want to know. Are there any 64-bit applications for Windows 2000 Advanced Server Limited Edition on the market? A quick search on Google didn't turn up much, only a bunch of old press releases about 3rd parties working on 64-bit applications, but no actual applications that I can find.

        If anyone has first-hand experience with 64-bit applications on Windows, please share. I'm not trolling, just honestly curious about real world deployments of 64-bit apps on Windows.

        • You are right, for Windows 2000.

          However, under Red Hat Linux 64 there are tons.

          And note the release date of Itanium 2, right alongside Windows XP. There are supposedly 64-bit versions of SQL Server waiting to release with .NET Server 64.

          Though honestly, most people that ask me about 64-bit computing are Unix shops (Solaris, AIX, others) wanting to migrate to a less expensive hardware platform running Linux, to replace some lower end SPARC or Power3 boxes.

          Though working for IBM, I tend to work harder at the Solaris conversions than the AIX ones ;)
          • Thanks for that info. What type of 64-bit applications are being ported, if you don't mind answering?
            • From what we have been told at this point SQL is the main focus to compete with those high dollar installs of Oracle. I've not heard about Exchange but it can't be very far behind.

              But SQL is the main focus to start the new major cash cow to compete with Oracle.

              Keep in mind, I work for a hardware vendor though. They don't tell us everything.
        • I do have experience running NT on an alpha.

          There's not much to comment on. It worked, slowly. There weren't many apps. Tools were very expensive.

          I switched it to linux way, way back in the 2.0.xx era. Since then it has acquired my personal uptime record of 350+ days while working 24/7 as home-office smtp/http/squid/ftp/firewall/etc. box. It's as solid as a rock and completely invulnerable to intel-style buffer overflow exploits and other arbitrary(x86) code execution attacks. I get a hit every other day from some skiddie that doesn't know what an alpha is or is not.

    • Re:Yamhill? (Score:1, Insightful)

      by morbid ( 4258 )
      M$'s influence is negligible on all but the tiniest (1-4 way 32-bit) servers. So, on larger servers, you're talking about real OSes and huge applications for which 64-bit versions already exist (and have for the best part of 10 years). To port to a new 64-bit arch. with modern compilers and libs, it's not much more difficult than a recompile. So Itanic may make some inroads into the server space, but only amongst those who buy into the Intel brand name and the hype.
      • M$'s influence is negligible on all but the tiniest (1-4 way 32-bit) servers. So, on larger servers, you're talking about real OSes and huge applications for which 64-bit versions already exist (have for the best part of 10 years).

        I don't know what you mean by a 'real OS'; however, having worked at the extreme high end, I can tell you that there are plenty of really fast machines whose O/S is in most respects (except performance) pure junk. And by high end I mean that some of the machines I worked on 10 years ago still outperform top end desktops.

        The fact is that high performance machines are usually bought for fairly narrow purposes and as a result tend not to need much of an operating system. Back in the early 1990s an awful lot of computationally intensive work was still being run on IBM mainframes under MVS and JCL, which in many respects are not a whole lot better than MS-DOS, but you would never get the people who ran those piles of junk to admit it.

        If you want high performance from a multiprocessor machine, the architecture of the O/S does matter a lot, but may not determine the outcome. Consider this analogy: when it comes to writing an optimizing compiler, modern languages such as Eiffel or Java give the compiler writer a heck of a lot more help than a language like Fortran. However, until very recently many of the top benchmarks for compiled code were for Fortran compilers, simply because brute effort could be used to compensate for poor architecture.

        When it comes to the 'architectural features' required to make a multiprocessor machine work fast the cards are all with Microsoft. WNT was designed to work well on multiprocessor platforms and the design team were mainly DEC ex-VMS people who had a lot of experience in that area.

        UNIX was originally architected for a UNIprocessor, and there are a lot of design decisions that you just would not take if making it easy to run fast on multiprocessors was your objective. On the other hand, this matters less than it might because there has been a lot of work since on compensating. The cost is that making an SMP system work well under 'UNIX' often means having to use features that are proprietary.

        The claim that UNIX has a better architecture than WNT is essentially as unprovable as the claim that vi is better than emacs. There are certainly still people in the computing world whose only experience of editing programs is with vi in line mode, but who will nevertheless post many gigabytes' worth of posts to Usenet arguing the point. Having actually worked on O/S design, having used 20-odd O/Ses, and having done system level programming on 4 (including UNIX), I can tell you that UNIX certainly did not get where it is on the strength of design merit alone. Many of the internals of UNIX are as confused and obfuscated as the syntax of the csh.

        To port to a new 64-bit arch. with modern compilers and libs, it's not much more difficult than a recompile.

        If you have nice 64-bit clean code, that may be the case. The problem is that most people don't start from good code, and even if they have tried to keep the code base clean they may not have succeeded. But remember that WNT has already been ported to a 64-bit architecture (Alpha), and although WNT is no longer supported on Alpha, you have to believe that the compiler rules are still in place to detect code that is not 64-bit clean.

        The issue that is probably more important for Microsoft is how .NET performs on Itanium. With .NET they have the ability, in theory, to take applications and compile them for any architecture at installation time. I don't think there can be much doubt that this architecture is designed to support Itanium.

    • Re:Yamhill? (Score:1, Informative)

      by Anonymous Coward
      No, HP clearly said that IPF (the official name for IA64 now) is going to completely replace PA-RISC / Alpha / Mips in the next 5-10 years.

      The market has decided already. It voted for "cheaper", and even HP can't spend the money to keep those other processors competitive without pricing its machines out of the market.

      I don't give SPARC more than 5 years.
    • I think IA-64 should be seen as a long term investment for Intel; it's a good piece of technology (well, at least it was expensive) and it's just not ready for prime time yet.

      x86-64 on the other hand, is a good solution for today's demand.

      So who will have the upper hand 5 years from now? Probably Intel, but who knows; GHz sells.
    • Umm, can you please point to where HP said that?

      If HP has been consistent on one front, it's that PA-RISC is being phased out and IA-64 phased in. The question is how fast, and how long they will be able to milk PA-RISC for all the support. (But they're not even betting on that too much it seems, since IA-64 HP-UX comes with PA-RISC binary emulation -- it's really more of a runtime translation than emulation -- and you get about 80% of the original speed, pretty nifty :) )

  • Yikes! (Score:2, Interesting)

    by delta407 ( 518868 )

    With a faster 3MB on-die L3 cache, 6 instructions/cycle and 6.4GB/s of bandwidth

    Not to mention 130 watts of power consumption. And you thought Athlons were hard to cool!

    • According to Sun's website, the UltraSparc-III 900MHz Cu chip only has a dissipation of 65 watts.
    • At the gym, I can pretty much generate 200 watts on a bike for a long time. When chips start requiring around the 300 watt range I will be totally dependent on the power company. I actually already am, but that is simply because of the wife's hair dryer.
    • Not to mention 130 watts of power consumption. And you thought Athlons were hard to cool!

      It's not pretty. The 4-CPU Itanium 1 systems are composed almost entirely of fans on the front to create a sort-of wind tunnel for the damn CPUs.
      When you turn the machine on, there's so much interference on fan spinup that all the monitors around you degauss :)

  • by johnjones ( 14274 ) on Wednesday May 29, 2002 @09:23AM (#3601455) Homepage Journal
    Well, I'm sorry, but all the benchmarks seem to be cache hitters and so run pretty damn fast.

    real systems are about BANDWIDTH

    memory bandwidth/latency is the reason AMD killed the P4 in benchmarks

    Let's see Intel go up against a Sun on a large Oracle DB, then I will take notice.

    Really, this is where Sun makes their money.

    regards

    john jones

    • lets see INTEL go up aganst a SUN on a large oracle DB then I will take notice

      Actually, Intel systems do pretty well [tpc.org], indeed, better than Sun running Oracle with a 3000G test database. And they do a good job [tpc.org] on transaction throughput too.
    • Well, the USIII has lots of cache, so it's a fair comparison. And the Itanium 2 has more than 3x the memory bandwidth of the USIII. The AMD also suffers from a bandwidth perspective vs. the P4, as up until DDR 333 even PC800 RDRAM was faster, and now they have PC1600 RDRAM ready to go. Also, TPC-C is not a synthetic benchmark; it is a fairly reliable test of complex DB systems. I am far from an Intel fanboy (I personally hate 'em for their business practices and the fact that they rely more on marketing than engineering), but I think that in many ways IA64 is a much better solution than x86-64; bolting yet more extensions onto a crappy core ISA does not make a good ISA.
      • RDRAM is now at PC1066, not PC1600, i.e. they moved the FSB from 400 to 533. DDR is on the way as well soon; it's not vapour, there are chipsets available, and it will be interesting to see how that shapes up. The other problem with RDRAM is the latency and cost of production; the traces have to be very exact due to the high speed the bus runs at.
      • Yeah, instead of extending a crappy ISA, with IA64 Intel managed to design a crappy ISA from scratch :-).

        Actually Hammer's long mode is a significant cleanup of the x86 instruction set. A number of rarely-used instructions and architectural features (e.g., segments) are removed, new general purpose registers added, various other features regularized (e.g., you can address the low 8 bits of every single register), and you can even ignore the x87 nightmare and use SSE2 as a clean floating point architecture ... it's faster than x87 too. (GCC can target SSE2 instead of x87 now!)

        I just like to say "AMD's Hammer will crush Itanium" :-).
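
        To make the SSE2 scalar floating point claim above concrete, here is a hedged sketch: the little function is a made-up example of mine, but the GCC flags are real, and with them scalar double math comes out as SSE2 (mulsd/addsd on xmm registers) rather than x87 (fmul/fadd on the register stack).

          /* Build with something like:  gcc -O2 -msse2 -mfpmath=sse mix.c -c
             (on 32-bit x86 targets the default without these flags is x87). */
          double mix(double a, double b)
          {
              /* With -mfpmath=sse this is mulsd/addsd, no fld/fstp shuffling. */
              return a * 0.75 + b * 0.25;
          }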
    • memory bandwidth/latency is the reason AMD killed the P4 in benchmarks

      There's definitely some confusion here. The P4 (with Rambus) has much better bandwidth than the Athlon. On any memory bandwidth benchmark (e.g. SiSoft Sandra) the slowest P4 is faster than the fastest Athlon. The 533 MHz bus version of the P4 does 4.26 GB/s, while the Athlon only does up to 2.4 GB/s or 2.7 GB/s. Latency-wise they're pretty much equivalent: they're the same using DDR, 400 MHz Rambus is slower than DDR, and 533 MHz Rambus is faster than DDR. Of course, for most applications latency is more important than bandwidth, which is why they're closer in overall performance than in memory bandwidth performance.
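
      For anyone who wants to sanity-check bandwidth figures like these, below is a rough sketch of the kind of copy loop such benchmarks time. It is not Sandra's actual method - the array size, repeat count and use of clock() are my own simplifications - but dividing bytes moved by elapsed time gives a ballpark GB/s number.

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (32u * 1024u * 1024u / sizeof(double))  /* ~32 MB per array, far larger than any cache */

        int main(void)
        {
            double *a = malloc(N * sizeof(double));
            double *b = malloc(N * sizeof(double));
            if (!a || !b)
                return 1;
            for (size_t i = 0; i < N; i++)        /* touch the pages once before timing */
                a[i] = 1.0;

            clock_t t0 = clock();
            for (int rep = 0; rep < 10; rep++)
                for (size_t i = 0; i < N; i++)    /* stream a -> b through main memory */
                    b[i] = a[i];
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

            /* Each element is read once and written once, hence the factor of 2. */
            double gbytes = 10.0 * 2.0 * N * sizeof(double) / 1e9;
            printf("approx %.2f GB/s\n", gbytes / secs);

            free(a);
            free(b);
            return 0;
        }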
    • We should also remember that companies buy Sun for reasons beyond speed. Sun has legendary support, when something goes wrong, or when they make a mistake, it gets fixed == FAST.

      Companies buy Sun because if for some awful reason a processor in an E10k dies they don't have to shut the machine down. The machine can be opened, still running, and the processors hot swapped.

      The fact is, Sun machines are really, really dependable, and that's what companies pay so much more for.
      • Not only can you hot swap cpu's etc. The latest SunOS^H^H^H^Holaris can be upgraded or patched without reboot!
      • Sure Sun fixes problems fast. This one only got fixed if you had pull and screamed loud.

        Sun did change the error message in later versions of Solaris 8.

        I know a large company (you would know the name) whose production R3 system was offline for more than a day, midweek, because of this (it crashed, and that triggered the problem that kept them down; thanks for the sucky support, EMC!).

  • by NetRanger ( 5584 ) on Wednesday May 29, 2002 @09:27AM (#3601476) Homepage
    I'm glad Intel is really optimistic about their processor. Now they need to either deliver the goods ... or just add more L1 cache and two more direct memory functions and voila, Pentium 5.

    It seems that in comparison to finding ways to rev up the clock speed, PC-based innovation in processors has stagnated -- at least as far as those innovations that actually reach the market.

    Perhaps I'm just picky.

    • Eh... how, exactly, do you manage to get from Itanium to Pentium 5 by just adding some cache? I also don't get your second point: how is changing to a completely new architecture (including the ISA) comparable to simply ramping up the clock speed? I suppose you must be referring to changes between Itanium and Itanium 2, but do you seriously suggest that Intel should introduce radical new changes in between generations of basically the same CPU?

      Sure, you could introduce compatible stuff like HyperThreading, but I believe this is scheduled for the generation after Itanium 2 (or the one after that). Nobody should really expect the Itanium 2 to be much more than an incremental improvement over the Itanium. The Itanium has, after all, always been regarded as sort of a developer's beta version of the architecture.

  • 18 posts and NO ONE has asked us to imagine a Beowulf cluster of these?

    seriously, on one hand it sure sounds like marketing hype but on the other hand what I want to know is how well it will benchmark on Doom III!
  • Who cares about how much faster this CPU will go? It doesn't matter that the CPU will perform 10 gazillion instructions per second or whatever because the I/O bus and Memory architecture are the bottlenecks.

    Until they get working implementations of a new I/O bus and a memory architecture that gets RAM bandwidth and latency up to a point where it can keep up with the CPU, this will continue to be nothing more than trivia.
    • Re:What about I/O? (Score:2, Informative)

      by Anonymous Coward
      Itanium 2 still uses an old and slow shared bus.

      In 4-way configuration, each CPU gets only 1.6GB/s shared I/O and memory bandwidth.
      UltraSparc III has 2.4GB/s memory bandwidth + some I/O bandwidth for each CPU.
      Sledgehammer will have 5.4GB/s memory bandwidth for each CPU.
    • Uh, with a 4+ MB cache, the I/O bus and memory architecture aren't nearly as big of a hit on performance as they are on our piddly consumer processors with a mere 512KB of L2 cache. (And up until recently, 256KB was the norm!)

      Properly written applications that take advantage of the cache (think video encoders that apply multiple filters on content already in the cache, for one example) are going to scream on this architecture.
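
      A hedged sketch of the cache point (the filter functions and tile size below are invented for illustration, not taken from any real encoder): instead of streaming the whole frame from main memory once per filter, run every filter over one cache-sized tile before moving on, so the later passes hit data that is already hot.

        #include <stddef.h>

        #define FRAME_PIXELS (4u * 1024u * 1024u)   /* a 4M-pixel frame              */
        #define TILE_PIXELS  (32u * 1024u)          /* small enough to stay in cache */

        static void brighten(unsigned char *p, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                p[i] = (unsigned char)(p[i] / 2 + 128);   /* toy "filter" #1 */
        }

        static void invert(unsigned char *p, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                p[i] = (unsigned char)(255 - p[i]);       /* toy "filter" #2 */
        }

        /* Cache-friendly version: both filters run over each tile while it is hot,
           instead of two full passes that each pull the frame from main memory. */
        void process_frame(unsigned char *frame)
        {
            for (size_t off = 0; off < FRAME_PIXELS; off += TILE_PIXELS) {
                size_t n = FRAME_PIXELS - off < TILE_PIXELS ? FRAME_PIXELS - off
                                                            : TILE_PIXELS;
                brighten(frame + off, n);
                invert(frame + off, n);   /* second pass: data already in cache */
            }
        }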
  • by A_Non_Moose ( 413034 ) on Wednesday May 29, 2002 @09:32AM (#3601495) Homepage Journal
    does it make the intraweb go faster?
  • Compilers (Score:2, Redundant)

    by Ted Maul ( 582118 )
    I gather that the Itanium philosophy is to transfer the complexity to the compiler. The question is, how good are the compilers now? At the moment, it looks like a real bastard of a job putting together a decent one for any Itanium series. That VLIW stuff looks like it needs to be spot on every time to get the performance (don't do 3 fp ops in a row).

    When I can run my C++ through an Itanium compiler and have it come out good, then I'll believe it. Benchmarks? Right.
    • I gather that the Itanium philosophy is to transfer the complexity to the compiler. The question is, how good are the compilers now? At the moment, it looks like a real bastard of a job putting together a decent one for any Itanium series. That VLIW stuff looks like it needs to be spot on every time to get the performance (don't do 3 fp ops in a row).

      Solid information content: 0
      Repetition of things heard elsewhere: 10
    • Re:Compilers (Score:3, Interesting)

      by BlueGecko ( 109058 )
      Actually, current compiler technology can probably optimize quite well for a VLIW architecture. The only catch is that for it to work properly, you will need to profile your code. But with that data, a good compiler that can figure out where to optimize shouldn't be that hard to write.

      What I really don't get, though, is why no one is focusing on using JITs with these. It strikes me that this is the ideal platform for a JIT, where it can recode parts of the program on-the-fly based on where the bottlenecks are and so forth. I mean, wasn't this the whole point of a just-in-time compiler in the first place? IBM's Java runtime can rival C++ in speed if it is allowed to run for a reasonable length of time, allowing the code to become truly optimized, and since Intel is targeting this thing in a server environment where applications will run for a similarly long length of time I fail to see why they aren't going that route. This has the additional benefit that as our understanding of how to optimize for VLIW improves, the programs do not need to be recompiled, but instead can immediately get the benefit. (I am fully aware that Itanium is supposed to do some of this type of optimization itself, but current specs are utter crap, to put it lightly.)

      Interestingly, Sun's MAJC architecture does exactly that, expecting that a JVM or similar virtual machine will run on top. I have no clue what happened to that chip, but it struck me that it had much better potential to kick ass than Intel's Itanium despite having similar designs precisely because it was designed for a JIT to be on top.
      • Re:Compilers (Score:3, Insightful)

        by roca ( 43122 )
        It's hard to generate decent code for the IA64 so building a good JIT for it requires a very large investment. Furthermore the JIT compiler would probably be quite slow so it would have to run longer or achieve larger speedups for it to pay off.

        Although a JIT would be able to discover and exploit behavior patterns that didn't show up until runtime (and therefore not exploitable by a static compiler), it's not a panacea. Lots of programs are unpredictable even down to the level of individual loop iterations. Such programs really need small branch penalties and hardware support for instruction reordering ... which IA64 doesn't have.
      • This kind of processor would be great for running OS software, because you could compile your kernel/modules/XFree/KDE/postgres etc. with a magic +profile switch for profiling info, and recompile it with better performance later on. This would mean that different people would have the kernel/db etc. software optimised for exactly their needs, not for a general purpose implementation.
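
        Something close to this already exists as profile-guided optimisation; a hedged sketch of the workflow is below. The flag spellings are GCC's (newer releases use -fprofile-generate / -fprofile-use, older ones -fprofile-arcs / -fbranch-probabilities), and the tiny program is a made-up example, but the compile, run, recompile cycle is exactly the "+profile" idea.

          /*
           * Hypothetical profile-guided build (GCC-style flags):
           *
           *   gcc -O2 -fprofile-generate -o hot hot.c    # instrumented build
           *   ./hot                                      # run a typical workload; writes profile data
           *   gcc -O2 -fprofile-use      -o hot hot.c    # rebuild using the recorded profile
           */
          #include <stdio.h>

          int main(void)
          {
              long taken = 0;
              /* A data-dependent branch: the profile records which way it usually
                 goes, so the recompile can lay out the hot path accordingly. */
              for (long i = 0; i < 10000000; i++)
                  if (i % 7 == 0)
                      taken++;
              printf("taken %ld times\n", taken);
              return 0;
          }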
    • You mean they spent a billion+ dollars developing this thing to transfer complexity to the compiler?
  • by Anonymous Coward

    I have had the chance to work with a McKinley box for a few months now, and it is without a doubt the fastest chip in the West, especially for some applications like public key crypto algorithms.

    Indeed, McKinley running at 1 GHz can do a 1024-bit private key operation in 0.2 milliseconds - something well beyond any other existing processor. For high-volume secure electronic transactions, McKinley rules.
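
    For context on why this workload suits a wide integer core: the bulk of an RSA private-key operation is modular exponentiation, i.e. a long run of multiplies, squarings and shifts. Below is a toy square-and-multiply sketch - real 1024-bit keys need a bignum library, and the 64-bit types here only work for moduli small enough that the intermediate products fit.

      #include <stdint.h>

      /* Toy square-and-multiply modular exponentiation: base^exp mod m.
         Only valid when (m-1)*(m-1) fits in 64 bits; a real RSA private-key
         operation runs the same loop on 1024-bit bignum words. */
      uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m)
      {
          uint64_t result = 1;
          base %= m;
          while (exp > 0) {
              if (exp & 1)                        /* low exponent bit set? */
                  result = (result * base) % m;   /* multiply it in        */
              base = (base * base) % m;           /* square                */
              exp >>= 1;                          /* shift to the next bit */
          }
          return result;
      }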
    • especially for some applications like public key crypto algorithms.

      Oh, here we go again with the crypto.

      You fail to mention that crypto is the ONLY application Itanium (1) was any good at -- it has a lot of shift units. Is McKinley going to be the same story? Great crypto performance, really crappy integer / everything-else performance?
      McKinley had better be better than that, or it's going to get the same lukewarm reception as the Itanium.

      And they do seem to be having the same difficulties pushing the clock speed to decent levels as they did with Itanium. It was only by the C-rev of the original Itanium (the last pre-production beta chip release) that they achieved 800MHz, while the original target, I believe, was over 1GHz.

      It's sad, since we all thought McKinley, being designed more by HP and less by Intel, would have good performance beyond SSL.

    • > For high-volume secure electronic transactions,
      > McKinley rules.

      For the price of a McKinley you could buy a Pentium 4 and a pile of someone's crypto ASICs, and blow the McKinley away.
  • better parallelism (Score:3, Interesting)

    by Anonymous Coward on Wednesday May 29, 2002 @09:40AM (#3601534)
    Site is slightly slashdotted, and most of the data is in gifs. Here's a fact or 2:
    Intel's claimed specint2000 and specfp2000 are both about 1.75x the 800MHz itanium. And this with only a 25% clock speedup to 1GHz.

    They claim specint2000 is 1.3x Sun Ultrasparc3 1050MHz, and specfp is 2x.

    Unfortunately, there is no indication of what the frequency headroom/scalability might be. The main point of the pentium4 architecture is to scale to 4+GHz. Can we assume anything similar for the itanium?
    • I think the point of the Itanium is to go wider, not deeper. VLIW, combined with the fact that modern CPUs can do most operations much faster than they can retrieve data, means that you want to fetch a large data/instruction packet and, when it arrives, decode it and execute it across many execution units, including possibly taking both sides of branches. I think the ultimate goal is to make SMP pretty much transparent, as you don't care if an instruction goes to an execution unit on one CPU or another.
    • The main point of the pentium4 architecture is to scale to 4+GHz. Can we assume anything similar for the itanium?

      I don't follow this too closely any more, but I would presume they'll get to 2+ GHz, maybe 3 GHz, but probably not 4 GHz. A 4x jump is a lot to ask for without some additional redesign, especially if you are talking 4-way SMP running at those rates.

      Given that they're claiming a 2x boost in SPECint2000 and SPECfp2000 from Itanium, on the same .18 micron process, the successor chips (Madison/Deerfield) on .13 micron should get them another 2-3x. Those are due sometime in 2003 I think.

      --LP
  • A lot of posts have noted that the article doesn't actually give benchmarks, and for various reasons the Itanium-2 may not perform as well as claimed. But consider if it does...

    Most scientific heavy-duty work, such as EDA (chip design, etc.), is done on Sun and HP stations running their brand of Unix. Now, if an Intel processor based station starts to perform better than a comparable Sun station at a much lower price, PLUS you run Linux instead of SunOS or HP-UX, a solution that costs a fraction of the price, and you get the same or better performance - well, you now have a VERY good reason for companies to start using Linux based workstations.

    So cheer Intel and AMD on- because it's good for Linux! :-)

    • There are a lot of compiler problems with the Itanium/Itanium 2; in fact, I think most of the development money went on producing good compiler algorithms for optimisation.

      This kind of processor would be great for running OS software, because you could compile your kernel/modules/XFree/KDE/postgres etc. with a magic +profile switch for profiling info, and recompile it with better performance later on. This would mean that different people would have the kernel/db software optimised for exactly their needs, not for a general purpose implementation.
  • SPECint???? (Score:3, Informative)

    by maitas ( 98290 ) on Wednesday May 29, 2002 @10:09AM (#3601701) Homepage

    I like the part where they said that Itanium 2 has 2x the SPECint performance of the original Itanium, since they never published it!! The SPECint performance for Itanium was so bad that they only published SPECfp data!
    It's just the same thing that happened when IBM published the SPECint/fp for POWER4 processors. They only publish the data using 1 processor on the p690, so they run the whole SPEC benchmark suite in the 128MB SRAM cache memory, avoiding the regular DRAM. The easy way to see this is that they never published any SPECrate number, to avoid showing that they don't scale once all the processors start competing for the cache.
    The Sun USIII 1050MHz is almost 54% faster than the USIII 750MHz, as anyone can check on the SPEC page (Sun Blade 1000 Model 1750 against Sun Blade 2050), with a 40% clock speed-up (the remaining 14% is due to the compiler). This is exactly the same processor at a faster clock, while Itanium 2 has more cache and a different architecture than Itanium, so a 1.5x to 2x speedup is less than spectacular, I would say.
    For transaction processing, they don't give any clue where they get the info from. While they expect to get the best OLTP number for 4-way systems, I don't think they will be able to surpass the AlphaServer ES45 Model 68/1000, which is by far the best 4-way system ever. What's worse, VLIW is known for being a poor performer for OLTP and a great performer for floating-point (that's why they only published SPECfp!!). They never published any OLTP benchmark for Itanium (no SAP, PeopleSoft, Oracle, or even TPC-C), so you can get an idea of how poor it is...
    As of today, the Fujitsu PrimePower with 128 SPARC processors is the fastest OLTP server ever (both SAP and TPC-C numbers!), with the IBM p690 a close second for TPC-C and the Sun SF15K a close second for SAP SD 2-tier. Intel never showed up in this kind of performance numbers, and Itanium certainly won't (at least not while they keep running Windows).
    • Re:SPECint???? (Score:1, Informative)

      by Anonymous Coward
      How could this post be modded up?

      Go to spec.org, you'll find all kinds of SpecInt results for Itanium (aka Merced). Those for Itanium 2 (aka McKinley) will appear at launch.

      The Sun USIII 1050MHz SPEC benchmark result comes from cheating on one test, where the compiler has been taught to recognize the benchmark and apply a conversion.
  • Itanium 2? (Score:4, Funny)

    by sharkey ( 16670 ) on Wednesday May 29, 2002 @10:17AM (#3601751)
    The first Itanic sank ALREADY?
  • Similar article in today's NY Times citing the Intel publication. Their article highlights that no comparison is made to recent Pentium-Xeon performance improvements. The article also mentions AMD's Hammer and suggests directly that McKinley may not measure up to the hype when compared with other processors.

    The article also mentions that Jack Dongarra - keeper of the Linpack-based list of the 500 fastest computer systems - now shows an Itanium-based cluster at the top of the heap.

    Unfortunately for programs that don't run out of the cache, there are three dimensions to computer system performance: processor, memory, and I/O.
    Intel marketing has successfully skewed the common perception to the detriment of a more balanced system viewpoint.
  • by Ilan Volow ( 539597 ) on Wednesday May 29, 2002 @10:58AM (#3601982) Homepage
    Previous McKinleys [schillerinstitute.org] haven't fared very well.
  • My chip runs faster than one gigahertz!

    No, seriously, it seems that initial releases of the Hammer will have 2x the clock frequency of the McKinleys. I hope Intel includes an "Opteron rating" in the names of the various models, just to help us keep things straight!

  • From the slides, it looks like they intend to use this exclusively with DDR-200, at least around the launch. I think this is a wise move by Intel, and bad news for Rambus!
  • by Anonymous Coward
    Since when are Intel's own PowerPoint slides accepted as "benchmark numbers"? These are far too vague to be statistically meaningful, especially considering they don't come from an independent source. I have no doubts that the McKinley (Itanium 2) kicks major booty, but I'm rather disappointed in all the hoopla over what's basically a marketing presentation from Intel. Someone post independent benchmark numbers and let us all know about them... don't waste our time with Intel's or Sun's or AMD's own PowerPoint slides.
  • by Anonymous Coward
    CPU is unworkable or be changed, please resetting
  • I just think it's funny that we have the official Intel (r) company logo for stories relating to Intel.... but we use the Borg Gates logo for anything relating to M$. Funny.
  • Could someone please tell me what exactly this chip is projected to be used for? What common applications deliver/generate more wealth to the chip's (or the PC system that this chip will be part of) purchaser compared to the 800MHz Celeron?
    What is the current advantage to buying this chip instead of an older chip that is 5-10% of the cost of these ultra-fast state-of-the-art CPUs?

    In October 2000 I bought several thousand dollars of Intel stock. That very night the stock price lost 40% of its value; going from $62 per share to the low $40s. It has never recovered and is currently in the upper $20s per share. What is this chip going to do to restore the value of Intel's stock?

    I'm serious. Please give me some of your Slashdot insight as to why anyone would want to buy this thing? Will the sales of this thing ever generate the funds needed to recoup the R&D investment (never mind generate enough excitement to actually boost the depressed stock price)?
    • I'm beginning to think I should start trading stock for a living. Every time I buy stock it drops in value. I figure maybe I should try to Sell-Short and then buy some. (not enough to cover it, but enough to cause the price to drop...).

      Or, maybe I should just invest heavily in Microsoft.

  • News? Fluff! (Score:1, Redundant)

    by MidKnight ( 19766 )
    Ooh! Look at the pretty Intel Marketing fluff piece! All those projected performance numbers that are "Under Embargo Until 12:01 AM EDT, May 29th, 2002". Give me a friggin break. Call me once they actually ship the processor, there's a proven MB/Backplane at a reasonable cost, and there's someone who will support me if/when it breaks.

    On second thought, don't even bother to call me then either. I can currently buy a Sun Enterprise 420R [sun.com] right now. What was the point of the story again?

    --Mid

  • A little something called remote maintenance. Oops, forgot: remote maintenance that is possible over a dial-up modem. Oops again, please change that to the ability to rebuild, boot, change EEPROM settings, and power off and on remotely, using a 9600 baud modem if I had to.

    Faster is not always better. As a system admin with both NT and Unix systems, my goal is availability and manageability first, savings second. Let's face it, I could rebuild a Sun Solaris box remotely with a Palm, a VT-100 emulator, and a cell modem from just about anywhere in the country if I had to.

    Why is that important? I can only speak for my company, but being able to do the Sun maintenance from the comfort of our homes/desks is very important to me and my staff. We have equipment in remote locations (over 100 miles away from the office), and not having to drive over there to rebuild a server or install patches saves $100 in expenses, plus takes only 20 minutes instead of all day.

    This is not an M$ bashing bit either. If we were using Linux on Intel we would have the same issues. What I need Intel to do is very simple: restore a serial console to the platform. Let me have access to the BIOS from the command line and during startup. Let me power equipment on and off from the console port too.

    Yes...I can run a cardpunch machine too!!!
