Linux Software Technology

North America's Fastest Linux Cluster Constructed

Posted by CowboyNeal
from the where-was-the-lightning dept.
SeanAhern writes "LinuxWorld reports that 'A Linux cluster deployed at Lawrence Livermore National Laboratory and codenamed 'Thunder' yesterday delivered 19.94 teraflops of sustained performance, making it the most powerful computer in North America - and the second fastest on Earth.'" Thunder sports 4,096 Itanium 2 processors in 1,024 nodes, some big iron by any standard.
  • by Anonymous Coward on Thursday May 13, 2004 @09:55PM (#9147308)
    pineapple on a monkey.

    And you thought I was going to say something else...

  • by irokitt (663593) <archimandrites-iaurNO@SPAMyahoo.com> on Thursday May 13, 2004 @09:56PM (#9147310)
    But why did they use Itanium processors? Were they acquiring parts before Opterons were available? Did they have a problem with Xeon processors? Or did they have too much cash lying around?
    • by MBCook (132727) <foobarsoft@foobarsoft.com> on Thursday May 13, 2004 @10:06PM (#9147409) Homepage
      I like the Opteron as much as the next guy and I'm no fan of the Itanic. But the fact is that for some types of calculations the Itanium can smoke Opterons. If you want the fastest, in many cases you want the Itanium. If you want the best value (which still performs quite close to the fastest), you want an Opteron. I don't remember which operations are better on which, so you'll have to look that up (or someone will reply with the answer).

      Depending on budget, price (I wouldn't be surprised if Intel cut them a sweet deal to get this cluster publicized to help out their product's sales), and other factors, the Itanium could have been a good choice.

      Especially if they were using software that had already been designed for the Itanium (say, if they were replacing an older cluster), they wouldn't have to port it, which would have saved real money.

      I'm not a fan of Intel lately, but the Itanium isn't overpriced garbage, no matter what anyone says. That claim smacks of fanboyism. Interesting that you didn't add G5s to your list, BTW.

      ALSO: Don't forget that the Itanium 2 was DESIGNED FOR big iron, while the Opteron was designed for servers and small iron. They can be used in other ways (you could run a web site off an Itanium 2), but the Itanium was designed for this kind of application.

      • Since when is a 4-way system "big iron"?
      • by tap (18562) on Thursday May 13, 2004 @10:57PM (#9147734) Homepage
        Do you have any kind of benchmark where the Itanium smokes the Opteron? The Itanium does have greater memory bandwidth, but not by a lot. If you look at the SPEC benchmarks, it can be faster on some of them, but not by a lot. The Itanium is, however, a lot more expensive!

        Compared to a Xeon or Athlon MP cluster, the Itanium fared poorly in price/performance. The only reason to use Itaniums was if you needed 64 bits for more than 4GB of memory, or needed high single-CPU performance for a poorly parallelized application. (Of course, if your application parallelizes poorly, a cluster is probably a bad choice to begin with.) Then the Opteron came out and changed all that. It's 64 bits, it's fast, and it's a fraction of the price of the Itanium 2.

        I just purchased a new Beowulf cluster. The decision was between Xeons and Opterons. The Opterons had better price/performance, but the Xeons would have fit in better with our existing Pentium 3 Beowulf, other ia32 servers, and existing software. In the end, we went with Opterons. Itanium 2 was never even in contention: one look at the price and performance of an Itanium 2 system was all it took to cross it off the list.
        • by fupeg (653970) on Friday May 14, 2004 @12:17AM (#9148167)
          Try any from SPEC, for example [spec.org]. Maybe you're thinking of x86, because otherwise the Itanium 2 is way out of the Opteron's league (as well as its price range, but that's beside the point).
        • Check the SPECfp benchmarks - Itanium 2 smokes pretty much everything else. Reason? It was meant to be an FP monster from the beginning. Integer math is weak (Opterons kick the Itanium pretty hard there), but FP math, especially vector FP math, is the Itanium's selling point. Why do you think the vast majority of I2 sales were to scientific research groups? (Check the target profile for SGI's I2 clusters: research and defense.)
          • by tap (18562) on Friday May 14, 2004 @01:26AM (#9148455) Homepage
            OK, checked them again. The best 1.5 GHz Itanium 2 SPECfp2000 score is 2148, while the Opteron 248's is 1691. That's 27% faster. I'd hardly call that smoked.

            The Opteron 248 is $670 on Pricewatch, while the 1.5 GHz Itanium 2 is $5200! The motherboards are something like $1400 vs. $400.

            You have to keep in mind that this isn't a single machine, it's a cluster. You could take the money spent on an Itanium 2 cluster and buy an Opteron cluster with five times as many processors. I am well aware that one does not get perfect scaling, but if you are running something on a cluster in the first place, I have a hard time imagining something that is faster with one-fifth as many 27%-faster processors. Yes, there are codes that would be faster on 1,000 Itanium 2s than on 5,000 Opterons, but you would never run those on a cluster, because they would be faster still on a shared-memory system.
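The price/performance arithmetic in this post can be sketched directly from the numbers quoted in the thread (May-2004 Pricewatch figures and SPECfp2000 scores; treat them as illustrative snapshots, not current prices):

```python
# Figures quoted in this thread: SPECfp2000 scores and May-2004 street prices.
itanium2 = {"cpu": 5200, "board": 1400, "specfp2000": 2148}  # 1.5 GHz Itanium 2
opteron = {"cpu": 670, "board": 400, "specfp2000": 1691}     # Opteron 248

# Per-CPU speed advantage of the Itanium 2 on SPECfp2000 (~1.27x).
speedup = itanium2["specfp2000"] / opteron["specfp2000"]

# Cost of one CPU plus motherboard, each way (~6.2x).
cost_ratio = (itanium2["cpu"] + itanium2["board"]) / (opteron["cpu"] + opteron["board"])

print(f"Itanium 2: {speedup:.0%} of Opteron 248 SPECfp per CPU, "
      f"at {cost_ratio:.1f}x the CPU+board cost")
```

With these numbers, one Itanium 2 node's budget buys roughly six Opteron CPU+board pairs, which is where the "five times as many processors" estimate comes from once you allow some margin for the rest of the node.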
            • by SuperQ (431) * on Friday May 14, 2004 @06:55AM (#9149596) Homepage
              The problem is not that you couldn't get the processors; the problem is scale.

              A system like this will use a high-speed interconnect, not GigE. The popular choice right now is InfiniBand, and that stuff isn't cheap; it also has limits on the number of ports per IB switch. The system at LLNL has 4 procs per node, which reduces the number of IB ports involved. 5,000 processors in dual-proc machines (you suggested the Opteron 248, a dual-capable part) would require 2,500 IB ports, instead of 1,024.

              Now, if you considered the Opteron 848 ($1300) in 8-proc nodes, that would be something to think about: far fewer IB ports for the same processor count, or double the processors on a similar fabric.

              The other consideration is processor scale. The 27% per CPU is significant, because even with dual-proc SMP you lose some percentage of the CPU time. There was a posting on an article about how processors scale this way; I forget how the principle works.
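The port-count argument reduces to one interconnect port per node, so the fabric sizes quoted in this subthread fall out of one line of arithmetic (a sketch only; a real InfiniBand fabric also needs spine switches between the leaf switches, which this ignores):

```python
import math

def ib_ports(total_cpus, cpus_per_node):
    """One interconnect port per node: ports = ceil(CPUs / CPUs-per-node)."""
    return math.ceil(total_cpus / cpus_per_node)

print(ib_ports(4096, 4))  # 1024 -- Thunder's quad-proc layout
print(ib_ports(5000, 2))  # 2500 -- the hypothetical dual-proc Opteron farm
print(ib_ports(5000, 8))  # 625  -- 8-way Opteron 848 nodes
```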
      • I don't remember which operations are better on which, so you'll have to look that up (or someone will reply with the answer).

        Wow! What a great argument strategy! Let me try...

        I like slashdot as much as the next guy. But the fact is that CmdrTaco is an evil blood-sucking cyborg who kills a puppy for each and every slashdot subscriber. I don't remember where I found this irrefutable proof, so you'll have to look it up yourself (or someone will reply with it).
      • Itanium vs Opteron (Score:5, Insightful)

        by vlad_petric (94134) on Thursday May 13, 2004 @11:25PM (#9147894) Homepage
        Itanium's instruction set is actually a lot more geared towards scientific computing than server workloads. Scientific code is usually very regular and quite easily schedulable by the compiler. Server code is generally memory-bound and very irregular, so the processor usually gets less than one instruction executed per cycle; bundling instructions (static scheduling by the compiler) is completely pointless there.

        "Big iron" is a very vague term. Server benchmarks behave very differently from scientific computation as far as performance is concerned; if you don't believe me, I can easily point you to a couple of research papers analyzing them.

        It's the humongous on-die caches that make the Itanium perform well on servers, definitely not the instruction-set architecture. So "DESIGNED FOR" is only 50% true.

    • Linux support (Score:3, Insightful)

      by linuxguy (98493)
      Intel provides excellent Linux support for the Itanium. Also, if you use the Intel compiler, which Lawrence Livermore does, you get a considerable speed boost on Intel CPUs.

      See: http://www.llnl.gov/linux/linux_basics.html#compilers [llnl.gov]

      Intel can afford to provide little niceties like this. Can AMD? I doubt it.

    • by Yenya (12004)
      The problems of the Opteron versus the Itanium 2 are:
      • You cannot order the bigger cache (the Itanium 2 can have 6MB of L3).
      • For "randomly branched" code you need as short a pipeline as possible. This is the reason the Athlon outperformed the Pentium 4 at the same clock speed. The Itanium 2 has an 8-stage pipeline, while the Opteron's is around 12 stages (it's the Pentium 4 that has 20), IIRC.
      OTOH, for full performance you need a _much_ more finely tuned compiler for a VLIW CPU such as the Itanium 2 than for a generic CISC or RISC CPU.
    • I am not an expert, but in general the Opteron seems to be targeted more at the workstation/server market than the supercomputer market. It's not as if you really need x86 backwards compatibility in the supercomputer field, so the Opteron doesn't seem to be optimized for that market. I think Intel made IA-64 more with supercomputers in mind than AMD did with x86-64.

      Some reps from SGI came to my LUG [golum.org] the other day, and talked about their clusters and supercomputers. The guy doing the Q&A said that he per
  • by krammit (540755) on Thursday May 13, 2004 @09:56PM (#9147315) Homepage
    ...who gets the electric bill.

    I cringe when I leave the A/C on for too long..
  • "Most" powerful (Score:5, Interesting)

    by Alomex (148003) on Thursday May 13, 2004 @09:56PM (#9147318) Homepage
    Look, any way you cut it, the 100K computers Google is reputed to have make it the most powerful Linux cluster anywhere in the world.
    • Re:"Most" powerful (Score:5, Insightful)

      by 0xC0FFEE (763100) on Thursday May 13, 2004 @10:04PM (#9147385)
      If Google's cluster is interconnected via Ethernet, there is a whole range of computational problems it can't tackle. If you want to simulate a spatial phenomenon with a lot of things going back and forth in a volume, you're bound to have a _lot_ of communication. The cost of the interconnect in these simulation systems is often a substantial proportion of the total cost of the installation.
      • Re:"Most" powerful (Score:3, Interesting)

        by Boone^ (151057)
        You're right, but this still only uses an off-the-shelf interconnect from Quadrics, and Quadrics bills itself as the "price/performance leader", not the performance leader.

        There are many purpose-built supercomputers coming up (like Sandia's Red Storm) that use custom yet pricey interconnects that end up smoking anything Quadrics can put together. Anytime your interconnect relies on a PCI-type bus, you take a latency penalty on each end. Real supercomputers access memory on other nodes directly, not through

        • Re:"Most" powerful (Score:5, Informative)

          by tap (18562) on Thursday May 13, 2004 @11:13PM (#9147824) Homepage
          I think you've got that backwards: Quadrics is the performance leader, not the price/performance leader. Myrinet, SCI, and InfiniBand all beat it in price/performance. Quadrics is faster, and scales to more nodes, than the others.

          According to Quadrics' latest price list, the cards are $1200 each, $913 per port for a 64-node switch, and $185-$265 for a cable. That's about $2300/node.

          Myrinet cards are $595, the switch is $400 per port for 64 nodes, and the cables are ~$50. That's $1050/node.

          Quadrics' price for a 1024-node interconnect is $4,176,094. That's hardly chump change. The bandwidth is about 10x that of gigabit Ethernet, and the latency about 100x lower.
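The per-node figures in this post are just NIC + switch port + cable. Using the list prices quoted above (and taking the midpoint of the quoted cable range as an assumption):

```python
def per_node_cost(card, port, cable):
    """Interconnect cost per node: one NIC, one switch port, one cable."""
    return card + port + cable

# Prices as quoted in the post; cable is the midpoint of $185-$265.
quadrics = per_node_cost(card=1200, port=913, cable=225)
myrinet = per_node_cost(card=595, port=400, cable=50)

print(f"Quadrics: ${quadrics}/node, Myrinet: ${myrinet}/node")
```

These come out at $2338 and $1045 per node, matching the rounded "~$2300" and "~$1050" figures above.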
    • Re:"Most" powerful (Score:5, Interesting)

      by irokitt (663593) <archimandrites-iaurNO@SPAMyahoo.com> on Thursday May 13, 2004 @10:06PM (#9147403)
      4,096 Itanium processors versus ~8,000 boxes sporting Pentium II, III, and 4 processors. But remember that Google's interest is in disk access and redundancy, not complex mathematical computation, so it isn't configured as a 'supercomputer' per se.
      • I think the guesstimate for Google was more like 80,000 boxes sporting P2/P3/P4 processors. That's an order of magnitude difference. The Ethernet backplane may cause latency issues for some types of problems, but remember what it currently processes every minute of every day.

    • Re:"Most" powerful (Score:5, Insightful)

      by smitty45 (657682) on Thursday May 13, 2004 @10:18PM (#9147482)
      Powerful = fastest computation, not biggest. A roomful of Chevettes does not make a Corvette.
      • Powerful = fastest computation

        Mod parent up! Now let's get out the ammeters and compare these computers kilowatt for kilowatt.

        Then we can be quite literal about which computer has "more horsepower," as one kilowatt is about 1.34 horsepower. (-:
      • A roomful of Chevettes means that someone has an unhealthy obsession with Vauxhalls...

        Nothing is as whiny as a Vauxhall Chevette doing 125 km/h down the motorway, knowing that it ain't gonna get any faster, except maybe on a slope.
    • I want to know which test reported 19.94. It certainly doesn't seem to be mentioned anywhere.
    • I thought of Google too, but yeah since their network isn't built for sheer computational speed, I doubt it's anywhere near the fastest/most powerful system for many processing tasks.

      The NSA, on the other hand... I would guess that they have the most powerful cluster of machines in the world for breaking encryption. Though perhaps not as powerful as the article's supercomputer for other tasks.

      Plus there are undoubtedly several other highly classified supercomputers designed to chew on other problems.

  • by chickenrob (696532) on Thursday May 13, 2004 @09:57PM (#9147320) Homepage
    Is it fast enough to run all the latest spyware, adware, and viruses and not slow down your solitaire game?
  • Awesome! (Score:2, Funny)

    by haxeh (766837)
    That's amazing!

    Now we can... uhh... what are we supposed to do with that much power again?
    • Re:Awesome! (Score:5, Funny)

      by MrRuslan (767128) on Thursday May 13, 2004 @09:59PM (#9147349)
      It's all reserved for Doom III on Longhorn.
    • Obviously we must use it for something constructive, like calculating the next few hundred billion digits of pi, or processing random white noise from space on the extremely unlikely chance that we'll be listening to exactly the right piece of the sky at exactly the right moment and recognize a completely alien transmission as something other than noise.

      Nah, seriously though, we'll use it to support the largest growing industry. That industry which powers and drives the human imagination and recognizes our g
    • You finally have the ability to really imagine a Beowulf cluster of these things, rather than telling all the Slashdotters to imagine it for you!
  • but but but (Score:4, Funny)

    by Anonymous Coward on Thursday May 13, 2004 @09:59PM (#9147342)
    Can it run Windows?
  • by SuperBanana (662181) on Thursday May 13, 2004 @10:00PM (#9147355)

    LLNL built a supercomputer, and it's going to do things besides simulate nuclear weapons [llnl.gov]?

    Quick, someone ring Satan and ask how the sno-cones are.

    • by geek (5680) on Thursday May 13, 2004 @10:27PM (#9147530) Homepage
      I grew up in Livermore; the lab was some 500 yards from my bedroom window. They work on a lot more than nuke simulations, including alternative fuels (my brother-in-law was driving a hydrogen-fuel car from the lab 10 years ago as a test), laser technology, and about a million other things. Why is it that people like you hear "nuke" and rant on and on like biased little children, posting inflammatory things like this?

      The lab is a GOOD thing damnit. Do you even know what nukes are? What nuclear research has done for us? Grow up man.
        my brother-in-law was driving a hydrogen-fuel car from the lab 10 years ago as a test

        I think that speaks volumes about the usefulness of LLNL's research. After all, it's been 10 years, and there are still no hydrogen-powered cars available for purchase by consumers. Furthermore, very little research is needed in the area; hydrogen conversion kits were developed by numerous companies and individuals decades ago.

        Why no hydrogen cars? Well, it could have something to do with hydrogen being a

        • Re:LLNL's usefulness (Score:3, Informative)

          by slamb (119285) *
          Why no hydrogen cars? Well, it could have something to do with hydrogen being a net-loss fuel; it takes more energy to make than it provides.

          That's thermodynamics. It's true for any fuel. It's even true for oil and nuclear energy - the difference being only that the energy wasn't put in during our lifetime. (And in the case of nuclear, that the pre-existing energy is all but inexhaustible.)

      • Nuclear weapons are dinosaurs. They did their job from 1945 to 1991. Who are we going to nuke now? The North Koreans, who are proposing a peace treaty? Canada? Nukes are weapons of deterrence. Osama isn't sitting in his cave thinking, "We shouldn't mess with the US, they might nuke us."

        • Which is why we haven't made nukes in 10 years and in fact have been REDUCING THE FUCKING ARSENAL IN A TREATY WITH RUSSIA. God, you anti-nuke retards are so behind the times. Get over it already.
  • Google Cache (Score:2, Informative)

    by nadolph (661727)
    http://www.google.ca/search?sourceid=navclient&ie=UTF-8&oe=UTF-8&q=cache:http%3A%2F%2Fwww%2Ellnl%2Egov%2Flinux%2Fthunder%2F
  • by MrRuslan (767128) on Thursday May 13, 2004 @10:02PM (#9147369)
    this thing should do doom 3 with a software renderer at a very playable 47 FPS...
  • vs google (Score:2, Interesting)

    by docl (601856)
    This is probably a stupid question, but would anyone care to explain how this is different from a really large cluster? For example, if people estimate Google to approach 100K nodes, how does this compare?
    • Re:vs google (Score:5, Informative)

      by complete loony (663508) <Jeremy.Lakeman@g m a i l . c om> on Thursday May 13, 2004 @10:20PM (#9147495)
      Google has lots of little (in comparison) jobs that have to process heaps of data. Google's cluster(s) wouldn't perform well on the Top 500 list, since they concentrate on raw data-processing power rather than on link speed, which is the main factor in performance for supercomputers.

      The GFS article that appeared a while back said they used standard 100Mbit Ethernet; that is not going to get you a good score in any supercomputer benchmark.

    • by Anonymous Coward
      Google's cluster isn't a computational cluster.

      You have several types of clusters; each is designed for a specific task, although you can easily mix and match for different purposes.

      1. Server clusters. Bunches of machines running together, providing services that complement each other.

      For example, you have a file server that is mirrored to another one hooked up to a different part of a LAN/WAN backbone in order to improve service. Lots of databases are clustered like this.

      2. High availability clusters.
  • Finally... (Score:2, Funny)

    by Fry-kun (619632)
    ...I can back up my brain
  • by blackula (584329)
    ...there are basically three type of clusters: 1) shared nothing: in this, each computer is only connected to each other via simple IP network. no disks are shared. each machine serves part of data. these cluster doesn't work reliably when you have to aggregations. e.g. if one of the machine fails and you try to to "avg()" and if the data is spread across machines, the query would fail, since one of the machine is not available. most enterprise apps cannot work in this config without degradation. e.g. IBM
    • ever
    • Had to decide whether to reply to this or mod it down; decided to reply.

      That's a wildly inaccurate summary of the landscape of RDBMS clustering technology.

      The problem is, that's not what we are talking about here.

      So the answer to your question at this end is almost certainly "none of the above", or probably more correctly "some bits of all of the above". Functionally, most of the kind of work you do here doesn't need shared concurrent access to the same data files; however, for simplicity of implementation they probably

    • by skdffff (140618)
      There are basically two types of clusters: HA (High Availability) and HPC (High Performance Computing). Both are called "clusters" (which confuses some people), but they are designed for completely different purposes. You're talking about variations of the first type, while the cluster in the article is an HPC cluster.
    • Ed Note: Unless the author wishes to narrow his/her audience to a small subset of Slashdot users, standard formatting and non-cutesy sentence case is always appropriate.

      There are basically three type of clusters:

      1. Shared Nothing: In this, each computer is only connected to each other via simple IP network: no disks are shared. and each machine serves part of data. These cluster doesn't work reliably when you have to aggregations. For example, if one of the machine fails and you try to to "avg()" and i

  • Another Article (Score:4, Interesting)

    by Flashbck (739237) on Thursday May 13, 2004 @10:05PM (#9147392) Homepage
    And only 55 people [com.com] were needed to build it!
  • by Anonymous Coward
    19.94 teraflops??

    Gimme something I can grasp; what's this in BogoMips?
  • by m1kesm1th (305697) on Thursday May 13, 2004 @10:06PM (#9147407)
    Also in completely unrelated news, Bill Gates announced the first fully installed test of Longhorn happened today.
  • by rco3 (198978) on Thursday May 13, 2004 @10:09PM (#9147423) Homepage
    Hey, with a Beowulf cluster of these, I can run Longhorn!

    OK, I'm done. Sorry. Mod away!
  • by Twid (67847) on Thursday May 13, 2004 @10:10PM (#9147427) Homepage
    If I calculate right, they are claiming an Rmax of 19.94 teraflops with 4096 processors.

    The Virginia Tech cluster for Apple had an Rmax of 10.28 teraflops with 2200 processors.

    So, the Itanium 2 delivered about 4.87 gigaflops per processor, and the G5 about 4.67 gigaflops per processor.

    This seems like a pretty poor showing for Itanium 2, overall. It's a much hotter chip than the Opteron or the G5, so cooling and power costs are likely much higher than for a comparable Apple cluster. The Xserve G5 is also likely cheaper than a similarly equipped Itanium 2 server, given that the Itanium 2 is $1398 per chip on Pricewatch and a dual-processor Xserve G5 cluster node is $2,999 list. Even with 4 CPUs in a single box, I think the Itanium 2 server would easily top $6,000.

    But anyway, good game to Lawrence Livermore. I'll be curious to see if Apple has another volley to fire before the top500 list closes for this round.
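The per-processor figures in this post are just Rmax divided by processor count, using the numbers cited above:

```python
# Rmax in gigaflops divided by processor count, from the figures cited above.
thunder_per_cpu = 19.94e3 / 4096  # Thunder: 4,096 Itanium 2s
vtech_per_cpu = 10.28e3 / 2200    # Virginia Tech: 2,200 G5s

print(f"Thunder: {thunder_per_cpu:.2f} GF/proc")  # ~4.87
print(f"VTech:   {vtech_per_cpu:.2f} GF/proc")    # ~4.67
```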

    • I love G5s, but IMO Virginia Tech's cluster can't say much until they get the G5 Xserves, because the PowerMac G5s don't have ECC memory. ECC is very important for such a large-scale project, which runs simulations where data is stored in RAM for any meaningful duration.
    • There's also the difference in the interconnects, which has a lot to do with the efficiency of the system as a whole.
      Let's see what the VTech system does with ECC RAM installed, when some nodes aren't double-checking other nodes' results.
    • And don't forget that the current round of G5s is almost a year old... and long due for an upgrade. I hope some other institution builds a 1,500-unit G5 2.6 GHz cluster :) (Or something to that effect.)
    • I checked California Digital's site. Those servers are 4U behemoths. The Xserves are 1U. So the Xserves actually take up half the space (2U for 4 processors versus 4U for 4 processors) for roughly similar performance.

      Like I said, I'm surprised the Itanium 2's performance was so low, given that it's a newer architecture than the PowerPC 970.

    • I heard a presentation from VTech on why they selected the G5 over the Itanium (for scientific calculations with a lot of floating-point operations, both are faster than AMD chips; not a big problem for AMD, of course; how many of us need to simulate nuclear explosions on our desktops? Well, at least until the next generation of strategy games, of course).

      At the time - this was a study done in July/Aug 2003, remember - the speed of the G5 and the Itanium2 were similar for the same clock speed (for scientifi
    • by prockcore (543967) on Thursday May 13, 2004 @10:53PM (#9147713)
      This seems like a pretty poor showing for Itanium 2, overall.

      It does? You know that clustered computing doesn't scale linearly. If Virginia Tech were to double the number of processors used, they wouldn't double their performance.
      • by Anonymous Coward on Friday May 14, 2004 @12:18AM (#9148179)
        Actually, there's more to it than that. Virginia Tech's machine only gets ~55% of its peak performance, whereas Thunder gets 87%. Given that Thunder has twice as many processors, that's an EXCELLENT showing. Remember, the actual work that's going to run on Thunder won't scale anywhere near as well as the easily scaled LINPACK benchmark, so the performance gap between "benchmark" and "real world" will only get wider in practice.

        Thunder is an absolutely remarkable machine.
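The efficiency figures here are Rmax over theoretical peak (Rpeak). As a sketch, Rpeak can be taken as clock × FLOPs-per-cycle × processor count; the clock rates below (1.4 GHz Itanium 2, 2.0 GHz G5, both assumed to do 4 FLOPs/cycle) are assumptions for illustration, and published efficiency percentages vary slightly with the exact Rpeak used:

```python
def efficiency(rmax_tf, n_procs, ghz, flops_per_cycle=4):
    """Rmax / Rpeak, with Rpeak = processors * clock * FLOPs-per-cycle."""
    rpeak_tf = n_procs * ghz * flops_per_cycle / 1000
    return rmax_tf / rpeak_tf

print(f"Thunder: {efficiency(19.94, 4096, 1.4):.0%}")  # ~87%
print(f"VTech:   {efficiency(10.28, 2200, 2.0):.0%}")  # ~58%
```

With these assumptions Thunder lands at roughly 87% of peak and the Virginia Tech machine in the mid-to-high fifties, in the same ballpark as the figures quoted above.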
  • by Animats (122034) on Thursday May 13, 2004 @10:10PM (#9147430) Homepage
    "We sold the Inaniums! We sold the Inaniums!"
  • by Stevyn (691306) on Thursday May 13, 2004 @10:16PM (#9147469)
    yeah, that we know about. I remember the article on Google a few weeks ago that made everyone wonder just what the hell they're running over there. I wouldn't be surprised if governments kept other supercomputing clusters secret. I don't mean anything tin-foil-hattish here; I'm just thinking that some governments have test facilities that they don't let the public know about.
    • Who besides the Japanese or the British has the technology? All US national labs have them, plus the CIA, NSA, DOD, and several other government agencies. My guess is the US government has at least 35 supercomputers spread out all over the place. That's a bunch of power. Most are used in research and weapons design; the NSA's for code making and breaking. They probably have several more on order.

      Treasury could use one to figure out the tax code, nothing else has worked.
  • Heat (Score:3, Funny)

    by Rick Zeman (15628) on Thursday May 13, 2004 @10:19PM (#9147485)
    4,096 Itanium 2 processors in 1,024 nodes

    So THAT'S what's causing our heat wave!
  • Wow (Score:2, Informative)

    by 0xC0FFEE (763100)
    Here's a picture: http://doc.quadrics.com/quadrics/QuadricsHome.nsf/DisplayPages/3A912204F260613680256DD9005122C7
  • by callipygian-showsyst (631222) on Thursday May 13, 2004 @10:23PM (#9147512) Homepage
    Now you can't say you have the fastest "Thupercomputer" any more! You've been beat by Intel and Linux!
  • by Lord Kano (13027) on Thursday May 13, 2004 @10:28PM (#9147540) Homepage Journal
    that they didn't build this just to win 2 grand from distributed.net [slashdot.org].

  • by painehope (580569) on Thursday May 13, 2004 @10:33PM (#9147574)
    Yes, they're hot as hell and eat power the way Oprah eats Twinkies, and yes, Intel has handled the Itanium line poorly, but the Itanium architecture is very interesting, and it's actually very appropriate for an HPC environment. Not the part of the HPC market that clusters dominate, but the segment that Cray, SGI, HP AlphaServers, etc. have traditionally dominated. The segment that doesn't give a shit about cooling, power consumption, or price/performance, but just needs to get the job done as quickly as possible.

    Some of the coolest features of the Itanium are also some of the reasons a lot of people don't want to use it. The EPIC ISA, for example, was designed (along with the physical hardware) to expose a lot of the internal workings of the processor to the user. But rather than recompile and re-optimize their code, people would rather bitch about migration. That's fine for workstations and servers, but in an HPC environment you want the nifty features; you want to occasionally hand-tune code segments in assembler, etc.

    Anyways, I'm not a fanboy ( well, maybe an AMD and MIPS fanboy ), just wanted to get in a few honest points before everyone started shooting holes in the Itanic.
    • by slamb (119285) * on Friday May 14, 2004 @12:23AM (#9148207) Homepage
      Some of the coolest features of the Itanium are also some of the reasons a lot of people don't want to use it. The EPIC ISA, for example, was designed (along with the physical hardware) to expose a lot of the internal workings of the processor to the user. But rather than recompile and re-optimize their code, people would rather bitch about migration. That's fine for workstations and servers, but in an HPC environment you want the nifty features; you want to occasionally hand-tune code segments in assembler, etc.

      I just coded some IA-64 assembly and from what I've seen, this comment is dead-on. They've got a lot of interesting features:

      • Speculation. The idea is to do memory fetches far in advance to avoid waiting for the (much slower) memory system. You can do an LD.S operation that tells the machine something like "I might want the value at this memory address in a few instructions." It fetches it from memory, if it's in a good mood. If the address is paged out, it doesn't get it. (Instead, it sets a NaT ("not a thing") bit to tell you nothing useful is there.) Later, you do a CHK.S. If it turns out that the speculative load failed, it jumps to some "recovery" code which does the load for real.
      • Lots of registers. 128 general-purpose 64-bit registers, floating-point registers, and some specialized ones, I think.
      • EPIC (Explicitly Parallel Instruction Computing). There are different types of instructions, aimed at different execution units; in the current incarnation, there are two sets of these in each processor. You give it bundles of three instructions, more broadly divided into groups. Instructions in a group don't depend on any earlier results calculated within the group, so they can be executed in parallel.
      • Rotating registers. This lets you make different iterations of the same loop work with different registers, to take advantage of EPIC more fully.
      • Predicated instructions. There are a bunch (16? 64? I don't remember) of predicate bits, set by the CMP instruction and the like. Every instruction has an associated predicate. (p0 is hardwired to true, so you normally don't notice.) So you can do conditional execution without jumping, which is more efficient, especially if only a few instructions differ.

      If you just have a simple sequence of operations, each dependent on the one before, you can't really take advantage of these capabilities. (My code was like this. Even though performance wasn't my reason for writing assembly, it was a little disappointing that I couldn't play with the new toys.) If you're expecting these features to make Word start faster, you'll probably be disappointed.

      But if you're doing intensive computations in a tight loop, you can do amazing things. If you can get all the execution units working simultaneously, it will fly. And the features like rotating registers are designed to make that possible. You need a very good compiler or a very smart person to hand-tune it. You may need to recompile to tune if your memory latency changes (affecting how many iterations to run at once) or they come out with a new chip with more sets of execution units. But in a situation like this, none of that is a problem. They'll have applications designed to run as fast as possible on this machine. They may never be run anywhere else.
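The "instruction group" idea described in this comment can be illustrated with a toy scheduler (a hypothetical sketch, not real IA-64 tooling): independent operations can share a group and issue together, while a dependent operation must wait for a later group. A chain of operations, each depending on the previous one, degenerates to one operation per group, which is exactly why such code can't exploit the parallel execution units.

```python
def schedule(ops):
    """Greedily pack ops into EPIC-style groups.

    ops maps an op name to the set of op names whose results it needs.
    Ops in the same group have no dependencies on each other, so a wide
    machine could issue them in parallel.
    """
    done, groups = set(), []
    while len(done) < len(ops):
        ready = sorted(n for n, deps in ops.items()
                       if n not in done and deps <= done)
        if not ready:  # dependency cycle: nothing can make progress
            raise ValueError("cyclic dependencies")
        groups.append(ready)
        done.update(ready)
    return groups

# a = x + y; b = x * 2; c = a + b  ->  a and b are independent; c waits
print(schedule({"a": set(), "b": set(), "c": {"a", "b"}}))
# [['a', 'b'], ['c']]
```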

  • by watsondk (233901) on Thursday May 13, 2004 @10:34PM (#9147587) Homepage

    do they (SCO) have the nerve to go after this cluster?

    after all, they are trying extortion by lawyer against other large Linux users
  • Big Iron? (Score:5, Funny)

    by nacturation (646836) <[nacturation] [at] [gmail.com]> on Thursday May 13, 2004 @10:52PM (#9147706) Journal
    Thunder sports 4,096 Itanium 2 processors in 1,024 nodes, some big iron by any standard.

    If the government gets a hold of that, we're going to need some big tinfoil...
  • Probably OT, but... (Score:4, Interesting)

    by Trogre (513942) on Thursday May 13, 2004 @10:55PM (#9147726) Homepage
    ... if you want practically a guided tour of LLNL, watch TRON sometime. They filmed it there (the science-lab live-action stuff, anyway).

  • by nighty5 (615965) on Friday May 14, 2004 @02:52AM (#9148745)
    $2,863,104 in license fees going SCO's way!

    I can see the investors now rubbing their 2 cents together....
  • by mrjb (547783) on Friday May 14, 2004 @05:53AM (#9149368)
    This [top500.org] is the official Top 500 list of supercomputers (not updated yet, although Thunder is mentioned [top500.org] as '*possibly* the second-most powerful computing machine on the planet'). Linux moving up to second place (from fifth a little while ago, IIRC), woohoo! Only one left to beat!
