Cray XT-3 Ships

anzha writes "Cray's XT-3 has shipped. Using AMD's Opteron processor, it scales to a total of 30,580 CPUs. The starting price is $2 million for a 200-processor system. One of its strongest advantages over the standard Linux cluster is its excellent interconnect, built by Cray. Sandia National Labs and Oak Ridge National Labs are among the very first customers. Read more here."
  • Re:How big is it? (Score:4, Informative)

    by Anonymous Coward on Tuesday October 26, 2004 @03:41AM (#10629056)
    Dimensions (cabinet): H 80.50 in. (2045 mm) x W 22.50 in. (572 mm) x D 56.75 in. (1441 mm)

    Weight (maximum): 1529 lbs per cabinet (694 kg)
  • by Dancin_Santa ( 265275 ) <> on Tuesday October 26, 2004 @03:42AM (#10629058) Journal
    In this day and age of very fast computers and clusters built in our basements, there sometimes comes along a story that whispers of the computing age of days long past. Cray is one of those names that can drop a jaw at its mere utterance.

    The name is synonymous with speed and power and the unwillingness to cut corners in order to shave a few dollars off the final product. When you buy a Cray, you know you are getting top of the line hardware.

    It looks like Sandia wants to build the fastest supercomputer in the world by clustering a few of these monsters, and I have no doubt that they will. Looks like more fun articles about this in the future. :-D

    There are two prominent applications for these machines. The first is nuclear weapons simulation. Personally, I don't see the point of that. The other application is weather prediction. By feeding current weather variables into a well-written model, a supercomputer can predict future weather with a large degree of accuracy. Such an application will always be welcome.

    I think I'm going to have to fire up the old ][e, the nostalgia is killing me!
  • by jmv ( 93421 ) on Tuesday October 26, 2004 @03:44AM (#10629071) Homepage
    Opterons beat the pants off the Pentium 4s in x87 (i.e. old) FPU operations. If you want good performance, you need SSE/SSE2, both for AMD and Intel. For pure SSE, the Pentium 4s beat the Opterons mainly because of the clock speed, but for multi-processor systems, HyperTransport and the rest more than make up for that.
  • How big it is (Score:2, Informative)

    by commodoresloat ( 172735 ) on Tuesday October 26, 2004 @03:45AM (#10629075)
    from TFA -

    Dimensions (cabinet):

    H 80.50 in. (2045 mm) x W 22.50 in. (572 mm) x D 56.75 in. (1441 mm)

    Sorry to reply twice but I forgot this detail.
  • by commodoresloat ( 172735 ) on Tuesday October 26, 2004 @03:47AM (#10629082)
    You could just read on the spec page: Power: 14.8 kVA (14.5 kW) per cabinet. Circuit Requirement: 80 AMP at 200/208 VAC (3 Phase & Ground), 63 AMP at 400 VAC (3 Phase, Neutral & Ground) Cooling Requirement: Air Cooled, Air Flow: 3000 cfm (1.41 m3/s) Intake: bottom, Exhaust: top.
  • by the_2nd_coming ( 444906 ) on Tuesday October 26, 2004 @03:52AM (#10629096) Homepage
    Xserve clusters would be cheaper, but I think Cray has the edge in interconnect tech. So, if you need massive bandwidth in the system, get the Cray; if you need the next-best bandwidth at a low price, get the Xserve cluster.
  • by Shinobi ( 19308 ) on Tuesday October 26, 2004 @04:03AM (#10629122)
    No. The biggest competitor to the XT3 will be machines like the NEC SX-8, their own X1 family or the IBM p690's. They are all shared memory systems, while the Blue Gene family is not. And therein lies a whole world of difference.
  • Re:software (Score:5, Informative)

    by Coryoth ( 254751 ) on Tuesday October 26, 2004 @04:04AM (#10629124) Homepage Journal
    what kind of operation system runs on this beast?

    UNICOS is usually a safe bet. In this case the specs [] say UNICOS/lc, which is made up of "SUSE(TM) Linux(TM), Cray Catamount Microkernel, CRMS and SMW software"

    I'm not entirely clear how to interpret that, but I think it runs as follows: it runs the Catamount Microkernel as the kernel, and uses SUSE for everything else (so we have SUSE Linux, without the Linux - all of a sudden that GNU/Linux stuff starts to make sense). CRMS is their interconnect management and monitoring software, and SMW is the System Management Workstation - which I'm guessing is their administration frontend.

    It's worth noting that that's some pretty serious software - Cray has a lot of experience dealing with large systems, and you can bet the management and monitoring tools reflect it.

    This thing is to a beowulf cluster what a dual G5 PowerMac is to homebuilt PC system running Linux From Scratch. It's going to work flawlessly "out of the box" with a smooth and polished interface that lets you get done everything you want to do simply and easily. You can of course make your home built PC with LFS work just as well, it's just going to take you an awful lot of effort.

  • by Coryoth ( 254751 ) on Tuesday October 26, 2004 @04:16AM (#10629161) Homepage Journal
    So, how does this compare to running Apple's Xserve? Bang per buck? Heat? Space? Etc etc....

    There's not a lot to compare. We're talking apples and oranges. It's like comparing a PowerMac G5 with a bunch of PC parts scattered on the floor as desktop machines. Sure, you can put the PC together, load it with Linux, tinker with it to get everything working, etc., but that's a fair amount of work compared to taking the PowerMac out of the box, plugging it in, turning it on, and having everything work perfectly.

    Read the specs [], particularly with regard to the interconnect, system administration, and hardware and software reliability features. This thing is seriously engineered to be a massively parallel system, with top-of-the-line hardware and software to support and maintain it, as well as extremely impressive reliability features.

  • by Anonymous Coward on Tuesday October 26, 2004 @04:35AM (#10629208)
    It doesn't work that way, because the applications this thing will be running will be highly specialised for the architecture.

    The algorithm will decompose the workspace so that each node can compute its own slice of the problem almost independently.

    There is only a slight communications overhead, depending on what application is run. This is also offset a bit by the supposedly fast interconnect hardware.
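    The parent's point can be made concrete with a toy surface-to-volume calculation (a hypothetical illustration, not Cray code; the cubic-block layout is an assumption): each node computes every cell of its block but only exchanges the block's faces with neighbours, so communication shrinks relative to compute as the per-node block grows.

    ```python
    # Toy 3D domain decomposition estimate: each node owns a cubic block
    # of side n, computes all n^3 cells, and exchanges its six faces
    # (6 * n^2 cells) with neighbouring nodes each timestep.

    def halo_overhead(n):
        """Ratio of cells communicated to cells computed per step."""
        compute = n ** 3
        comm = 6 * n ** 2
        return comm / compute

    for n in (10, 50, 100):
        print(f"block side {n}: comm/compute = {halo_overhead(n):.3f}")
    ```

    Doubling the block side halves the ratio, which is why the overhead stays "slight" for big problems - and why a fast interconnect matters most when the per-node blocks are small.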
  • by Big Mark ( 575945 ) on Tuesday October 26, 2004 @04:39AM (#10629221)
    If Crays were built the same way as desktop dual-proc machines, then yes, the multi-CPU overhead would cripple it. Fortunately, it's designed completely differently - e.g. they use PowerPC chips to handle almost all of the inter-processor communication.

    You can't really compare something that can hold thousands of CPUs to something powered by Abit that can hold two, anyway. It's like comparing apples and a strange bug thing with tentacles.
  • by Anonymous Coward on Tuesday October 26, 2004 @05:46AM (#10629375)
    It is quite possible for kVA != kW; it all depends on the relative phase of the current and the voltage...

    14.5 kW would just be the real component of power, i.e. heat dissipated.
    This leaves - using Pythagoras' theorem - a reactive component of power of 2.96 kVAR.
  • by wronskyMan ( 676763 ) on Tuesday October 26, 2004 @05:46AM (#10629376)
    Disclaimer: IANACEBIATAPEC (I Am Not A Cray Engineer But I Am Taking A Power Engineering Course)
    It's fairly common to get kVA != kW.
    Overall power used by a load is expressed as S=P+jQ, where P is the "real" power and Q is the reactive power (capacitive/inductive from motors, fluorescent lamp ballasts, etc).

    While the "units" of S, P, and Q are all power = voltage*current, S is generally expressed in VA, P in W, and Q in VAR (volt-ampere reactive) to differentiate the variables. Because the magnitude of S = sqrt(P^2 + Q^2), S will always be greater than or equal to P (in this case, 14.8 kVA = sqrt((14.5 kW)^2 + (±2.965 kVAR)^2)).
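    As a sanity check, the spec's figures drop straight into the power triangle above (the 14.8 kVA / 14.5 kW values are from the quoted spec; the rest is just arithmetic):

    ```python
    import math

    S = 14.8  # apparent power, kVA (from the spec)
    P = 14.5  # real power, kW (from the spec)

    # Reactive power from the power triangle: S^2 = P^2 + Q^2
    Q = math.sqrt(S**2 - P**2)
    pf = P / S  # power factor

    print(f"Q = {Q:.3f} kVAR")  # ~2.965 kVAR, matching the parent
    print(f"pf = {pf:.3f}")     # ~0.980
    ```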
  • by joib ( 70841 ) on Tuesday October 26, 2004 @05:59AM (#10629406)

    There are two prominent applications for these machines. The first is nuclear weapons simulation. Personally, I don't see the point to that. The other application is in weather prediction.

    Oh, please. Buy a clue, will ya? There are lots and lots and lots of applications that use supercomputers, or could if they were more affordable. A few examples off the top of my head:

    Materials science, that is ab initio simulations, moldyn, you name it. This alone probably uses > 50% of all supercomputer CPU time in the world. By comparison, weather prediction and nuke simulations are small potatoes (or shall we say, the simulations as such are big, but the number of people engaged in weather prediction or nuke simulation is really small compared to all the supercomputing materials scientists).

    CFD, the automobile and aerospace sectors are big users.

    Electronic design.

    Seismic surveys, the oil industry uses lots and lots of supercomputers to find oil deposits.

    Biology. Gene sequencing, moldyn simulations of lipid layers and whatever.

    Climate prediction, somewhat related to weather prediction. Official purpose of the Earth Simulator.

    All of the examples above could easily use almost any amount of cpu power you can throw at them. The only thing that stands between a lot of scientists and improved understanding of the world is computing power.
  • by adzoox ( 615327 ) on Tuesday October 26, 2004 @06:05AM (#10629416) Journal
    You say it's comparing apples to oranges, but it's not really ...

    The VT Supercomputer specs vs the Cray specs page you pointed to:

    CRAY 460 GFLOPS per cabinet (96 processors @ 2.4 GHz)

    Apple - if my math is right - 420 GFLOPS (100 processors @ 2.0 GHz)

    The new specs for the specialized VT Supercluster are pretty impressive.

    Their throughput and interconnect are most likely weaker - but still VERY strong with Fibre Channel.
  • Re:imagine a... (Score:5, Informative)

    by crimsun ( 4771 ) * <crimsun@ubu[ ].com ['ntu' in gap]> on Tuesday October 26, 2004 @08:35AM (#10629819) Homepage
    It's not just hardware: the amount of non-parallelizable code in parallel applications has a tremendous impact on scalability.

    The upper bound on speedup is generally Amdahl's law []. Plainly, the efficiency approaches zero as the number of processes is increased. Generally we consider the major sources of overhead to be communication, idle time, and extra computation. Interprocess communication is considered negligible for serial programs in this context (we consider message passing). Idle time ends up contributing to overhead, because processes idle awaiting information from others. Extra computation is virtually unavoidable at some point; for instance in MPI's Single Program Multiple Data model, each process in tree-structured communication other than the root is eventually idled prior to the completion of computation, and each process determines IPC at some point based on rank.

    There are notable exceptions to Amdahl's law, however; Gustafson, Montry and Benner wrote about such in Development of parallel methods for a 1024-processor hypercube, SIAM Journal on Scientific and Statistical Computing 9(4):609-638, 1988.
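    For concreteness, here is Amdahl's law in its usual textbook form (the 99%-parallel fraction is an arbitrary assumption, not a figure from the article): speedup on n processors is 1/((1-p) + p/n), so even a 1% serial fraction caps speedup near 100 no matter how many CPUs you add, and efficiency heads toward zero just as the parent says.

    ```python
    # Amdahl's law: upper bound on speedup with parallel fraction p on n CPUs.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.99  # assumed: 99% of the work parallelizes
    for n in (2, 64, 1024, 30580):
        s = amdahl_speedup(p, n)
        print(f"{n:>6} CPUs: speedup {s:7.1f}, efficiency {s / n:.4f}")
    ```

    Gustafson's observation, roughly, is that real users scale the *problem* up with the machine, so the serial fraction of the larger run shrinks and the pessimism above doesn't always bite.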
  • by Anonymous Coward on Tuesday October 26, 2004 @08:56AM (#10629932)
    It's not a customized HT interconnect. There's a dedicated SeaStar router chip that connects via HT to the uniprocessor Opteron + RAM node, but the actual fabric connecting the SeaStars is proprietary (each SeaStar connecting to six others via 7.6 GB/s interconnects, forming a 3D grid fabric expandable to 30K+ nodes).

    That's why they can use mere 100-series Opterons: they need only one HT link per CPU, because the fabric as a whole is not based on HyperTransport.

    Really, loosely-coupled cluster my ass. This machine *is* capable of record-breaking single-task performance. Read the product pages again.
  • by Dink Paisy ( 823325 ) on Tuesday October 26, 2004 @09:01AM (#10629962) Homepage
    From the documents, it looks like it runs Linux on the management nodes and Catamount on the compute nodes. The idea is you can do what you like with the general purpose nodes, but for the compute nodes, you run a lightweight operating system that has low overhead, minimal services and predictable scheduling. BlueGene/L works the same way; it runs Linux on the management nodes and a custom operating system on the compute nodes. Compute nodes likely provide scheduling for only the number of threads that run on the node, communication through MPI and some proprietary API, and basic debugging facilities. Compute nodes probably lack normal OS services like network, disk, or even a console.
  • Re:imagine a... (Score:2, Informative)

    by ant_slayer ( 516684 ) on Tuesday October 26, 2004 @09:31AM (#10630169)
    My apologies, but I couldn't help but think that you'd be *really* lucky to get 50x out of 64 CPUs. Examine the following:

    1 CPU @ 1.00x -> 1.00 / 1 = 1.000
    2 CPUs @ 1.95x -> 1.95 / 2 = 0.975
    4 CPUs @ 3.20x -> 3.20 / 4 = 0.800
    64 CPUs @ 50.0x -> 50.0 / 64 = 0.781

    Pop that into a spreadsheet and look at the graph.
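    Or, instead of a spreadsheet, a few throwaway lines (the speedup figures are the parent's hypotheticals, not measurements):

    ```python
    # Efficiency = speedup / CPU count, for the parent's assumed speedups.
    cases = [(1, 1.00), (2, 1.95), (4, 3.20), (64, 50.0)]
    for cpus, speedup in cases:
        print(f"{cpus:>2} CPUs: efficiency {speedup / cpus:.3f}")
    ```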

    That is not linear; in fact, it's non-linear in a direction that *helps* adding more and more processors. If the decline from 4 CPUs to 64 CPUs is a mere 1.9% of efficiency compared to the 17.5% drop from 2 to 4, then, by golly, I'm going to cram hundreds of CPUs in there and watch it tail off. Hello amazing performance.

    Instead, reality is that the dynamics change. You can't evaluate "equivalent performance" to a single processor system. There is no reasonable metric with which to do so.

    -Ant Slayer-

  • by bmajik ( 96670 ) <> on Tuesday October 26, 2004 @09:45AM (#10630255) Homepage Journal
    Because, IIRC, that was the one they were only building one of, and when the government cancelled the order, that's when Cray Research went under.

  • by mrdogi ( 82975 ) <> on Tuesday October 26, 2004 @10:36AM (#10630692) Homepage
    Then you had this system that was running *in* liquid!

    Before that was the Cray-2 (a.k.a. the "World's most expensive aquarium"). In case anybody's interested, I believe they used Fluorinert as the liquid, as it wouldn't swell the PC boards, short anything out, or cause anything to corrode.

    A note: the Cray-3 was created by Cray Computer Corporation of Colorado, whereas the Cray-1 was made by Cray Research of Wisconsin. Around 1990, Seymour wanted to start working on computers using gallium arsenide instead of silicon, since it could switch faster. Cray Research didn't want to try anything so revolutionary, so Seymour headed to Colorado with a group of people and started CCC. Unfortunately, they apparently made exactly one Cray-3 [], then folded.

    Seymour Cray was quite the Übergeek.

  • by SuperQ ( 431 ) * on Tuesday October 26, 2004 @10:39AM (#10630713) Homepage
    You're leaving out a lot of stuff necessary to make a cluster:

    #1 RAM: $3000 for the G5 cluster node includes 512MB of RAM. Most places demand at least 2GB of RAM per CPU; we require 3GB of RAM per CPU in all new system purchases. This brings the node price to $6500.
    200x $6500 = $1,300,000

    #2 Racks and power: Each rack can hold about 32 machines (without getting way too hot/dense); for 200 nodes, this would be about 7 racks.
    7x $1200 = $8400

    #3 Interconnect: No HPC system is useful without an interconnect. An 80-node Myrinet system was $250,000, so at $3125/node you're looking at:
    200x $3000 (estimate) = $600,000

    #4 Networking: You need a network switch and cabling to connect all the nodes... GigE is a must these days. Let's say we go cheap with the HP ProCurve 2848 Layer 2 managed switch at $3300 each; we need 7 of those, one for each rack cabinet. With trunking we can get 4Gb back to a central switch - not too bad. Say we add $10/cable for pre-made patch cables; (length averaged) that's about $2200 in cables.
    7x $3300 + $2200 = $25,300

    #5 Disk: You quoted a bunch of XserveRAIDs without any kind of AppleCare.. with IDE RAID.. I'm not going without some kind of support on it. Oh wait.. 1 file server is NOT enough to handle 200 nodes of HPC.. and Apple doesn't have a clustered filesystem. You're going to have to go with Linux/Intel with Red Hat GFS for that one (yes, there are other options, but I know GFS).
    Say we do 4 XserveRaid's with applecare:
    4x $16,000 = $64,000
    We also need four dual-whatever Intel machines (I'll be nice and include FC cards in the price):
    4x $3000 = $12,000
    We also need an FC switch to link all the nodes:
    SanBox 8 port $5200 and 8x SFP modules $750 = $11,200
    I'll pretend we don't need GFS software support, but most places would want it. (It's another $20,000 or so, but eh.. we want a cheap solution.)
    Disk total comes to: $87,200

    Price so far: $2,020,900

    And that doesn't even include setup!
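    The parent's arithmetic checks out; tallying the estimates (all figures are the parent's, nothing new):

    ```python
    # Sum of the parent's cluster cost estimates.
    costs = {
        "nodes (200 x $6500)":          200 * 6500,
        "racks (7 x $1200)":            7 * 1200,
        "interconnect (200 x $3000)":   200 * 3000,
        "network (7 x $3300 + cables)": 7 * 3300 + 2200,
        "disk subsystem":               87_200,
    }
    total = sum(costs.values())
    print(f"total: ${total:,}")  # $2,020,900
    ```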
  • Re:software (Score:4, Informative)

    by flaming-opus ( 8186 ) on Tuesday October 26, 2004 @11:00AM (#10630907)
    This split microkernel architecture has been in use for a long time on big MPP systems like the Paragon and the T3E. The software base (Catamount/Linux) is new, but the design is old.

    Catamount is the kernel that runs on the compute nodes. It's a tiny kernel that packages up the OS service requests and sends them, over the interconnect, to an OS or I/O node, which does the real work of the operating system. Catamount is a descendant of PUMA, which came from Cougar. These are heavily derived from work done at Caltech. (I believe CMU and one of the UTexas schools also played a role, but am not sure.) The idea is that the microkernel is small and unobtrusive, and it gets the hell out of the way so the application can use the CPU as much as possible.

    The OS and I/O nodes run Linux, and provide services to the compute nodes. This probably happens in the kernel, but it could just as easily run as a user-space daemon on the OS node. (Though you might have to do some memory copies that way, which would lower performance.)

    NOTE: Though these nodes take advantage of some of Linux's features (like the Lustre file system), they do NOT necessarily implement these features for the system as a whole. They probably provide a minimal set of features necessary for the sorts of problems that the XT3 runs. All the scheduling work that has gone into more recent Linux kernels is of little use, as the compute nodes have their own scheduler, probably more closely tied to the batch dispatcher than to the Linux kernel. To say that the system runs Linux is true, but a little misleading. It's a very different Linux than what runs on my desktop, and it's used in a very different way.
  • Wall Street (Score:3, Informative)

    by Moraelin ( 679338 ) on Tuesday October 26, 2004 @11:25AM (#10631153) Journal
    You have to understand though that the stock market's expectations have nothing to do with whether the company is doing well or not.

    Surreal case in point: at one point 3Com had a lower market value than its Palm daughter-company. Basically, if you subtracted the value of the Palm shares, the whole rest of 3Com was actually worth a _negative_ value to the stock market.

    And we're talking divisions which were making a tidy profit. Yet they were apparently worth a _negative_ number.

    No, it's not a joke. Roll it around a bit in your head to fully grasp how completely sad and idiotic that is. Real profits, real assets, worth a negative number of dollars. Stupid.

    Or at the other end of the spectrum you have Microsoft whose stock market value is _way_ above the value of its assets. Without paying any dividends or acquiring much in the way of long term assets, people just flocked to drive the price up and make Bill Gates rich. Basically to give their money to Bill Gates and not even get a Windows CD in return.

    The thing is, however, the stock market value has _nothing_ to do with a company's value or profits. A share is only worth as much or as little as people want to believe it is. It is like Monopoly (the board game, not MS;) money: if tomorrow we decide that the blue bills are worth 10% more and the red bills are worth 10% less, who's to argue with that?

    The _only_ reason the stock market on the whole goes up is basically that every year people dump more money into it. It goes up just because people want to believe it's going up, and put their money where their belief is.

    And the way those values fluctuate, now that just has to do with hype and greed.

    The stocks worth buying are the ones that'll make you a profit, typically meaning they'll rise in value. The stocks worth selling are the ones that won't.

    Except with no intrinsic value it becomes a game of guessing what the other lemmings will buy (driving the price up), and what the other lemmings will sell (driving the price down.)

    One thing that makes lemmings buy is the prospect of growth. Hence, hype is good. Hence, yes, shares in a cancerous tumor would sell like hot cakes and rocket sky high in price.

    Hence, conversely, shares in a company which doesn't grow or otherwise cause more lemmings to buy are not worth holding on to, because they won't bring a profit. If Microsoft truly plateaued and didn't pay dividends either, regardless of how much profit it made at that point, its shares would plummet. Because between holding onto a share of MS that doesn't bring a profit and investing in some startup that grows quickly, the second promises more of a ROI.

    Now that's all a bit of an over-simplification.

    Of course, there are other factors. Like just paying dividends to give people a reason to hold onto your shares even without massive hype and growth. (See why MS started doing that when its market explosion slowed down.) Or like fraud: "analysts" just telling lemmings what to buy, and thus driving up the price of the shares owned by the "analyst" and his/her clients. Etc.

    But as a quick intro to the madness of the stock market, it will have to do.
