
Cray Wins $52 Million Supercomputer Contract

The Interfacer writes "Cray and the U.S. Department of Energy (DOE) Office of Science announced that Cray has won the contract to install a next-generation supercomputer at the DOE's National Energy Research Scientific Computing Center (NERSC). The systems and multi-year services contract, valued at over $52 million, includes delivery of a Cray massively parallel processor supercomputer, code-named 'Hood.'"
  • by edwardpickman ( 965122 ) on Thursday August 10, 2006 @09:44PM (#15886006)
    Hood is within specs for Vista. A big relief for Cray, since they weren't sure it'd meet the memory specs for Vista.
    • by Bruce McBruce ( 791094 ) on Thursday August 10, 2006 @09:54PM (#15886071)
      With those 39 terabytes of memory, they might set their sights on the stuff dreams are made of: Vista Premium.
    • Overheard at Microsoft:

      Boss1: Cray has developed a computer that actually runs Vista fast

      Boss2: I see. Let's remove that "optimization" box from the Gantt chart, then...

      Boss1: But customers will complain that they can't afford to buy a supercomputer.

      Boss2: What? It runs AMD! How can it be expensive... those morons.
    • Why is the DOE such a major American government organization? What have they provided the American people with, in light of our upcoming $5/gallon gasoline (after the Prudhoe Bay pipeline gets shut down)?

      And why is the DHS (which failed miserably during Katrina) more prominent and widely known?

      • Re:Just announced (Score:1, Informative)

        by Anonymous Coward
        Guess somebody's a little mad about having to leave their H2 parked.

        The part of the DOE that uses supercomputers does nuclear simulations. They don't give a crap about your unwise car choice.
      • Re:Just announced (Score:3, Informative)

        by crgrace ( 220738 )
        The DOE runs our system of national laboratories and is the successor to the Atomic Energy Commission. They aren't all that concerned with gasoline, as that is a small part of their work. They mostly work on nuclear weapons, fusion research, high-energy physics, renewable resources, etc. I used to work at Lawrence Berkeley National Lab designing subatomic particle detectors. I couldn't give a rat's ass about how much you spend on gas.

  • $52 million over a couple of years... not much to keep a high-end computer company running on.
  • Oh, wow! Does it run Virtual Machines?
  • by Anonymous Coward
    That the DOE isn't hoodwinked by using such an energy-consuming device to research energy consumption.
  • Pinky... (Score:3, Funny)

    by Null Nihils ( 965047 ) on Thursday August 10, 2006 @09:50PM (#15886033) Journal
    are you pondering what I'm pondering?

    I think so, Brain! NERSC! POIT!
  • by free space ( 13714 ) on Thursday August 10, 2006 @09:50PM (#15886037)
    The system uses thousands of AMD Opteron processors...


    Because of its power requirements, Cray's only possible customer was the Department of Energy :)

  • by Jerk City Troll ( 661616 ) on Thursday August 10, 2006 @09:51PM (#15886048) Homepage
    Cray finally figured it out. I have been saying for years: HPC/Beowulf clusters are about building machines around problems.

    That is why clusters are such a powerful paradigm. If your problem needs more processors/memory/bandwidth/data access, you can design a cluster to fit your problem and only buy what you need. In the past you had to buy a large supercomputer with lots of engineering you did not need. Designing clusters is an art, but the payoff is very good price-to-performance. A good article on this topic is Cluster Urban Legends [clustermonkey.net], which explains many of these issues.
    • Try doing massive matrix calculations on a cluster. Large datasets need to share the same memory, and only a supercomputer can provide that.
      • Not like I care. I just copy-and-pasted that comment from some random Cray article on Slashdot a while ago. I just want to boost karma so I can keep trolling without getting banned.
      • Is this really a cluster? Nothing about the system says conventional cluster anyway; just about everything is proprietary to Cray. If it's a cluster, then it's certainly not in the original spirit of Beowulf clustering: it does not link together inexpensive servers with non-proprietary networks. Each Opteron core's HyperTransport connection links directly into a very high-performance network.
        • From TFA:

          "The system uses thousands of AMD Opteron processors running tuned, light-weight operating system kernels and interfaced to Cray's unique SeaStar network."

          That's a cluster.
          • I don't know what SeaStar is, but if it's anything like NumaLink/CrayLink, it's not a cluster. There's a very fine line somewhere, and I'm not sure where it is. But look at, say, an Origin 3000 server: it has 4 or more node boards in one chassis that all communicate via CrayLink/NumaLink. Does that make it a cluster? OK, now scale it out to 128 chassis; is it a cluster then?

            I think the distinction actually is that with CrayLink/NumaLink, there's only one OS running, and the other chassis are all controlled by t
            • It's pretty much a cluster. The SeaStar is a message-passing engine; it's distributed memory, and the OS doesn't share any state (except for a library that does filesystem indirection).
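              To make "message passing, distributed memory, no shared state" concrete, here is a minimal generic MPI sketch in C. It illustrates the programming model only; it is not Cray's actual SeaStar interface, which TFA doesn't document.

              /* Generic MPI sketch of the distributed-memory model: each rank
               * owns its own memory, and the only sharing is explicit messages
               * over the interconnect. Illustration only, not the SeaStar API. */
              #include <mpi.h>
              #include <stdio.h>

              int main(int argc, char **argv)
              {
                  int rank, nprocs;
                  double local, total;

                  MPI_Init(&argc, &argv);
                  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

                  local = (double)rank;   /* each node computes on private data */
                  MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                             MPI_COMM_WORLD);

                  if (rank == 0)
                      printf("sum over %d ranks = %g\n", nprocs, total);
                  MPI_Finalize();
                  return 0;
              }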
            • Mmmm... yeah, it's a cluster. I think the distinction is one of convenience, not reality. Multiple processors run multiple instances of the same code; the only 'real' distinction between separate machines 'clustered', running identical kernels, and two chips on the same board, is the ease with which the processors can communicate. The application must be designed to utilize dual processors, just as the application must be designed to utilize multiple systems in a cluster. The latency is lower, and the cost
              • I don't think I could call this a cluster. The interactive nodes of an XT3/XT4 are based on Linux, but (for now) the other 19k nodes will run an OS called Catamount. It can only run one thread, the user app. There is no user-space OS at all. Almost all actions that you would think an OS/kernel should do have to get compiled into the user's program. Linux cluster, this is not.
      • Perhaps I misunderstood your intent, but I feel it necessary to point out to you that the parent's discussion was directed at *this* cluster - the Cray in TFA. From TFA:

        "The system uses thousands of AMD Opteron processors running tuned, light-weight operating system kernels and interfaced to Cray's unique SeaStar network. "

        That's a cluster. It's also a supercomputer. Maybe you're looking for the word 'Mainframe'? Regardless, the article the parent links to is a really good discussion of clusters and their v
      • Large datasets need to share the same memory, and only a supercomputer can provide that
        Not anymore [wikipedia.org]

    • Can you imagine a Beowulf Cluster of these bad boys? >:)
    • wtf is that DreamHost referrer crap in your URL? Even if you were one of the top supercomputer experts on the planet and you figured out something everyone else overlooks, you should be modded down.

      Slashdot is NOT digg.com; you can't advertise referrer URLs here. I think everyone should report your type to the Slashdot admins, and they should put an end to this before trivial referrer URLs become fashionable again.

      Paste some stuff from some trivia source like wiki to post something everyone will find interesting with referrer url and
    • The XT3 is not really a cluster. True, it's a message passing machine with an interconnect between commodity processors, but that interconnect is very highly integrated into the system design, and the software stack is very customized. The line around what is, and is not, a cluster is a fuzzy line, but this is not a cluster in the language of government supercomputer procurement where 'cluster' means 'commodity cluster'.

      NERSC and a number of other DoD/DOE labs are buying a new generation of "true" supercom
  • Hood? (Score:5, Funny)

    by nsushkin ( 222407 ) on Thursday August 10, 2006 @10:09PM (#15886142)
    It's named "Hood"? What are they going to calculate, protein folding in ice cream? ;)
  • I thought SGI owned Cray? Wouldn't that mean they made a deal with SGI?

    Even so, I doubt $52 million is enough to save SGI in the long haul, especially if anything more than a few percent goes to actual hardware/research costs (and it will).
  • good to see... (Score:3, Interesting)

    by Connie_Lingus ( 317691 ) on Thursday August 10, 2006 @10:39PM (#15886331) Homepage
    the Cray brand making a comeback in the supercomputer arena. I fondly remember, back in my engineering/CS days, looking longingly at those Cray supercomputers (was that a couch around it?!? COOL!) in awe and just wondering what they could possibly be computing with 512M of RAM and a 2G super-cooled processor. SUPER COOLED!

    Then it was back to my PDP-11... reality bit.
  • I don't have any concept of scale when it comes to price for these things. Is this a big contract as far as supercomputing contracts go? The biggest? Average?

    Will this thing be cooled with that cool nonconductive liquid goo stuff that it all just bathes in?
    • $52 million is ultra-cheap in the supercomputer world.

      Brett
      • Huh? The COLSA G5-based supercomputer, which is currently ranked #21 in the world, only cost $5.8 million, so I wouldn't call 10x that much ultra-cheap. The IBM JS21 at #23 is in the same ballpark, with a retail cost per CPU of $2,500 and 2,048 processors (I couldn't find exact pricing for the unit, only the blades). Sure, breaking into the top 10 is expensive, but that's to be expected when even #10 has over 5K processors.
    • by Anonymous Coward
      I work in the industry. 'Course it's easy for an AC to say that, isn't it?

      $52M is rather large nowadays. At least, for a 'commodity' part cluster it is. For a 'vector' supercomputer, it may be only medium sized.

      You can easily break the top 50 for less than $10M. A couple thousand nodes, each with two dual-core Opterons/Xeons, InfiniBand or Myrinet (maybe 10GigE), and a compiler that optimizes better than gcc... no problem.

      That being said, NERSC is a pathologically tough customer. Cray will have to work
  • specmarks? (Score:1, Funny)

    by Anonymous Coward
    Back in the day, one of the selling points of the soon-to-be-released Cray 3 was that it was so fast, it could do an infinite loop in only 4 days! How have things progressed since '91? Does an infinite loop take only a day or a few hours now?

  • Does anyone know who else bid on this contract? Was BlueGene a contender?

    It would be interesting to know the other bids and their performance ...
    • Re:Who else bid? (Score:3, Interesting)

      by Kadin2048 ( 468275 )
      That was my first reaction: somebody at IBM is in deep shit.

      It seems like they had a lock on the last few big DoE supers (and supercomputer sales in general); now all of a sudden we see Cray getting back in there. I wonder if IBM stepped on somebody's toes and got given the boot on this one (it's small, maybe this is just a spanking), or if they've gotten behind in the research and power/dollar worlds because they were doing so well for so long? Or is this just the government trying to spread the love aroun
      • Nah, DoE and DoD like to spread the wealth around enough to keep a couple of suppliers alive and at least somewhat healthy. They don't like the idea of having only one supplier to turn to, because they know that would cost them more than throwing some contracts at the less robust suppliers. BTW, it's not just in computing that this happens, but in all defense contracting.
      • Well, the Thinking Machines story is a little more complicated than that.
        http://www.inc.com/magazine/19950915/2622.html [inc.com]

        Basically, IBM and Cray got caught by surprise when the MPPs, of which TMC was just one, came onto the market. Eventually they got their act together and put out the SP and the T3D, which were both good products. Thinking Machines got hit by the same post-Cold-War lull in supercomputer buying that hit everyone else, and they just weren't big enough to ride it out. Even at their peak, they w
    • Re:Who else bid? (Score:3, Informative)

      by cannonfodda ( 557893 )
      I would imagine that IBM probably did bid. They would be crazy not to for $52M.

      But... "the Hood system installed at NERSC will be among the world's fastest general-purpose systems".

      NERSC is looking for general-purpose computing systems to fill the needs of 2,500 users. Blue Gene is blindingly fast at some things, but general-purpose it ain't. I've benchmarked both the XT3 and Blue Gene with a set of general scientific codes, and the Opteron delivers much better general price/performance for a representati
      • The advantage of the Blue Gene is that it is relatively simple to go low-level and hand-tune your application at the assembly level. But I agree that a BG isn't the best solution to everything, and there are still some issues IBM needs to work on.
    • This is actually related to this story [slashdot.org] that ran on Slashdot a month ago. Turns out the Inquirer article that everyone ripped to shreds for being light on details was right all along. (I saw sanitized excerpts from e-mails regarding the incident, so I can tell you that Intel's Woodcrest chips performed abysmally in the DOE's testing compared to the Opterons.) The competitor that lost was IBM and the reason was because of problems with Woodcrest. The supercomputer in question will be running on 24,000 quad c

      • Wait a minute! Are you saying that IBM bid a machine based on Woodcrest? If IBM bid anything here, it would have been either a BlueGene or perhaps something like the ASCI machines, which are conventional PowerPCs with a fast interconnect. Hard, no, impossible, to believe that IBM would have bid an Intel processor.
        • I agree that it sounds crazy. I'm just passing along the information I was given. Your impression that there's no way IBM would bid an Intel chip makes a lot of sense; it's not been their standard M.O. in the past. All I know is that Cray won the bid with Opterons, the e-mails I read gave unfavorable reviews of Woodcrest chips, and Woodcrest is supposed to kick the snot out of Opteron. In any case, the fact that Cray won the bid with an Opteron-based supercomputer should be more than a litt

  • ...systems and multi-year services contract, valued at over $52 million

    Ummm... no offense to Cray, but that's pretty f*ing lame.

    This, ladies and gentlemen, is why we need to 'encourage' our kids to desire scientific jobs.
    • That's a nice sentiment but you can't teach your kids to desire scientific jobs. You can teach your kids about science and see if they take to it. No matter your enthusiasm, your kids might lean to the artsy-fartsy, literature, or driving a bus.
  • by Anonymous Coward on Thursday August 10, 2006 @11:46PM (#15886669)
    Unfortunately, this seems to be one of those topics on which Slashdot bias and ignorance come out in full force.

    * Clusters can not compete with supercomputers. They aren't even in the same market space. Cray doesn't make clusters, and clusters have not taken away their business.

    * Cray doesn't take off-the-shelf hardware and sell it as fancy clusters. Actually look into the details of these machines: while the processors sometimes are off the shelf, much of the surrounding hardware and software is custom.

    * This $52 million contract is one of many that Cray has. They also recently won a $200 million contract, and they are a contender in the DARPA HPCS program, which could be worth a lot more if they get it. They aren't dying.

    * They aren't owned by SGI any longer. They were bought from SGI by Tera, who renamed themselves Cray.

    * The Top500 list is nonsense. It is based on one benchmark (LINPACK). That benchmark doesn't stress the interconnect much, and it can allow clusters to appear to compete with supercomputers if you manage to ignore all the other factors. The number of teraflops has very little to do with performance. For a more well-rounded and thought-out measurement of top systems, check out HPCC's website: http://icl.cs.utk.edu/hpcc/hpcc_results.cgi [utk.edu]

    * Blue Gene doesn't kick Cray's ass. See the above and then see how it really performs overall. In some areas it does better, and in others it just gets destroyed. Depending on the real-world problem, a full-size Blue Gene may not even be able to perform as well as a much smaller Cray.

    If you don't know what you are talking about look it up before posting. Just because it's the common belief doesn't mean there is any truth to it!
    • THANK YOU for setting the record straight. You are correct; clusters are very different. Some types of problems can be broken into mostly separate chunks of work; these work well on clusters. But for those types of problems which depend on a lot of inter-processor communication (e.g. the results of one computation are required for a significant number of subsequent computations), conventional clusters don't cut it. In these cases everything comes down to the bus between processors -- how they are connec
    • It is always good to see a /.er bust out with a few facts rather than the usual bad puns, stale jokes, half-baked opinions and misconceptions.

      Cray making a comeback!? Now if that don't beat all.

      What's next? Borland selling a good, cheap Pascal compiler again?

    • I dunno man. First of all, asking people to mod you up is kinda lame.

      Secondly, whether the computers Cray sells are "off the shelf" can be argued depending on how you look at it. Today's Crays are not the fully proprietary machines of yesteryear. They all use AMD Opteron processors and leverage the onboard memory controller and HyperTransport bus to make building a processor fabric simple. The main custom items in the system are the "interconnect routers" that tie all the HyperTransport busses together
      • "Benchmarks of any multiuse system are never universal. They best they can do for a large list like that is to use a benchmark that can reasonably represent a common use of such systems."

        The LINPACK benchmark used for the Top500 list is a basic, dense matvec solver algorithm. (See Wikipedia: http://en.wikipedia.org/wiki/LINPACK [wikipedia.org].) This algorithm used to be the core of most scientific codes, back in the days when you would simply use the computer to solve a simple (but large) set of equations. In the las
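        For reference, the kernel such a benchmark hammers looks roughly like this in C (a toy, untuned sketch; real benchmark runs use blocked, vendor-tuned variants at enormous N):

        /* Toy dense matrix-vector product y = A*x, the kind of kernel
         * LINPACK-style benchmarks stress. Untuned illustration only. */
        void matvec(int n, const double *A, const double *x, double *y)
        {
            for (int i = 0; i < n; i++) {
                double sum = 0.0;
                for (int j = 0; j < n; j++)
                    sum += A[i * n + j] * x[j];  /* row i of A dotted with x */
                y[i] = sum;
            }
        }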
        • You are correct to argue that using a LINPACK-based benchmark for a molecular dynamics problem is foolish.

          However, the arena of molecular dynamics is one in which clusters, and MPPs in general, are easily the better choice over a monolithic supercomputer. On the one hand, you can make each node an automaton describing a single particle in a very object-oriented fashion. On the other hand, you can make each node represent a spatial cell whose boundaries interact with those of its nearest-neighb
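          As a rough illustration of the spatial-cell idea, here is a hypothetical 1-D version in C (real MD codes decompose in 3-D and exchange halo data with neighboring nodes):

          /* Hypothetical 1-D cell decomposition for molecular dynamics.
           * A particle in cell c interacts only with cells c-1, c, c+1,
           * so the node owning cell c talks only to its two neighbors. */
          #include <math.h>

          int cell_of(double x, double box_len, int ncells)
          {
              int c = (int)floor(x / box_len * (double)ncells);
              if (c < 0) c = 0;                 /* clamp boundary stragglers */
              if (c >= ncells) c = ncells - 1;
              return c;
          }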
    • * Clusters can not compete with supercomputers. They aren't even in the same market space. Cray doesn't make clusters, and clusters have not taken away their business.

      That's not entirely true. Supercomputers will have a solid market into the foreseeable future, but they certainly are facing competition from improvements in clusters.

      Sometimes interconnect speed can be reasonably traded-off in exchange for a significantly reduced price, or for additional CPU power, local RAM, etc. Often, problems that are gener

    • The AC has a point about benchmarks, but I would say it's debatable that the XT3 doesn't qualify as a supercomputer...

      To quote a colleague of mine: "The interconnect IS the machine!"

      The primary difference between a supercomputer and a cluster is the degree of integration between the computing elements. You can demand that a supercomputer MUST have a crossbar switch or a similar closely coupled interconnection method, but that only scales so far. For a good example, have a look at the Earth Simulator [jamstec.go.jp], you'r

      The Top500 list is nonsense. It is based on one benchmark (LINPACK).


      The other problem with it is that it only counts systems that people want you to know exist. For example, it's a safe bet that the NSA has multiple systems that would qualify but are not listed. There are probably a significant number of systems like that in the world - so calling it the 'top 500' is just silly.
    • I'd say go ahead and mod him up.

      He's right. For *ALL* computing tasks, using the right tool for the job can increase performance exponentially. Slashdotters should know this: a 400 MHz GPU can outperform a 3 GHz CPU on vector and matrix operations by huge leaps and bounds.

      Clusters are just another tool that works very well for very specific jobs, and very poorly for others. These jobs are mainly those that can be massively parallelized (i.e., brute-forcing a math equation: Computer A should try these valu
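      The kind of split being described, as a minimal sketch in C (a hypothetical helper, not from any real project): each worker brute-forces its own contiguous slice of the search range, with no communication needed until the end.

      /* Hypothetical static partition of a brute-force search space:
       * worker k of nworkers tests values in [*lo, *hi). */
      void my_slice(long total, int k, int nworkers, long *lo, long *hi)
      {
          long chunk = total / nworkers;
          *lo = (long)k * chunk;
          *hi = (k == nworkers - 1) ? total : *lo + chunk; /* last takes remainder */
      }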
    • Unfortunately this seems to be one of the topics that the slashdot bias and ignorance comes out in full force on.

      I agree completely.

      * Clusters can not compete with supercomputers. They aren't even in the same market space. Cray doesn't make clusters, and clusters have not taken away their business.

      This is not exactly a wrong statement, but it is incredibly broad. First off, Cray does make clusters. At a fundamental level, the basic separate-box clusters connected by Ethernet are the exact same thing as a

  • The Hood supercomputer at NERSC will consist of over 19,000 AMD Opteron 2.6-gigahertz processor cores...

    The Ultimate Gaming Machine!!!
  • Just think (Score:1, Troll)

    by Slithe ( 894946 )
    Imagine a Beowulf cluster of ... oh wait!
  • Holy cow, I didn't even know Cray was still in business! And, does it run Linux?
  • Hood? (Score:2, Funny)

    by Tavor ( 845700 )
    Will it be promptly sunk by a German supercomputer named Bismarck?
  • "The system uses thousands of AMD Opteron processors" I will create a new Supercomputer using the new Intel chips and call it the Bismark
  • Cray has failed miserably in the marketplace. The solutions it produces are completely out of whack with the cost of solving the problem. This $52 million is just welfare for PhDs in Computational Fluid Dynamics, Computational Electromagnetics, etc. People building ivory towers in the sky with their heads in the clouds...

    America would be better served if we sank the money into creating interoperability standards and ways to increase competition in the computational industries. Every c

    • Well, supercomputers have become a dime a dozen. Or rather, clusters have become a dime a dozen. However, a lot of the really demanding tasks that high-end supercomputer users need to do are not terribly well served by clusters. NERSC has cluster systems; they know how to use them and what they are good for. The fact that they are buying a Cray indicates that their needs were not well met by the clusters.

      Furthermore, programming a modern Cray is not very much like programming a Y-MP. The code is structured
