Announcements

$24.5 Million Linux Supercomputer 379

An anonymous reader wrote in to say "Pacific Northwest National Laboratory (US DOE) signed a $24.5 million contract with HP for a Linux supercomputer. This will be one of the top ten fastest computers in the world. Some cool features: 8.3 trillion floating-point operations per second, 1.8 terabytes of RAM, 170 terabytes of disk (including a 53 TB SAN), and 1,400 Intel McKinley and Madison processors. Nice quote: 'Today's announcement shows how HP has worked to help accelerate the shift from proprietary platforms to open architectures, which provide increased scalability, speed and functionality at a lower cost,' said Rich DeMillo, vice president and chief technology officer at HP. Read details of the announcement here or here."
This discussion has been archived. No new comments can be posted.

  • Other OSes (Score:5, Interesting)

    by frizz ( 91565 ) on Wednesday April 17, 2002 @09:34AM (#3357890)
    What OSes do the other top 10 supercomputers run?
    • Re:Other OSes (Score:5, Informative)

      by hawkstone ( 233083 ) on Wednesday April 17, 2002 @10:01AM (#3358153)
      1. IBM ASCI White,SP Power3 375 MHz
      Lawrence Livermore National Laboratory

      It runs AIX.

      2. Compaq AlphaServer SC ES45/1GHz
      Pittsburgh Supercomputing Center

      Haven't used it, but I'm guessing Tru64.

      3. IBM SP Power3 375 MHz 16 way
      NERSC/LBNL

      Once again, AIX.

      4. Intel ASCI Red
      Sandia National Labs

      A poor home-grown OS (no offence) called Cougar or TFlops which doesn't even support X11 or sockets.

      5. IBM ASCI Blue-Pacific SST,IBM SP 604e
      Lawrence Livermore National Laboratory

      Can you say AIX?

      6. Compaq AlphaServer SC ES45/1GHz
      Los Alamos National Laboratory

      I assume Tru64.

      7. Hitachi SR8000/MPP
      University of Tokyo

      No idea. Sorry.

      8. SGI ASCI Blue Mountain
      Los Alamos National Laboratory

      IRIX.

      9. IBM SP Power3 375 MHz
      Naval Oceanographic Office

      Don't know for sure, but you can bet it's AIX.

      10. IBM SP Power3 375 MHz 16 way
      Deutscher Wetterdienst

      Again, I'm sure it's AIX.

      All Unix. No, no Linux on there yet, but Pacific Northwest will be right up there near the top, and Lawrence Livermore is also probably getting a Linux cluster of almost that size pretty soon. That will make two in the top few slots.

      No Windows on these puppies! ;)

      • Re:Other OSes (Score:2, Interesting)

        by rutledjw ( 447990 )
        I may be ignorant, but I is a college graduate. Doesn't Hitachi compete (to a degree) with IBM in the Big Iron class of machines? Wouldn't that suggest an OS/390-like OS? Just guessing.

        Another thing that I just thought about; maybe someone can answer it for me. What about OS/390? I thought that was their big mainframe OS. Is this a speed issue with the OS, clustering limitations (certainly not), or maybe ease of use (people would rather deal with *nix than a 'frame OS)?

        Any input?

        • Re:Other OSes (Score:5, Insightful)

          by markmoss ( 301064 ) on Wednesday April 17, 2002 @12:09PM (#3359008)
          What about OS/390? I thought that was their big mainframe OS.

          Supercomputer != Mainframe

          Supercomputers are just for calculations on massive arrays. Mainframe OSes are designed for government and large-corporation databases and the like. They are heavily loaded with "frills" that are unneeded on a pure number-cruncher; they improve database reliability and do many other useful things in the data-processing environment, but they're just wasted cycles on a supercomputer.

      • 4. Intel ASCI Red Sandia National Labs

        A poor home-grown OS (no offence) called Cougar or TFlops which doesn't even support X11 or sockets.

        Yeah, everybody knows any computer that can't support netris or even plain old tetris is poor indeed.

      • You are correct on the Compaq entries all being Tru64. The AlphaServers will run Linux and Compaq will sell you one, but no buyers yet among these big machines.

        Are any search engines running Windows yet? I would assume the msn.com search engine runs Windows, but I don't know for sure... If so, I'd believe it's the only one.

        Some facts speak so loudly that MS marketing can't drown them out, no matter how hard it tries.

      • Re:Other OSes (Score:2, Insightful)

        by charmer ( 205543 )

        4. Intel ASCI Red
        Sandia National Labs

        A poor home-grown OS (no offence) called Cougar or TFlops which doesn't even support X11 or sockets.


        Why does a parallel machine need X11, or poor (slow) communication primitives like sockets? Why should a full OS run on all the processors? The OS really needs to get out of the way of the computations, where every microsecond counts.

        charmer
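
        For the curious, application codes on machines like this typically talk to the interconnect through a message-passing library rather than sockets. A minimal sketch of the style, assuming MPI (the de facto standard for this class of machine; build and launch commands vary by site):

          /* ping-pong between two ranks: the application uses MPI message
             passing, not a socket API, to reach the interconnect */
          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char **argv)
          {
              int rank;
              double buf = 42.0;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              if (rank == 0) {
                  MPI_Send(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
                  MPI_Recv(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
                  printf("round trip done\n");
              } else if (rank == 1) {
                  MPI_Recv(&buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
                  MPI_Send(&buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
              }
              MPI_Finalize();
              return 0;
          }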
      • GOOGLE! (Score:5, Informative)

        by Jagasian ( 129329 ) on Wednesday April 17, 2002 @11:10AM (#3358578)
        What about Google?!? It should qualify as a Linux supercomputer. For those who don't know, Google, the popular search engine, uses a huge cluster of PCs running Linux.
        • Re:GOOGLE! (Score:5, Interesting)

          by Julian Plamann ( 449854 ) on Wednesday April 17, 2002 @11:32AM (#3358758) Homepage Journal
          Yep, Google runs on a cluster of approximately 4,000 1U servers. Each one can be pulled and replaced, with automatic configuration and loading of the operating system and software, within about 20 minutes, I believe. Pretty neat setup.
      • Linux IS Unix (Score:4, Informative)

        by leereyno ( 32197 ) on Wednesday April 17, 2002 @11:13AM (#3358601) Homepage Journal
        AIX is Unix
        BSDI is Unix
        HP-UX is Unix
        Solaris is Unix
        Sun-OS is unix
        Digital Unix...is Unix
        FreeBSD is Unix
        NetBSD is Unix
        OpenBSD is Unix
        A/UX is unix
        Xenix is unix
        Unixware is unix
        SCO Unix is Unix
        NextStep is unix
        Unicos is unix
        Irix is unix
        Ultrix is unix

        and yes, Linux is Unix.

        It may not be Unix(tm), but it certainly is unix, at least as much as any of the above operating systems are. Whether or not an OS has one line of code from Thompson and Ritchie or BSD is irrelevant. What matters is what kind of system its code implements. The code for Linux, including all of the GNU components and other userland parts, implements an operating system that is at least as similar to any of the above-mentioned OSes as they are to one another. I don't know exactly how compliant Linux is with the various POSIX standards, but I have heard it called POSIX-compliant, and I know that NO version of unix is completely compliant.

        If it looks like a duck, walks like a duck, and quacks like a duck... it's a duck.

        Lee
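
        To make the duck test concrete: a program written against only the POSIX interfaces should build and run unchanged on every system in that list, Linux included. A minimal sketch using nothing but standard POSIX calls:

          /* fork, exec, wait: pure POSIX, so this compiles as-is on AIX,
             IRIX, Solaris, the BSDs, and Linux alike */
          #include <stdio.h>
          #include <sys/types.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void)
          {
              pid_t pid = fork();               /* POSIX process creation */
              if (pid == 0) {
                  /* child: print the kernel name (AIX, IRIX, Linux, ...) */
                  execlp("uname", "uname", "-s", (char *)NULL);
                  _exit(127);                   /* only reached if exec failed */
              }
              wait(NULL);                       /* parent: reap the child */
              return 0;
          }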
        • AIX is Unix
          BSDI is Unix
          HP-UX is Unix
          Solaris is Unix
          Sun-OS is unix
          Digital Unix...is Unix
          FreeBSD is Unix
          NetBSD is Unix
          OpenBSD is Unix
          A/UX is unix
          Xenix is unix
          Unixware is unix
          SCO Unix is Unix
          NextStep is unix
          Unicos is unix
          Irix is unix
          Ultrix is unix
          Linux is Unix.

          But just remember: GNU is Not Unix
        • but remember: XINU Is Not Unix.
  • by FortKnox ( 169099 ) on Wednesday April 17, 2002 @09:37AM (#3357924) Homepage Journal
    ... Cause if they put WinXP Pro on it, the project would cost:
    $24,500,399.98
    Which was juuust over budget!

    BTW - Can you put in code during the "post slashdot story" to automatically close the <I> tags? I don't think that would be too difficult to add...
  • Sigh... (Score:4, Funny)

    by buckeyeguy ( 525140 ) on Wednesday April 17, 2002 @09:37AM (#3357928) Homepage Journal
    all that capability and all I can think about is how much power the dang thing would consume... it'll take one big, big UPS/power conditioner.
  • or did the author forget to end an italic tag in this story?
  • Uhmm... (Score:5, Funny)

    by qurob ( 543434 ) on Wednesday April 17, 2002 @09:38AM (#3357939) Homepage

    Scheduled to be fully operational in early 2003...


    Won't it be obsolete by then?
  • increased scalability, speed and functionality at a lower cost

    A lower cost? Hell... maybe I'll pick one up after work today. With a price tag of only $24.5 million, you're practically making money with this purchase (or, as the case may be, losing money by not taking advantage of this offer)!

    Sheesh. I think 'reduced cost' is more appropriate.

  • Cool (Score:2, Funny)

    by jhines0042 ( 184217 )
    All I can say is:

    "I have GOT to get me one of these!"
    -- Will Smith, "Independence Day"

    (42 Karma, don't mod me)
  • painting the football field...

    great good googly moogly.

    awesome. Let's just hope it functions as designed; this could be a huge publicity boost for Linux....
  • and already four comments about a Beowulf cluster of these things. C'mon, Slashdot, you can do better than that
  • by psxndc ( 105904 ) on Wednesday April 17, 2002 @09:42AM (#3357975) Journal
    So THIS is what we'll need to run PerlBox. :-)

    psxndc


  • Let's see the story when they make one with 1,800 AMD processors!

    • Nonono, don't you remember? AMD is evil this week [slashdot.org].
    • by Indras ( 515472 ) on Wednesday April 17, 2002 @10:06AM (#3358192)
      Let's see the story when they make one with 1,800 AMD processors!

      Palo Alto, CA: In the news today, 26 researchers who had been constructing a new government supercomputer running on 1,800 AMD processors were killed when they fired up the machine for a test run. Apparently, they had forgotten to turn on the water pumps for the computer's cooling system before starting it up. Thousands of megawatts of electricity were instantly turned into heat, resulting in a contained explosion that vaporized all the researchers instantly and turned the building into a pile of melted plastic, metal, and concrete.

      One local, who wishes to remain anonymous, said when interviewed, "It was crazy! I mean, the whole building just melted. The heat waves coming off the building were staggering; it was all I could do just to run into the nearest air-conditioned Starbucks and catch my breath."
      • And if we built it out of old Cyrix chips, we'd melt a hole to the center of the earth. Cyrix chips run hotter than anything short of a V8 engine. When the fan on one of mine broke, it melted the foam between the CPU and the fan in seconds.
  • Sweet (Score:3, Funny)

    by ImaLamer ( 260199 ) <john.lamar@gma[ ]com ['il.' in gap]> on Wednesday April 17, 2002 @09:44AM (#3357990) Homepage Journal


    That answers my question of what I would have done if I'd won the Powerball last night.

  • by gosand ( 234100 ) on Wednesday April 17, 2002 @09:47AM (#3358015)
    1.8 Terabytes of RAM

    So does that mean it has 3.6 Terabytes of swap space?

  • Insanely expensive (Score:4, Interesting)

    by Jeff Knox ( 1093 ) on Wednesday April 17, 2002 @09:48AM (#3358029) Homepage
    Wow, do the math: that's $17,500 per processor (24.5 million divided by 1,400). What's the deal with that? Even with top-of-the-line components and the fastest interconnects available (Dolphin, Myrinet, whatever), that's a $7 million computer at most (5 grand a machine; with SCI you could even build something much faster than an 8-teraflop box, and hell, a dual-Athlon or Intel-based system would be cheaper and whale on that). Software? Nothing, although they are probably going to use Scyld or something and pay the bucks. I'm willing to bet that half that cost is pure administrative and contract overhead and support.
    • by Raleel ( 30913 ) on Wednesday April 17, 2002 @09:54AM (#3358089)
      AFAIK, half the cost of each node is the interconnect, which has 1-3 microsecond latency and gigabit bandwidth. The $24.5 million figure also includes a huge storage array on Fibre Channel (like 150 terabytes, I believe). And note, each node has 12 gigs of RAM.
      • And note, each node has 12 gig of ram.

        Sorry, chief, that should be 1.2GB of RAM.

        What none of your calculations include is the standard 200% markup that companies apply when supplying something custom-built to the gummit. Anyone remember the $7,000 screwdriver?
      • There are lots of options that can get you 1-3 microseconds of latency and 1 Gbit of bandwidth. Of course, you'll likely have some serious software issues. Most programmers like using the good old IP stack to send and receive data, but trying to push 100+ MB/s through an IP stack will pretty much hog the entire CPU. From my experience with IP over Fibre Channel, the throughput is CPU-limited because of the IP stack. There's no use in building a supercomputer if you're going to spend all the processing power just shoving data around the interconnect. That means using lightweight networking protocols and writing a lot more custom software, which is very expensive since the development costs aren't spread across multiple copies. This is likely one of those cases where a lot of it can be built with off-the-shelf hardware, but putting it together as a system is more than 90% of the cost. Fibre Channel is actually a good choice for this because you can run multiple protocols over the same network. The company I work at has Linux drivers that will do IP and a lightweight protocol simultaneously. We even have them for both x86 and PPC Linux, and we have better multiple-LUN and RAID support than other Linux drivers I've seen. I wonder if we're involved in this. Maybe I'll go talk to our Fibre Channel product manager.
      • Each node would have about 1.29 gigs of RAM, by my math: 1,800 gigs of RAM divided by 1,400 processors comes out to ~1.29 gigs each. The hard drives are also not that much: a terabyte of drives is about $8,000 using high-quality SCSI (OK, they use Fibre Channel, but that's a close enough price bracket). That still doesn't account for $17.5 grand per processor. That's a pretty high cost per gigaflop.
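
        A quick sanity check on this sub-thread's arithmetic, using only the figures quoted in the story (contract price, total RAM, CPU count):

          /* back-of-envelope numbers from the announcement */
          #include <stdio.h>

          int main(void)
          {
              double contract_usd = 24.5e6;  /* $24.5M contract */
              double ram_gb       = 1800.0;  /* 1.8 TB of RAM */
              int    cpus         = 1400;

              printf("cost per CPU: $%.0f\n", contract_usd / cpus); /* ~$17,500 */
              printf("RAM per CPU:  %.2f GB\n", ram_gb / cpus);     /* ~1.29 GB */
              return 0;
          }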
    • It's for the package, not just the hardware. It could even include tax and shipping.
  • by popeyethesailor ( 325796 ) on Wednesday April 17, 2002 @09:52AM (#3358067)
    1) 8.4 TFLOPS lets you find the sum of 4.2 + 4.2, 168 trillion times a second.
    2) 170 TB can hold 42.5 thousand times the contents of the entire Library of Congress book collection (plus all the MP3s you downloaded).
    3) 1 TB of RAM may let you run as many as 13 Windows applications simultaneously.
  • Effect on linux ? (Score:3, Interesting)

    by nilsj ( 266737 ) on Wednesday April 17, 2002 @09:52AM (#3358068)
    How will this affect Linux?
    Will HP come up with something revolutionary in Linux development while constructing this system, or is the tech used conventional, just on a bigger scale?
    • Re:Effect on linux ? (Score:2, Interesting)

      by djbentle ( 553091 )
      The leader of the ASCI Blue program spoke at a graduate seminar I attended. He mentioned that when they first got AIX up and running on the over 6,000 processors, they found "once-a-year bugs" (bugs that on a normal installation would only appear roughly once a year) at a rate of one every few days for quite a while. At the very least, this machine ought to help flush out a lot of very rare bugs.

      David
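
      The scaling is easy to see: if each of N nodes independently trips a once-a-year bug, the machine as a whole should see one roughly every year/N. A sketch using the 6,000-processor figure from the post:

        /* expected gap between "once a year" bugs across many nodes,
           assuming independent, evenly spread failures */
        #include <stdio.h>

        int main(void)
        {
            double hours_per_year = 365.25 * 24.0;
            int    nodes          = 6000;   /* figure from the parent post */

            printf("expected gap: %.1f hours\n", hours_per_year / nodes);
            /* ~1.5 hours under this naive model */
            return 0;
        }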
  • by Christopher Thomas ( 11717 ) on Wednesday April 17, 2002 @09:53AM (#3358078)
    They're awfully confident of McKinley not following in the footsteps of Merced if they've placed this order.

    This raises an interesting question, though. If you want to build a high-performance compute cluster nowadays... what do you build it out of? The old answer, Alpha, doesn't really apply any more.

    Sun is optimized for communications bandwidth, not FLOPS, and I'm not sure if SGI even _offers_ machines that huge. HP is betting on IA-64. And x86 is completely unsuitable, for memory-space reasons if nothing else.

    What am I missing?
      • NUMA. Go look it up. :-)

        NUMA doesn't touch the address space problem, or the processor-type problem. It's just a way of arranging the memory hierarchy.
        • I believe that IBM's Power4 is still faster than anything else for serial execution by quite a large margin.

          Also, WRT address space, I would think memory access on these things is quite heavily abstracted for any userland tasks. When you reach outside of any one machine on the cluster, conventional memory access methods probably go out the window anyway. The ASCI Red was just a bunch of P6's soldered together after all, and it doesn't seem to be having too many problems.
          • Also, WRT address space, I would think memory access on these things is quite heavily abstracted for any userland tasks. When you reach outside of any one machine on the cluster, conventional memory access methods probably go out the window anyway. The ASCI Red was just a bunch of P6's soldered together after all, and it doesn't seem to be having too many problems.

            I seriously doubt that any x86 cluster uses a unified address space from any given task's point of view.

            Abstracting memory accesses could let you make your real address space a window into a larger one, but that would have some pretty nasty overhead.
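
            One way to picture that window idea, and its overhead, is mapping slices of a data set far larger than the process's address space one at a time. A hedged sketch using plain POSIX mmap (the file name and sizes are hypothetical):

              /* walk a multi-gigabyte file through a 256 MB window, so a
                 32-bit process can touch data beyond its own address space;
                 the repeated map/unmap is exactly the overhead in question */
              #include <stdio.h>
              #include <fcntl.h>
              #include <unistd.h>
              #include <sys/mman.h>

              #define WINDOW (256UL * 1024 * 1024)

              int main(void)
              {
                  int fd = open("huge.dat", O_RDONLY);  /* hypothetical data file */
                  if (fd < 0) return 1;
                  off_t filesize = lseek(fd, 0, SEEK_END);
                  double sum = 0;

                  for (off_t off = 0; off < filesize; off += WINDOW) {
                      size_t len = (filesize - off < (off_t)WINDOW)
                                       ? (size_t)(filesize - off) : WINDOW;
                      double *w = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);
                      if (w == MAP_FAILED) return 1;
                      for (size_t i = 0; i < len / sizeof(double); i++)
                          sum += w[i];          /* touch data inside the window */
                      munmap(w, len);           /* slide the window forward */
                  }
                  printf("sum = %g\n", sum);
                  close(fd);
                  return 0;
              }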
    • IBM RS/6000s with Power4 cores? Since the Power3 is the dominant chip in the current top 10 (hell, top 20), I would guess its bigger and better brother will be there soon. Also, what about Sledgehammer? Could we see another mega-cluster like ASCI Red (which is still #4 despite being based on PPro 150s)?
    • SGI (Score:2, Informative)

      by mapnjd ( 92353 )
      SGI certainly does sell machines with more processors than this: the SGI ASCI Blue Mountain has 6,144 CPUs.

      Re: your less-than-insightful comment on x86: Intel's ASCI Red has 9472 x86 CPUs. Guess what - they don't share 4GB memory...

      Like the other poster said: look up NUMA.

      nic
      • Re: your less-than-insightful comment on x86: Intel's ASCI Red has 9472 x86 CPUs. Guess what - they don't share 4GB memory...

        *sigh*.

        If you're dealing with problems where you don't need to have the entire data set visible to all processors, great; use x86.

        If you need to map the entire address space, you need more than the 36 or so bits that x86 offers you.
    • McKinley was designed by HP rather than Intel (which designed Merced, aka Itanic), so it may actually perform well, considering HP's PA-RISC work.
    • If you want to learn more about 64-bit processors, I suggest you read this article:

      Extreme Tech article [extremetech.com]
  • Supercomputer(s) (Score:3, Insightful)

    by totallygeek ( 263191 ) <sellis@totallygeek.com> on Wednesday April 17, 2002 @09:57AM (#3358118) Homepage
    The problem I have with calling these huge clusters supercomputers is that they don't really fit the mold of the term. I prefer to call them supercomputing networks. When I think of a supercomputer, I think of a single entity that is hugely multi-processor or multi-boxed within one enclosure. Such systems usually have matrixed processing technology and perform a specialized task for the hardware wrapped around them.

    I am impressed, however, by any of these clusters, and amazed at the cost savings. But you do have other concerns with a huge cluster: redundancy, heat, energy usage, space requirements, etc.

    • Re:Supercomputer(s) (Score:2, Informative)

      by afidel ( 530433 )
      Since almost none of the top-500 supercomputers are true old-style supercomputers, I doubt many people would agree anymore. Vector supercomputers are expensive (at least as much R&D as Intel, but spread across a few tens of thousands of CPUs instead of millions), they are finite in their expansion, and they are only extremely useful for a handful of problems (although those are the ones that most bother "normal" computers). Clusters of commodity systems using high-performance interconnects have been the way HPC has been moving since the mid-'90s: there are currently no traditional supercomputers in the top 10, and only four in the top 25. Redundancy is taken care of by scheduling algorithms that can tolerate a few lost nodes. Heat? Since the old Crays used liquid immersion cooling, I doubt a cluster is any worse than any other solution. And yes, space is a consideration, but most institutions that have HPC systems will either build them their own building or will be removing an old supercomputer that does considerably less and takes up more room (remember, computers keep getting denser in terms of flops per square foot).
      • The only problem with these "high performance interconnects" is that they're too generic to get clustered boxes working on any serious HPC codes. The dedicated high bandwidth/low latency interconnect in a Cray T3E (1994) still smokes any Myrinet (high bandwidth, high latency) serial interconnect linking commodity processors today.

        When you need your "supercomputer" to have more synergy than simply sharing the same power supply, people still go with traditional big iron like cray, sgi, ibm, and nec. These codes require tight coupling and sharing of data between nodes and processors and can't afford to spin NOP cycles as the latency over Myrinet kills their performance.

        Yes, clusters do run some codes extremely well: the ones that don't require much node-to-node communication and only use the interconnect to set up the executables locally. RC5 and SETI would be great examples. But what if an RC5 key depended on the results of the keys from its nearest 8 neighbors? Suddenly the cluster interconnects are swamped and REAL performance becomes a low percentage of peak.
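
        That nearest-neighbor dependency is exactly the halo-exchange pattern tightly coupled codes use. A minimal 1-D sketch, assuming MPI: every step blocks on the interconnect, which is why latency rather than peak FLOPS sets the pace.

          /* 1-D halo exchange: each rank swaps boundary cells with its
             neighbors every step before it can compute */
          #include <mpi.h>

          #define N 1024

          int main(int argc, char **argv)
          {
              int rank, size;
              double cell[N + 2];                /* local data + 2 ghost cells */

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &size);
              for (int i = 0; i < N + 2; i++) cell[i] = rank;

              int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
              int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

              for (int step = 0; step < 100; step++) {
                  /* two blocking exchanges per step: interconnect latency
                     is paid on every iteration */
                  MPI_Sendrecv(&cell[1], 1, MPI_DOUBLE, left,  0,
                               &cell[N + 1], 1, MPI_DOUBLE, right, 0,
                               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                  MPI_Sendrecv(&cell[N], 1, MPI_DOUBLE, right, 1,
                               &cell[0], 1, MPI_DOUBLE, left,  1,
                               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                  /* ... update cell[1..N] using the ghost cells here ... */
              }
              MPI_Finalize();
              return 0;
          }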
  • by Komarosu ( 538875 ) <nik_doof@ni3.14159kdoof.net minus pi> on Wednesday April 17, 2002 @10:04AM (#3358173) Homepage

    "8.3 Trillion Floating Point Operations per Second, 1.8 Terabytes of RAM, 170 Terabytes of disk, (including a 53 TB SAN), and 1400 Intel McKinley and Madison Processors."


    Microsoft finally releases the baseline specifications for their next-generation operating system...

  • by qurob ( 543434 ) on Wednesday April 17, 2002 @10:09AM (#3358213) Homepage

    What about this one? [wired.com]

    3:00 a.m. March 22, 2000 PST
    The University of New Mexico and IBM are teaming up to build the world's fastest Linux-based supercomputer.

    Named "LosLobos", the new supercomputer is scheduled to be fully operational by the summer


    What's the current status?
  • What's "a computer" (singular)? The "details" links are a little short. 1,400 processors, wow. How many kernels? 1? 1,400? What's the topography? Will it use resources completely dynamically, or can you split it into fixed side sub-units? If you can hot swap parts, can you turn off e.g. half of it and still feed the other half problems? Are various parts of it drawing from independend power sources? Is there a single point of control, or are there multiple master processes?

    What I'm getting at is: at what point does a multiple processor "supercomputer" start to be indistinguishable from a "distributed computing network". Imagine a Beowulf cluster of SETI@home networks, for example. ;-)


  • Most supercomputers have been running Unix (and its many variants) for a long time. Unix has always seemed able to handle multiple processors efficiently. This is just the rich man's version of a Beowulf cluster :)
  • 'Open' (Score:4, Insightful)

    by Ed Avis ( 5917 ) <ed@membled.com> on Wednesday April 17, 2002 @10:42AM (#3358425) Homepage
    'Open architectures'? But it's going to be running Intel's proprietary IA-64 family, where the USPTO has even granted patents on certain CPU instructions. HP's claim would ring truer if they'd gone with IA-32 (which has at least two competing suppliers) or SPARC (which you can license from some half-baked consortium).

    Unfortunately, there is no fully open hardware platform at the moment, and closed hardware is less of a problem than closed software, but this still sounds like marketspeak.
  • "This will be one of the top ten fastest computers in the world."

    Anyone else find it amusing that the link to the top 10 fastest computers in the world appears to be slashdotted?

    Pib.
  • Imagine a Beowulf cluster of these babies
  • The world's largest supercomputer is being built as we speak at various campuses around the world. It's a multipart system, with various clusters linked together at the different campuses. If you're interested, I've covered the basics of the system below.

    TeraGrid is the name of the soon-to-be world's largest computing cluster, scheduled for completion in 2002. It will contain approximately 3,300 Itanium(TM) and McKinley processors on IBM servers running Linux, connected through a Qwest fiber-optic network. Once completed, the TeraGrid will be capable of a massive 13.6 teraflops and will have access to 450-600 terabytes of data.
    This is a huge step (for Intel, at least) toward acceptance of the Itanium processor in the server market. Intel is fueling the program by providing optimized compilers and software as well as various customized tools.

    It is being funded by a $53 million grant from the National Science Foundation. Various researchers will have access to the system to perform a variety of simulations. Possible uses include:

    - Molecular modeling for disease detection
    - Drug discovery
    - Automobile crash simulations
    - Climate and atmospheric simulations
    - Any other approved scientific research purposes

    The TeraGrid will be unique because it will link together various computing clusters at different locations rather than hosting them all at the same location. Globus is providing open-source protocols that will determine how the grids communicate with each other. These open-source protocols will create a "plug-and-play" effect, whereby more machines can easily be added to the network.

    The largest section of the TeraGrid will be hosted at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign. There will also be portions of the TeraGrid at the University of California San Diego, Argonne National Laboratory, and the California Institute of Technology.
  • shows how HP has worked to help accelerate the shift from proprietary platforms to open architectures

    Last I checked, only Intel makes Itanium-architecture chips, chipsets, and firmware, and all the machines are Intel reference designs. How is this not a proprietary platform, again?

    Even SPARC is less proprietary than this. It's unfortunate that Intel and HP can blatantly lie and people will eat it up.
  • It may have fancy hardware, but is it any good in a fight [theonion.com]?

  • How long before distributed computing networks, such as those used in the projects by United Devices, SETI@home, and KaZaA :-P, are included in the supercomputing list?

  • ...will it run KDE or GNOME???

  • that's it
  • I thought Microsoft and Unisys had all the answers for high-performance computing.
  • So, will they have to buy 1400 Windows licenses then throw them in the trash, like the rest of us do?

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...