Technology

With Linux Clusters, Seeing Is Believing

Roland Piquepaille writes "As the release of the latest Top500 list reminded us last month, the most powerful computers are now reaching speeds of dozens of teraflops. When these machines run a nuclear simulation or a global climate model for days or weeks, they produce datasets of tens of terabytes. How do you visualize, analyze, and understand such massive amounts of data? The answer is now obvious: with Linux clusters. In this very long article, "From Seeing to Understanding," Science & Technology Review looks at the technologies used at Lawrence Livermore National Laboratory (LLNL), which will host IBM's BlueGene/L next year. Visualization will be handled by a 128- or 256-node Linux cluster, with each node containing two processors that share one graphics card. Meanwhile, EVEREST, built by Oak Ridge National Laboratory (ORNL), has a 35-million-pixel screen driven by a 14-node dual-Opteron cluster sending images to 27 projectors. Now that Linux superclusters have almost swallowed the high-end scientific computing market, they're building momentum in high-end visualization as well. The article linked above runs nine pages when printed and contains tons of information; this overview focuses more on the hardware deployed at these two labs."
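
A quick back-of-the-envelope check on the EVEREST numbers quoted above. The 9-by-3 arrangement of 1280x1024 projectors used below is an assumption chosen because it matches the quoted 35 million pixels; the summary itself does not describe the layout:

```python
# Rough check of the EVEREST display numbers: 27 projectors, ~35 Mpixels,
# driven by a 14-node dual-Opteron cluster (figures from the summary above).
# Assumed layout (not in the summary): a 9 x 3 grid of 1280 x 1024 projectors.
cols, rows = 9, 3
proj_w, proj_h = 1280, 1024
nodes = 14

total_pixels = cols * rows * proj_w * proj_h
print(f"wall resolution  : {cols * proj_w} x {rows * proj_h}")
print(f"total pixels     : {total_pixels / 1e6:.1f} million")   # ~35.4 million
print(f"projectors/node  : {cols * rows / nodes:.1f}")          # ~2 per node
```
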
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday December 13, 2004 @12:46PM (#11073032)
    I know this is a stupid question, but what exactly is a Teraflop? The first thing that comes to mind is someone doing a belly flop and hitting concrete...
  • Re:Uh huh ... (Score:3, Insightful)

    by superpulpsicle ( 533373 ) on Monday December 13, 2004 @12:55PM (#11073136)
    Sigh... another jealous M$ fanboy who hates linux cause his career relies on running windows and clusterpatchupdate.exe.

  • by Anonymous Coward on Monday December 13, 2004 @12:57PM (#11073152)
    So, if I've got this straight, Slashdot drives the banner ad traffic, real journalists write the content, and all Roland has to do is rip off a few articles, then sit in the middle and collect the checks. How do I get a sweet gig like that?
  • Really... (Score:5, Insightful)

    by grahamsz ( 150076 ) on Monday December 13, 2004 @01:09PM (#11073249) Homepage Journal
    Now that Linux superclusters have almost swallowed the high-end scientific computing market...

    While some simulations parallelize very well to cluster environments, there are still plenty of tasks that don't split up like that.

    The reason clusters make up a lot of the Top 500 list is that they are relatively cheap and you can make them faster by adding more nodes - whereas traditional supercomputers need to be designed from the ground up.
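
The point about tasks that don't split up cleanly is essentially Amdahl's law: if only a fraction of a job can run in parallel, adding nodes stops helping long before you run out of nodes. A minimal illustration in Python (the 90% and 99% figures are arbitrary examples, not numbers from the article):

```python
# Amdahl's law: speedup on n nodes when only a fraction p of the work
# parallelizes. Illustrates why "just add more nodes" only helps codes
# that split up well across a cluster.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (16, 128, 1024):
    print(f"{n:5d} nodes: {speedup(0.90, n):5.2f}x (90% parallel)   "
          f"{speedup(0.99, n):6.2f}x (99% parallel)")
# Even 99%-parallel code tops out around 91x on 1024 nodes;
# 90%-parallel code never gets past 10x no matter how many nodes you add.
```
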
  • by zapp ( 201236 ) on Monday December 13, 2004 @01:15PM (#11073296)
    G5 nodes do have excellent performance, but don't assume OSX is all they can run.

    We at Terra Soft have just released Y-HPC, our version of Yellow Dog Linux, with a full 64-bit development environment, and a bunch of cluster tools built in.

    I'm not much of a marketing drone, but since I'm part of the Y-HPC team, I had to put in a shameless plug. Bottom line is, it kicks OSX's ass any way you look at it.

    Y-HPC [terrasoftsolutions.com]
  • by RazzleFrog ( 537054 ) on Monday December 13, 2004 @01:19PM (#11073333)
    Besides the fact that you are (please forgive me) comparing apples and oranges, your sample size is way too small to use as conclusive evidence. Until we start seeing Xserve clusters in a few more places we can't be sure of the cost benefit.
  • by vsack ( 558342 ) on Monday December 13, 2004 @01:23PM (#11073374)
    You have to take the costs with a grain of salt. They built the original machine for $5.2M. They then upgraded all the nodes from PowerMac G5s to Xserve G5s for $600K. Even if you assume that the $5.2M was a fair price for their original system, the upgrade price was an absolute gift from Apple. The cost per node to upgrade was about $550 (see the quick check after this comment). Since they moved from non-ECC RAM to ECC RAM (4GB/node), the memory upgrade alone should have cost more than that.

    Vendors will often give away hardware in order to break into a new market. This is incredible marketing for Apple. Who cares if they eat a few million for the press they've gotten?
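
The per-node upgrade figure in the comment above is easy to sanity-check. The 1,100-node count is an approximation taken from elsewhere in this discussion:

```python
# Quick check of the quoted Xserve G5 upgrade cost per node.
upgrade_cost = 600_000   # dollars, from the comment above
nodes = 1_100            # approximate node count mentioned later in the thread
print(f"~${upgrade_cost / nodes:.0f} per node")   # -> ~$545, i.e. "about $550"
```
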
  • by roxtar ( 795844 ) on Monday December 13, 2004 @01:27PM (#11073412) Homepage Journal
    But on the other hand, problems which require immense amounts of calculation will always exist, and I don't see how advances in VLSI or some other technology will eliminate these kinds of problems. So what I actually believe is that, yes, to some extent we may go back to single cores, but imagine the power of those single cores working together. In my opinion, even if new technology does arrive, clusters are here to stay.
  • by Anonymous Coward on Monday December 13, 2004 @01:28PM (#11073418)
    I bet that NCSA actually ran something though. That's something that the VT machine never really appeared to do...
  • Rpeak, not Rmax (Score:5, Insightful)

    by daveschroeder ( 516195 ) * on Monday December 13, 2004 @01:29PM (#11073427)
    Look here [top500.org].

    The speed you quoted is the theoretical peak, not the actual maximum achieved in a real world calculation (like the Top 500 organization's use of Linpack).

    System X's equivalent theoretical peak is 20.24 TFlops (the arithmetic is sketched after this comment).

    I'm also not indicting Linux clusters in the least; they've clearly shown they can outperform traditionally architected and constructed supercomputers for many tasks, with the benefit of using commodity parts - at commodity pricing. All I'm saying is that there's a new player here, and it's a real contender, and has done a lot for very little money...which was the whole goal of Linux clusters in this realm in the first place.

    (Also, as I said, the volunteer labor model is irrelevant - let's just pretend it was professionally installed for an additional $1M, or even $2M if that would satisfy you. It's still several million dollars cheaper, with 3 TFlops greater performance. These are BOTH rackmount clusters with similar numbers of nodes and processors, running a commodity OS with fast interconnects. There are differences, yes, and perhaps even differences in goals. But looking past that, price/performance for something like this is still an important metric.)
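
For reference, the 20.24 TFlops Rpeak figure cited above falls straight out of the node count and clock speed. This is only a sketch: the 4 flops per cycle per processor assumes the PowerPC 970's two fused multiply-add units, and the node count and clock speed are the commonly quoted figures for System X rather than anything stated in the comment itself:

```python
# Back-of-the-envelope Rpeak for System X, matching the 20.24 TFlops cited above.
nodes = 1_100            # dual-processor Xserve G5 nodes (commonly quoted figure)
cpus_per_node = 2
clock_hz = 2.3e9         # 2.3 GHz
flops_per_cycle = 4      # assumption: 2 fused multiply-add units x 2 flops each
rpeak = nodes * cpus_per_node * clock_hz * flops_per_cycle
print(f"Rpeak = {rpeak / 1e12:.2f} TFlops")   # -> 20.24 TFlops
```
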
  • by 59Bassman ( 749855 ) on Monday December 13, 2004 @01:38PM (#11073514) Journal
    Truly no offense intended, but...

    I've tried installing YDL on a small G5 cluster. It was a PITA to get running (3 installs before I was able to get the X server running right). And still I can't find any fan control. After 5 minutes the fans spool up to "ludicrous speed" and stick there.

    I really want to like YDL. I've been talking to the folks who do OSCAR about trying to get OSCAR to support YDL. But I'm not sure how it will work out yet, at least until I can figure out how to turn down the fans!

  • by saha ( 615847 ) on Monday December 13, 2004 @01:41PM (#11073539)
    Clusters have proven to be cost-effective, but they require more labor to optimize code to work in that environment. It's easier to have the system and the compiler do the work for you in a single-image system. This article addresses those issues and concerns: single image shared vs distributed memory in large Linux systems [newsforge.com]
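
To make the shared-memory vs. distributed-memory contrast concrete: on a single-image system the compiler (or a simple directive) can often parallelize a loop for you, while on a cluster the programmer has to split the data and communicate explicitly. Below is a minimal sketch of the cluster side using mpi4py; it is a generic illustration, not code from the linked article:

```python
# Distributed-memory version of a trivial reduction: each node sums its own
# slice, then the partial sums are combined with an explicit MPI call.
# On a shared-memory, single-image machine this would just be one loop.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1_000_000
chunk = n // size                      # assume n divides evenly, for simplicity
local = np.arange(rank * chunk, (rank + 1) * chunk, dtype=np.float64)

local_sum = local.sum()                          # work on this node's slice
total = comm.allreduce(local_sum, op=MPI.SUM)    # explicit communication step

if rank == 0:
    print(f"global sum = {total:.0f}")
```

Run it with something like mpirun -np 4 python sum.py. The data decomposition and the allreduce are the programmer's problem here, which is exactly the extra labor the comment above is talking about.
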
  • by Sai Babu ( 827212 ) on Monday December 13, 2004 @01:58PM (#11073695) Homepage
    IMO, computer-aided visualization is overrated. Sure, it's good in a production environment, but the mental effort of visualization is a tremendous aid to imagination. There's no way to computerize epiphany.
  • by downbad ( 793562 ) on Monday December 13, 2004 @02:21PM (#11073916)
    Some of Slashdot's editors are millionaires. I highly doubt that they would be sweating $647.
  • by hackstraw ( 262471 ) * on Monday December 13, 2004 @02:51PM (#11074275)
    I know there will be a dozen predictable responses to this, deriding System X, Virginia Tech, Apple, Mac OS X, linpack, Top 500, and coming up with one excuse after another. But won't anyone consider the possibility that these Mac OS X clusters are worth something?

    You're right!

    First, System X, or the "Big Mac," was thrown together so that people like us would talk about it and to get a good standing on the November 2003 Top 500 list. They did an excellent job at this.

    Now for some reality. The system is not yet operational.

    When it was first thrown together, everyone "in the know" (myself included) questioned how this was going to work without a reliable memory subsystem, and the VT people responded that they were going to write software to correct any hardware errors, and we said OK, whatever. Then they said, hmm, we kinda need a reliable memory subsystem, so let's rip out all 1,100+ machines and start over with these new Xserve boxes that have ECC memory in them.

    This system has not come up yet with the new Xserves, according to their website [vt.edu].

    Now, I'm going to make a comment on Linpack. Linpack, like all benchmarks, is really good at measuring that benchmark's performance. It is a good benchmark, but it is also one that does not require much RAM per node to run (a rough sketch of what it actually needs follows this comment). Some applications do need a good amount of RAM per node, and since RAM costs money, the cost adds up very quickly and the cost per teraflop goes up accordingly.

    As for the comparison between System X and NCSA's Tungsten cluster: personally, I don't know why the Tungsten cluster cost more, because the Mac cluster has more RAM per node and each node should have been cheaper in general. The NCSA cluster uses Myrinet, which I know is expensive, but I don't know how it compares to the InfiniBand equipment on the Macs. Supposedly, the InfiniBand interconnects were what got System X on the Top 500 list with such good results, or at least that is what the head of the project told me.

    It's popular here on Slashdot, because many of the readers are younger and inexperienced (and have no money), to praise anything that costs less, and extra brownie points go to an underdog like AMD or Linux. In the real world, however, people will actually pay extra to ensure that something works. Working equipment may seem superfluous to the dorm-room Linux guru, but trust me, I know what it's like to work with equipment that cost about $1 million and doesn't work. We could have gone with the second bidder at $1.2 million and it would have worked. Yes, we "saved" $200,000, but we also wasted well over $500,000 when you consider that over 50% of the equipment is faulty and many people's time has been wasted.
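
To put rough numbers on the Linpack/RAM point above: HPL factors a dense N x N double-precision matrix, so its footprint is about 8*N^2 bytes spread across all the nodes, and N can be chosen to fit whatever memory is available. A sketch with illustrative numbers (the 1,100-node count is the System X figure used earlier in this thread; nothing here is taken from either machine's actual runs):

```python
# How much memory per node does a Linpack (HPL) run of size N really need?
# The dense matrix is N x N doubles = 8 * N**2 bytes, spread over the nodes.
def hpl_gb_per_node(n: int, nodes: int) -> float:
    return 8 * n**2 / nodes / 1e9

for n in (100_000, 300_000, 600_000):
    print(f"N = {n:>7,}: {hpl_gb_per_node(n, 1_100):5.2f} GB per node on 1,100 nodes")
# Even a fairly large N = 300,000 run needs well under 1 GB per node, which is
# why a good Linpack number says little about applications that want 4 GB/node.
```
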
  • by vsack ( 558342 ) on Monday December 13, 2004 @03:26PM (#11074649)
    Doesn't that assume that /. readers RTFA?

    We all know that's not true.

"God is a comedian playing to an audience too afraid to laugh." - Voltaire

Working...