With Linux Clusters, Seeing Is Believing
Roland Piquepaille writes "As the release of the latest Top500 list reminded us last month, the most powerful computers are now reaching speeds of dozens of teraflops. When these machines run a nuclear simulation or a global climate model for days or weeks, they produce datasets of tens of terabytes. How do you visualize, analyze, and understand such massive amounts of data? The answer is now obvious: with Linux clusters. In this very long article, "From Seeing to Understanding," Science & Technology Review looks at the technologies used at Lawrence Livermore National Laboratory (LLNL), which will host IBM's BlueGene/L next year. Visualization will be handled by a 128- or 256-node Linux cluster, with each node containing two processors sharing one graphics card. Meanwhile, EVEREST, built by Oak Ridge National Laboratory (ORNL), has a 35-million-pixel screen driven by a 14-node dual-Opteron cluster sending images to 27 projectors. Now that Linux superclusters have all but swallowed the high-end scientific computing market, they're building momentum in high-end visualization as well. The article linked above runs nine pages when printed and contains tons of information. This overview focuses on the hardware deployed at these two labs."
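A quick check on those EVEREST numbers: 27 projectors at SXGA resolution (1280 x 1024) comes to about 35.4 million pixels, so the 35-million-pixel wall is presumably tiled from SXGA-class projectors.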
Re:Mac OS X has similar benefits (Score:5, Informative)
By contrast, NCSA's surprise entry in the November 2003 list, Tungsten, achieved 9.82 Tflops for a $12M asset cost.
When I looked here [uiuc.edu], I found this: "Tungsten entered production mode in November 2003 and has a peak performance of 15.36 teraflops (15.36 trillion calculations per second)."
To me, that looks faster than System X, not slower.
Let's see: NCSA stands for "National Center for Supercomputing Applications". "NCSA [uiuc.edu] is a key partner in the National Science Foundation's TeraGrid project, a $100-million effort to offer researchers remote access ..."
Looks as if the NCSA has a huge budget. I'd guess that "gold-plated everything" and "leave no dollars unspent" are basic specs for everything they buy.
What can we learn about Virginia Tech? How about this [vt.edu]:
In addition to the volunteer labor, I'd guess that Virginia Tech had very different design goals, in which price was a factor. NCSA's bureaucracy probably accounted for a lot of the extra $6M they spent. Different designs and goals probably had a lot to do with the rest of the price difference, but I suspect that a bureaucratic procurement process was the main cause of the higher price of the Xeon system.

Yes, System X and the Apple hardware are pretty neat, but don't use the price/performance ratio of these two systems as a metric for the relative worth of Linux and OS X clusters.
It's unfair and meaningless to compare volunteer labor, academic pricing, and scrounging on a limited budget to bureaucratic design, bureaucratic procurement, and an unlimited budget.
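For what it's worth, the arithmetic bears this out: $12M for Tungsten's 9.82 sustained Tflops works out to roughly $1.2M per Tflop, while System X's commonly quoted ~$5.2M price and 10.28 Tflops come out closer to $0.5M per Tflop (assuming those widely reported figures).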
Re:Roland Piquepaille and Slashdot (Score:3, Informative)
Are you actually Roland Piquepaille? If so, that's a really neat trick to drive traffic to that site. If not, then he may be thankful for your comment after all.
You would think so (Score:3, Informative)
Re:Big Screen! (Score:3, Informative)
Here's what you'll need at minimum:
- one Linux box per display (or per pair of displays), each with a reasonably capable OpenGL graphics card
- the projectors or monitors themselves, plus something rigid to mount and align them on
- a network connecting the boxes
- DMX (Distributed Multihead X) [sourceforge.net] on a head node to bind all of the displays into one big logical X screen
If you've built a large setup (where "large" means "more than eight screens"), OpenGL performance will suffer. In that case you can also install Chromium [sourceforge.net], which works with DMX to provide a more efficient path for the OpenGL commands. [The DMX GLX proxy broadcasts the GL commands to all nodes; Chromium can provide a tile sort that sends the GL calls only to the appropriate nodes.]
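To make the tile-sort idea concrete, here's a minimal sketch of the geometric test involved. The 4x2 wall, the tile size, and the function are all made up for illustration; this is not Chromium's actual code:

    # Toy tile sort: given the screen-space bounding box of a batch of
    # geometry, find which tiles (render nodes) it overlaps, instead of
    # broadcasting to every node the way a plain DMX GLX proxy does.

    # Hypothetical 4x2 wall of 1280x1024 tiles, one render node per tile.
    TILE_W, TILE_H = 1280, 1024
    COLS, ROWS = 4, 2

    def overlapping_tiles(xmin, ymin, xmax, ymax):
        """Return the (col, row) of every tile the bounding box touches."""
        c0 = max(0, int(xmin) // TILE_W)
        c1 = min(COLS - 1, int(xmax) // TILE_W)
        r0 = max(0, int(ymin) // TILE_H)
        r1 = min(ROWS - 1, int(ymax) // TILE_H)
        return [(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)]

    # A batch near the upper-left corner only needs one node...
    print(overlapping_tiles(100, 100, 900, 700))     # [(0, 0)]
    # ...while one spanning the middle of the wall needs four, not all eight.
    print(overlapping_tiles(2000, 500, 3500, 1500))  # [(1, 0), (1, 1), (2, 0), (2, 1)]

Chromium's real tilesort SPU does far more than this (state tracking, bucketing, and so on), but this bounding-box-to-tiles test is the core of why it scales better than broadcasting.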
Assuming you can get all of the above running, there's still plenty of work. Just keeping eight projectors color-balanced will eat up a few hours of your week. If you want to do frame-locked stereo on your powerwall, things get even more complex (and expensive; NVIDIA 3000G/4400G cards aren't typically in the discount bin at Fry's).

Have fun; OpenGL stuff looks really cool on powerwalls... :-)
Re:Building clusters with linux is easy. (Score:2, Informative)
When VLSI hit the market, it became cheaper to have one ultra-powerful machine than a cluster of older IC-based hardware. You got more firepower for the money. That's not to say it wouldn't still pay to combine multiple Nth-generation machines, but a great deal of the cost advantage would be lost.
Clusters exist in their current diversity because they are simply the cheapest and most effective way to build powerful supercomputers. If you have a new technology that is orders of magnitude more powerful (which is how it usually goes) but also considerably more expensive, it becomes cheaper to build one powerful specimen (or a small number of them) than to build legions of the older technology. (Current processors are a case in point: they aren't that powerful compared to higher-end chips, but they're much, much cheaper.)
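To put rough, made-up numbers on that trade-off: if a next-generation machine were 10 times as fast as a commodity node but cost 50 times as much, the commodity cluster would still win by a factor of five on performance per dollar, and it keeps winning until the exotic technology's price comes down.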
You could always network a whole mess of next-generation processors, but while it's a new technology it will be obscenely expensive (cost aside, there's nothing stopping people from building arrays of supercomputer clusters right now).
Re:Mac OS X has similar benefits (Score:4, Informative)
Fan control was integrated into the kernel over a month ago, and is most definitely in the first version we released last week.
We have also developed a nice, pretty installer for the head node in a cluster, and wrote Y-Imager (a front end for Argonne's SystemImager) to automate the building of compute nodes in a cluster.
No offense taken
Re:You would think so (Score:2, Informative)