With Linux Clusters, Seeing Is Believing

Roland Piquepaille writes "As the release of the latest Top500 list reminded us last month, the most powerful computers are now reaching speeds of dozens of teraflops. When these machines run a nuclear simulation or a global climate model for days or weeks, they produce datasets of tens of terabytes. How do you visualize, analyze and understand such massive amounts of data? The answer is now obvious: with Linux clusters. In a very long article, "From Seeing to Understanding," Science & Technology Review looks at the technologies used at Lawrence Livermore National Laboratory (LLNL), which will host IBM's BlueGene/L next year. Visualization will be handled by a 128- or 256-node Linux cluster, with each node containing two processors sharing one graphics card. Meanwhile, EVEREST, built by Oak Ridge National Laboratory (ORNL), has a 35-million-pixel screen driven by a 14-node dual-Opteron cluster sending images to 27 projectors. Now that Linux superclusters have almost swallowed the high-end scientific computing market, they're building momentum in high-end visualization as well. The article linked above runs nine pages when printed and contains tons of information; this overview focuses more on the hardware deployed at these two labs."
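For a sense of scale, here is a quick back-of-the-envelope check of the EVEREST figures in Python; the 1280x1024 per-projector resolution is an assumption, not something stated in the article, but it makes the quoted numbers roughly consistent.

```python
# Back-of-the-envelope check of the EVEREST figures quoted above.
# The SXGA (1280x1024) per-projector resolution is an assumption used only
# to see whether the numbers hang together.
total_pixels = 35_000_000      # "35 million pixel" display wall
projectors = 27
nodes = 14                     # dual-Opteron visualization cluster

print(f"{total_pixels / projectors:,.0f} pixels per projector")   # ~1.3 million
print(f"{1280 * 1024:,} pixels in one SXGA projector")            # 1,310,720
print(f"~{projectors / nodes:.1f} projectors driven per node")    # ~1.9
```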
  • by Spy Hunter ( 317220 ) on Monday December 13, 2004 @01:15PM (#11073293) Journal
    I think you missed something here in your rush to defend Apple. The article is not about building high-teraflop supercomputers; it is about using small-to-medium sized clusters of commodity hardware to run high-end visualization systems (with Linux's help of course). Since they specifically want top-of-the-line graphics cards in these machines, Macs would not be the best choice. PCs have PCI express now (important for nontraditional uses of programmable graphics cards, as these guys are probably doing) and the latest from ATI/NVidia is always out first on PCs, cheaper.
  • by RealAlaskan ( 576404 ) on Monday December 13, 2004 @01:19PM (#11073334) Homepage Journal
    Virginia Tech's "System X" cluster cost a total of $6M for the asset alone (i.e., not including buildings, infrastructure, etc.), for performance of 12.25 Tflops.

    By contrast, NCSA's surprise entry in November 2003's list, Tungsten, achieved 9.82 Tflops for $12M asset cost.

    When I looked here [uiuc.edu], I found this: ``Tungsten entered production mode in November 2003 and has a peak performance of 15.36 teraflops (15.36 trillion calculations per second).''

    To me, that looks faster than System X, not slower.
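    Either way, a rough cost-per-teraflop calculation shows how much the comparison depends on which figure you pick. Treating 12.25 and 9.82 Tflops as sustained (Linpack) numbers and 15.36 Tflops as Tungsten's theoretical peak is my reading of the sources, not something either one labels explicitly:

    ```python
    # Rough cost-per-teraflop comparison from the figures quoted above.  Which
    # numbers are sustained (Linpack) and which are theoretical peak is an
    # assumption on my part; the sources don't label them consistently.
    def cost_per_tflop(cost_million_usd, tflops):
        return cost_million_usd * 1e6 / tflops

    print(f"System X, sustained: ${cost_per_tflop(6.0, 12.25):,.0f}/Tflop")   # ~$490,000
    print(f"Tungsten, sustained: ${cost_per_tflop(12.0, 9.82):,.0f}/Tflop")   # ~$1,220,000
    print(f"Tungsten, peak:      ${cost_per_tflop(12.0, 15.36):,.0f}/Tflop")  # ~$781,000
    ```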

    Let's see: NCSA stands for ``National Center for Supercomputing Applications''. ``NCSA [uiuc.edu] is a key partner in the National Science Foundation's TeraGrid project, a $100-million effort to offer researchers remote access ...''

    Looks as if the NCSA has a huge budget. I'd guess that ``gold-plated everything'' and ``leave no dollars unspent'' are basic specs for everything they buy.

    What can we learn about Virginia Tech? How about this [vt.edu]:

    System X was conceived in February 2003 by a team of Virginia Tech faculty and administrators and represents what can happen when the academic and IT organizations collaborate.

    Working closely with vendor partners, the Terascale Core Team went from drawing board to reality in little more than 90 days! Building renovations, custom racks, and a lot of volunteer labor had to be organized and managed in a very tight timeline.

    In addition to the volunteer labor, I'd guess that Virginia Tech had very different design goals, in which price was a factor. NCSA's bureaucracy probably accounted for a lot of those extra $6M they spent. Different designs and goals probably had a lot to do with the rest of the price, but I suspect that a bureaucratic procurement process was the main cause for the higher price of the Xeon system.

    Yes, System X and the Apple hardware are pretty neat, but don't use the price/performance ratio of these two systems as a metric for the relative worth of Linux and OSX clusters.

    It's unfair and meaningless to compare volunteer labor and academic pricing and scrounging on a limited budget to bureaucratic design, bureaucratic procurement and an unlimited budget.

  • by maxwell demon ( 590494 ) on Monday December 13, 2004 @01:30PM (#11073438) Journal
    Roland Piquepaille's Technology Trends serves online advertisements through a service called Blogads, located at www.blogads.com. [...] Blogads pays a flat fee based on the level of traffic your online journal generates. [...] Visit Roland Piquepaille's Technology Trends (www.primidi.com) to see it for yourself.

    Are you actually Roland Piquepaille? If so, that's a really neat trick to move traffic to that site. If not, then he may be thankful for your comment, after all :-)
  • You would think so (Score:3, Informative)

    by jellomizer ( 103300 ) * on Monday December 13, 2004 @01:35PM (#11073479)
    Unless you change the settings so it compiles multiple applications at the same time, installing Stage 1 of Gentoo won't be much faster than on a 2- or maybe 4-CPU system. These supercomputers and clusters use a concept called parallel processing, in which a task is broken up and the pieces are handled by many processors at once. Most applications are not designed to run in parallel, so unless your build is set up for parallel processing, the OS will hand the compile job to a single processor to work through. You may get a slight speed advantage because OS housekeeping is handled by another processor, but you are not guaranteed 2x performance with 2 processors, especially since most make scripts compile one program and only start the next one when that is done. Some algorithms do very well (orders of magnitude less time) with parallel processing, and other algorithms simply cannot be parallelized. Having two 1GHz processors is not the same as having one 2GHz processor: the two 1GHz chips will probably handle load much better, but the single 2GHz processor will probably run your game better.
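    As a toy illustration of the point (this is not how make or Portage actually schedules jobs), independent pieces of work can be farmed out to several workers, while a chain of dependent steps gains nothing from extra CPUs:

    ```python
    # Toy illustration of parallel vs. inherently serial work.  It just shows
    # why extra CPUs only help when the tasks are independent of each other.
    import time
    from multiprocessing import Pool

    def compile_unit(name):
        time.sleep(0.2)          # stand-in for compiling one independent source file
        return f"{name}.o"

    def serial_chain(steps):
        # Each step needs the previous step's output, so it cannot be parallelized.
        result = "start"
        for step in range(steps):
            time.sleep(0.2)
            result = f"{result}->{step}"
        return result

    if __name__ == "__main__":
        units = [f"file{i}" for i in range(8)]

        t0 = time.time()
        with Pool(processes=4) as pool:      # 4 workers: ~4x faster for independent units
            objects = pool.map(compile_unit, units)
        print(f"parallelizable work: {time.time() - t0:.2f}s for {len(objects)} units")

        t0 = time.time()
        serial_chain(8)                      # dependent steps: no speedup from more CPUs
        print(f"serial chain:        {time.time() - t0:.2f}s")
    ```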
  • Re:Big Screen! (Score:3, Informative)

    by dsouth ( 241949 ) on Monday December 13, 2004 @01:39PM (#11073521) Homepage
    A 35 million pixel screen would rock for Half-Life 2. Where can I get me one? Looking at the picture, it's kind of like 3 monitors stuck together, so maybe I'll save some money and only get 1/3rd of the setup. How much can that cost? I mean, really.
    I know you're joking, but since I'm the hardware architect for the LLNL viz effort, I'll bite anyway. :-)

    Here's what you'll need at minimum:

    • A lot of display devices (monitors, projectors, whatever)
    • Sufficient video cards to drive the above (newer cards can drive two devices each, given the appropriate X configs and the like).
    • A sufficient number of nodes to run the cards.
    • The fastest interconnect you can afford.
    Once you've assembled the above, you connect everything up, install your favorite Linux or BSD distro on each node, then install DMX [sourceforge.net]. DMX works as an X11 proxy. It dispatches the X calls to other X11 servers on the appropriate nodes, giving the illusion that they are all one big X11 server. It also proxies for glX, so openGL stuff should run correctly.
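    As a rough sketch of what launching the DMX front end over a few render nodes might look like (the hostnames and display numbers below are placeholders; check the Xdmx man page for the options your setup actually needs):

    ```python
    # Minimal sketch of launching a DMX front-end server that aggregates the
    # X servers on several render nodes into one logical display.  Hostnames
    # and display numbers are placeholders, not a real configuration.
    import subprocess

    render_nodes = ["node01", "node02", "node03", "node04"]   # hypothetical back-end hosts

    cmd = ["Xdmx", ":1", "+xinerama"]        # :1 = the new logical display
    for host in render_nodes:
        cmd += ["-display", f"{host}:0"]     # one back-end X server per node

    print("Would run:", " ".join(cmd))
    # subprocess.run(cmd)                    # uncomment on a real cluster
    ```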

    If you've built a large setup (where "large" means "more than eight screens"), the openGL performance will suffer. In that case you can also install Chromium [sourceforge.net] which can work with DMX to provide a more efficient path for the openGL commands. [The DMX glx proxy broadcasts the gl commands to all nodes, Chromium can provide a tile sort that only sends the gl calls to the appropriate nodes.]
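    The tile-sort idea boils down to bucketing each primitive by which screen tiles its bounding box touches, so its GL commands only go to the nodes that own those tiles. A toy version of that bookkeeping (nothing like Chromium's actual internals) might look like this:

    ```python
    # Toy version of "tile sort" bookkeeping: find which screen tiles a
    # primitive's bounding box overlaps, so its GL commands need only be sent
    # to the nodes owning those tiles.  Tile sizes and wall layout are assumed.
    TILE_W, TILE_H = 1280, 1024        # one tile per projector/node (assumed size)
    TILES_X, TILES_Y = 3, 3            # a 3x3 wall in this example

    def tiles_for_bbox(xmin, ymin, xmax, ymax):
        """Return the (col, row) tiles overlapped by an axis-aligned bounding box."""
        c0 = max(0, int(xmin // TILE_W))
        c1 = min(TILES_X - 1, int(xmax // TILE_W))
        r0 = max(0, int(ymin // TILE_H))
        r1 = min(TILES_Y - 1, int(ymax // TILE_H))
        return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

    # A small triangle near the top-left only touches one tile...
    print(tiles_for_bbox(100, 100, 400, 300))        # [(0, 0)]
    # ...while an object spanning the wall has to be sent to many more nodes.
    print(tiles_for_bbox(500, 500, 3000, 2500))      # all nine tiles
    ```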

    Assuming you can get all the above running, there's still plenty of work. Just keeping eight projectors color balanced will eat up a few hours of your week. If you want to do frame-locked stereo on your power wall, things get even more complex (and expensive -- nvidia 3000G/4400G cards aren't typically in the discount bin at Fry's).

    Have fun, openGL stuff looks really cool on powerwalls... :-)

  • by LithiumX ( 717017 ) on Monday December 13, 2004 @01:41PM (#11073538)
    It all depends on what form an advance takes.

    When VLSI hit the market, it became cheaper to have one ultrapowerful machine, compared to having a cluster of older IC-based hardware. You got more firepower for the money. That's not to say it wouldn't still pay to combine multiple Nth Generation machines, but a great deal of the cost advantage would be lost.

    Clusters exist in their current diversity because they are simply the cheapest and most effective way to create powerful supercomputers. If a new technology comes along that is orders of magnitude more powerful (which is how it usually goes) but also considerably more expensive, it becomes cheaper to build a single powerful specimen (or a small number of them) than to build legions of older technology (like current processors: they aren't that powerful compared to higher-end chips, but they're much, much cheaper).

    You could always network a whole mess of next-generation processors, but while the technology is new it will be obscenely expensive (cost aside, there's nothing to stop people from creating arrays of supercomputer clusters right now).
  • by zapp ( 201236 ) on Monday December 13, 2004 @02:05PM (#11073764)
    YDL is not intended to run on a G5 cluster unless you have Y-HPC; YDL on its own is only 32-bit.

    Fan control was integrated into the kernel over a month ago, and is most definitely in the first version we released last week.

    We have also developed a nice, pretty installer for the head node in a cluster, and wrote Y-Imager (a front end for Argonne's System Imager) to automate the building of compute nodes in a cluster.

    No offense taken :)
  • by roxtar ( 795844 ) on Monday December 13, 2004 @02:16PM (#11073874) Homepage Journal
    I assume you haven't heard of this: distcc [samba.org]. It does improve compile speed. Gentoo can be compiled using distcc, and there is even a how-to on the Gentoo documentation page.
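    A minimal sketch of what a distcc-assisted build might look like from the head node (the hostnames are placeholders, and on Gentoo you would normally wire this up through make.conf and the how-to rather than a script):

    ```python
    # Minimal sketch of a distcc-assisted build.  Hostnames are placeholders;
    # on Gentoo this would normally be configured in make.conf per the distcc
    # how-to rather than driven from a script like this.
    import os
    import subprocess

    os.environ["DISTCC_HOSTS"] = "localhost node01 node02 node03"   # hypothetical helpers
    jobs = 2 * len(os.environ["DISTCC_HOSTS"].split())              # a common rule of thumb

    subprocess.run(["make", f"-j{jobs}", "CC=distcc gcc", "CXX=distcc g++"], check=True)
    ```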
