Technology

With Linux Clusters, Seeing Is Believing

Roland Piquepaille writes "As the latest Top500 list, released last month, reminded us, the most powerful computers are now reaching speeds of dozens of teraflops. When these machines run a nuclear simulation or a global climate model for days or weeks, they produce datasets of tens of terabytes. How do you visualize, analyze, and understand such massive amounts of data? The answer is now obvious: with Linux clusters. In this very long article, "From Seeing to Understanding," Science & Technology Review looks at the technologies used at Lawrence Livermore National Laboratory (LLNL), which will host IBM's BlueGene/L next year. Visualization will be handled by a 128- or 256-node Linux cluster, with each node containing two processors sharing one graphics card. Meanwhile, EVEREST, built by Oak Ridge National Laboratory (ORNL), has a 35-million-pixel screen driven by a 14-node dual-Opteron cluster sending images to 27 projectors. Now that Linux superclusters have almost swallowed the high-end scientific computing market, they're building momentum in high-end visualization as well. The article linked above runs nine pages when printed and contains tons of information; this overview focuses more on the hardware deployed at these two labs."
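(Simple arithmetic on the numbers above, assuming the pixels are spread evenly: 35 million pixels across 27 projectors works out to roughly 1.3 million pixels per projector, about one 1280x1024 image each.)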
This discussion has been archived. No new comments can be posted.


  • by daveschroeder ( 516195 ) * on Monday December 13, 2004 @12:43PM (#11073008)
    Virginia Tech's "System X" cluster cost a total of $6M for the asset alone (i.e., not including buildings, infrastructure, etc.), for performance of 12.25 Tflops.

    By contrast, NCSA's surprise entry in November 2003's list, Tungsten, achieved 9.82 Tflops for $12M asset cost.

    Double the cost, for performance that's lower by a Top 100 supercomputer's worth.

    And it wasn't because Virginia Tech had "free student labor": it doesn't take $6M in labor to assemble a cluster. Even if we give it an extremely, horrendously liberal $1M for systems integration and installation, System X is still ridiculously cheaper.

    I know there will be a dozen predictable responses to this, deriding System X, Virginia Tech, Apple, Mac OS X, linpack, Top 500, and coming up with one excuse after another. But won't anyone consider the possibility that these Mac OS X clusters are worth something?
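    (Taking the figures above at face value, the gap is even clearer per teraflop: System X works out to $6M / 12.25 Tflops, or about $0.49M per Tflop, while Tungsten works out to $12M / 9.82 Tflops, or about $1.22M per Tflop, roughly 2.5 times the price per unit of Linpack performance.)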
  • by Anonymous Coward on Monday December 13, 2004 @12:46PM (#11073033)
    Roland Piquepaille and Slashdot: Is there a connection?

    I think most of you are aware of the controversy surrounding regular Slashdot article submitter Roland Piquepaille. For those of you who don't know, please allow me to bring forth all the facts. Roland Piquepaille has an online journal (I refuse to use the word "blog") located at www.primidi.com [primidi.com]. It is titled "Roland Piquepaille's Technology Trends". It consists almost entirely of content, both text and pictures, taken from reputable news websites and online technical journals. He does give credit to the other websites, but it wasn't always so. Only after many complaints were raised by the Slashdot readership did he start giving credit where credit was due. However, this is not what the controversy is about.

    Roland Piquepaille's Technology Trends serves online advertisements through a service called Blogads, located at www.blogads.com. Blogads is not your traditional online advertiser; rather than base payments on click-throughs, Blogads pays a flat fee based on the level of traffic your online journal generates. This way Blogads can guarantee that an advertisement on a particular online journal will reach a particular number of users. So advertisements on high-traffic online journals are appropriately more expensive to buy, but the advertisement is guaranteed to be seen by a large number of people. This, in turn, encourages people like Roland Piquepaille to try their best to increase traffic to their journals in order to increase the going rates for advertisements on their web pages. But advertisers do have some flexibility. Blogads serves two classes of advertisements. The premium ad space that is seen at the top of the web page by all viewers is reserved for "Special Advertisers"; it holds only one advertisement. The secondary ad space is located near the bottom half of the page, so that the user must scroll down the window to see it. This space can contain up to four advertisements and is reserved for regular advertisers, or just "Advertisers". Visit Roland Piquepaille's Technology Trends (www.primidi.com [primidi.com]) to see it for yourself.

    Before we talk about money, let's talk about the service that Roland Piquepaille provides in his journal. He goes out and looks for interesting articles about new and emerging technologies. He provides a very brief overview of the articles, then copies a few choice paragraphs and the occasional picture from each article and puts them up on his web page. Finally, he adds a minimal amount of original content between the copied-and-pasted text in an effort to make the journal entry coherent and appear to add value to the original articles. Nothing more, nothing less.

    Now let's talk about money. Visit http://www.blogads.com/order_html?adstrip_category=tech&politics= [blogads.com] to check the following facts for yourself. As of today, December XX 2004, the going rate for the premium advertisement space on Roland Piquepaille's Technology Trends is $375 for one month. One of the four standard advertisements costs $150 for one month. So, the maximum advertising space brings in $375 x 1 + $150 x 4 = $975 for one month. Obviously not all $975 will go directly to Roland Piquepaille, as Blogads gets a portion of that as a service fee, but he will receive the majority of it. According to the FAQ [blogads.com], Blogads takes 20%. So Roland Piquepaille gets 80% of $975, a maximum of $780 each month. www.primidi.com is hosted by clara.net (look it up at http://www.networksolutions.com/en_US/whois/index.jhtml [networksolutions.com]). Browsing clara.net's hosting solutions, the most expensive hosting service is their Clarahost Advanced (http://ww [clara.net]
  • by roxtar ( 795844 ) on Monday December 13, 2004 @01:08PM (#11073233) Homepage Journal
    To reaffirm what the article said, building Linux clusters is very simple. In fact, certain distributions such as BCCD [uni.edu] and ClusterKnoppix [bofh.be] exist specifically for that. Although configuring clustering software such as PVM, MPI, LAM, or MOSIX wouldn't be a problem, I prefer something that has almost everything built into one package; that's why I like the above distros. In fact, I built a cluster (using BCCD) at home and used it to render images with POV-Ray [povray.org]. I used pvmpov [sourceforge.net] for the cluster-rendering part. Although there were only four machines, the speed difference was evident. And above all, building clusters is extremely cool and shows the paradigm shift toward parallel computing.
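    To make the work-splitting idea concrete, here is a minimal sketch in C with MPI. This is not how pvmpov actually operates (pvmpov uses PVM and divides a single image into blocks across nodes); it only illustrates the general round-robin pattern for spreading render jobs over cluster nodes, with TOTAL_FRAMES as a made-up parameter:

        /* Minimal illustrative sketch: round-robin frame distribution with MPI.
         * Hypothetical example; pvmpov itself uses PVM and splits single images
         * into blocks, but the division-of-labor idea is the same. */
        #include <mpi.h>
        #include <stdio.h>

        #define TOTAL_FRAMES 100  /* made-up workload size */

        int main(int argc, char **argv)
        {
            int rank, size, frame;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this node's id   */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total node count */

            /* Node k takes frames k, k+size, k+2*size, ... */
            for (frame = rank; frame < TOTAL_FRAMES; frame += size)
                printf("node %d of %d: rendering frame %d\n", rank, size, frame);

            MPI_Finalize();
            return 0;
        }

    Compiled with mpicc and launched with something like mpirun -np 4, each of four machines would claim every fourth frame, which is roughly the speedup pattern the poster describes.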
  • very long article... (Score:4, Interesting)

    by veg_all ( 22581 ) on Monday December 13, 2004 @01:08PM (#11073238)
    So now Monsieur Piquepaille has been shamed by scornful posters [tinyurl.com] into including a link to the actual article (instead of harvesting page views), but he'd still really, really like you to click through to his page....
  • by LithiumX ( 717017 ) on Monday December 13, 2004 @01:20PM (#11073344)
    I do think clusters are going to be a dominant architecture for the next few decades, but I also think the current ultra-heavy emphasis on clusters is as much a function of asymptotic limitations as of the natural evolution of the technology. It's currently cheaper to build a cluster out of a whole mess of weaker processors than it is to develop a single ubercore. Going by previous history, though, I doubt that situation will last more than a decade.

    Computers were initially monolithic machines that effectively had a single core. By the 70's, the processing on many mainframes had branched out so that a single mainframe was often a number of separate systems integrated into a whole (though nothing on the level we see today). By the 80's it seemed to swing back to monolithic designs (standalone PCs, ubercomputer Crays) and it wasn't until the 90's that dual and quad processing became commonplace (though the technology had existed before).

    Eventually, someone will hit on a revolutionary new technology (sort of like how transistors, ICs, and microprocessors were revolutionary) that renders current VLSI systems obsolete (optical? quantum?), and the cost/power ratio will shift dramatically, making it more economical to go back to singular (and more expensive) powerful cores rather than cheap (but weaker) distributed cores.
  • by chris mazuc ( 8017 ) on Monday December 13, 2004 @01:37PM (#11073502)
    30 accepted stories since August 29th, 2004! [slashdot.org] Wtf is going on here?
  • by daveschroeder ( 516195 ) * on Monday December 13, 2004 @01:38PM (#11073504)
    ...except get untold amounts of recognition, publicity, free advertising, and news coverage, and the ability to catapult themselves to the forefront of the supercomputing community overnight for a paltry sum of money. That attracts millions of dollars of additional funding and grants to build clusters that WILL be doing real work, such as the one we're talking about now (which is more than capable now that it has ECC memory), and the several additional clusters they plan to build in the future. Not to mention the benefit of proving that a new architecture, interconnect, and OS can perform well as a supercomputer, allowing more choice, competition, and innovation to enter the scene, which ultimately results in more and better choices for everyone.
  • by LordMyren ( 15499 ) on Monday December 13, 2004 @03:46PM (#11074860) Homepage
    PCI-E has symmetric bandwidth. Current-generation graphics cards will undoubtedly not be able to take advantage of this; they've spent so long getting data to the graphics card that that's all they're optimized for. But in the long run this has some crucial implications.

    Namely, it allows graphics cards to operate better in situations exactly like this: clustered applications. As it stands, the graphics card can crunch an enormous amount of data but is extremely poor at sending it back to the CPU and system. It's optimized for screen dumping only.

    Sony's Cell is going to be absolutely crucial as a tech demo for this foresighted technology. We're heading towards a more distributed computer architecture where various specialized units pipe data between each other.

    In summation:
    It's my hope that graphics cards will eventually catch up and perform better bi-directionally. After that, we've got to wait another five years for PCI-E implementations to catch up and perform better switching (vis-a-vis multiple fully-switched x16 buses). We are moving away from the CPU for high-performance computing; the CPU currently performs both control and data processing. Graphics cards are just the first wave of the distributed-architecture phenomenon, and Cell will be a light-year jump toward the future of computing in its intricate levels of hardware reconfigurability. There's a good PowerPoint on the patents behind Cell here [unc.edu].

    Ultimately this will lead toward the tearing down of the computer as a monolithic device, and a rethinking of what exactly the network's and OS's roles are. Cue exokernel and DragonFly BSD debates.
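    To put rough numbers on the readback asymmetry described above, here is a back-of-the-envelope sketch in C. The bandwidth figures are nominal peak rates taken as assumptions, not measurements (AGP 8x upstream transfers were commonly limited to somewhere around 266 MB/s, while first-generation PCIe x16 offers about 4 GB/s in each direction), and real cards often did worse:

        /* Back-of-the-envelope readback times for one full frame.
         * All bandwidth figures are assumed nominal peaks, not benchmarks. */
        #include <stdio.h>

        int main(void)
        {
            /* One 1600x1200 frame, 4 bytes per pixel (RGBA), in MB. */
            double frame_mb = 1600.0 * 1200.0 * 4.0 / (1024.0 * 1024.0);

            double agp_up_mb_s = 266.0;   /* AGP 8x upstream, rough figure    */
            double pcie_mb_s   = 4000.0;  /* PCIe x16, per direction, nominal */

            printf("frame size:        %.1f MB\n", frame_mb);
            printf("AGP 8x readback:   %.1f ms\n", frame_mb / agp_up_mb_s * 1000.0);
            printf("PCIe x16 readback: %.1f ms\n", frame_mb / pcie_mb_s * 1000.0);
            return 0;
        }

    Under those assumptions, pulling one frame back to the host drops from roughly 28 ms to under 2 ms, which is why a symmetric bus matters for cluster-style pipelines that consume GPU output on the CPU side.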

"God is a comedian playing to an audience too afraid to laugh." - Voltaire

Working...