With Linux Clusters, Seeing Is Believing
Roland Piquepaille writes "As the release of the latest Top500 list reminded us last month, the most powerful computers are now reaching speeds of dozens of teraflops. When these machines run a nuclear simulation or a global climate model for days or weeks, they produce datasets of tens of terabytes. How do you visualize, analyze, and understand such massive amounts of data? The answer is now obvious: with Linux clusters. In this very long article, "From Seeing to Understanding," Science & Technology Review looks at the technologies used at Lawrence Livermore National Laboratory (LLNL), which will host IBM's BlueGene/L next year. Visualization will be handled by a 128- or 256-node Linux cluster, with each node containing two processors sharing one graphics card. Meanwhile, EVEREST, built by Oak Ridge National Laboratory (ORNL), has a 35-million-pixel screen driven by a 14-node dual-Opteron cluster sending images to 27 projectors. Now that Linux superclusters have almost swallowed the high-end scientific computing market, they're building momentum in high-end visualization as well. The article linked above runs nine pages when printed and contains tons of information. This overview focuses more on the hardware deployed at these two labs."
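Those EVEREST numbers can be sanity-checked with a little arithmetic: the per-projector share of the 35-million-pixel wall comes out close to a single SXGA projector's native resolution. The 1280x1024 figure below is an assumption for illustration, not from the article:

```python
# Back-of-the-envelope pixel budget for a tiled display wall like EVEREST.
# The 1280x1024 projector resolution is an assumed, illustrative value.
total_pixels = 35_000_000   # wall resolution quoted in the article
projectors = 27
nodes = 14

pixels_per_projector = total_pixels / projectors
print(f"{pixels_per_projector:,.0f} pixels per projector")  # ~1.3 million

sxga = 1280 * 1024          # one SXGA projector: 1,310,720 pixels
print(f"SXGA native resolution: {sxga:,} pixels")

print(f"{projectors / nodes:.1f} projectors per render node")
```

So each of the 14 dual-Opteron nodes drives roughly two projectors' worth of pixels, which lines up with the two-processors-per-graphics-card arrangement described for LLNL.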
Mac OS X has similar benefits (Score:4, Interesting)
By contrast, NCSA's surprise entry in the November 2003 list, Tungsten, achieved 9.82 Tflops for a $12M asset cost.
Double the cost, for performance lower by a Top 100 supercomputer's worth.
And it wasn't because Virginia Tech had "free student labor": it doesn't take $6M in labor to assemble a cluster. Even if we give it an extremely, horrendously liberal $1M for systems integration and installation, System X is still ridiculously cheaper.
I know there will be a dozen predictable responses to this, deriding System X, Virginia Tech, Apple, Mac OS X, linpack, Top 500, and coming up with one excuse after another. But won't anyone consider the possibility that these Mac OS X clusters are worth something?
Roland Piquepaille and Slashdot (Score:5, Interesting)
I think most of you are aware of the controversy surrounding regular Slashdot article submitter Roland Piquepaille. For those of you who don't know, please allow me to bring forth all the facts. Roland Piquepaille has an online journal (I refuse to use the word "blog") located at www.primidi.com [primidi.com]. It is titled "Roland Piquepaille's Technology Trends". It consists almost entirely of content, both text and pictures, taken from reputable news websites and online technical journals. He does give credit to the other websites, but it wasn't always so. Only after many complaints were raised by the Slashdot readership did he start giving credit where credit was due. However, this is not what the controversy is about.
Roland Piquepaille's Technology Trends serves online advertisements through a service called Blogads, located at www.blogads.com. Blogads is not your traditional online advertiser; rather than base payments on click-throughs, Blogads pays a flat fee based on the level of traffic your online journal generates. This way Blogads can guarantee that an advertisement on a particular online journal will reach a particular number of users. So advertisements on high-traffic online journals are appropriately more expensive to buy, but the advertisement is guaranteed to be seen by a large number of people. This, in turn, encourages people like Roland Piquepaille to try their best to increase traffic to their journals in order to increase the going rates for advertisements on their web pages. But advertisers do have some flexibility. Blogads serves two classes of advertisements. The premium ad space that is seen at the top of the web page by all viewers is reserved for "Special Advertisers"; it holds only one advertisement. The secondary ad space is located near the bottom half of the page, so that the user must scroll down the window to see it. This space can contain up to four advertisements and is reserved for regular advertisers, or just "Advertisers". Visit Roland Piquepaille's Technology Trends (www.primidi.com [primidi.com]) to see it for yourself.
Before we talk about money, let's talk about the service that Roland Piquepaille provides in his journal. He goes out and looks for interesting articles about new and emerging technologies. He provides a very brief overview of the articles, then copies a few choice paragraphs and the occasional picture from each article and puts them up on his web page. Finally, he adds a minimal amount of original content between the copied-and-pasted text in an effort to make the journal entry coherent and appear to add value to the original articles. Nothing more, nothing less.
Now let's talk about money. Visit http://www.blogads.com/order_html?adstrip_category=tech&politics= [blogads.com] to check the following facts for yourself. As of today, December XX 2004, the going rate for the premium advertisement space on Roland Piquepaille's Technology Trends is $375 for one month. One of the four standard advertisements costs $150 for one month. So, the maximum advertising space brings in $375 x 1 + $150 x 4 = $975 for one month. Obviously not all $975 will go directly to Roland Piquepaille, as Blogads gets a portion of that as a service fee, but he will receive the majority of it. According to the FAQ [blogads.com], Blogads takes 20%. So Roland Piquepaille gets 80% of $975, a maximum of $780 each month. www.primidi.com is hosted by clara.net (look it up at http://www.networksolutions.com/en_US/whois/index.jhtml [networksolutions.com]). Browsing clara.net's hosting solutions, the most expensive hosting service is their Clarahost Advanced (http://ww [clara.net]
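The arithmetic above is simple enough to check yourself (the 20% cut is the figure quoted from the Blogads FAQ):

```python
# Sanity check on the Blogads revenue figures quoted above.
premium_slots, premium_rate = 1, 375     # one "Special Advertiser" slot, $/month
standard_slots, standard_rate = 4, 150   # four regular "Advertiser" slots, $/month
blogads_cut = 0.20                       # service fee per the Blogads FAQ

gross = premium_slots * premium_rate + standard_slots * standard_rate
net = gross * (1 - blogads_cut)
print(f"gross: ${gross}/month, net to the journal: ${net:.0f}/month")
# gross: $975/month, net to the journal: $780/month
```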
Building clusters with linux is easy. (Score:4, Interesting)
very long article... (Score:4, Interesting)
Re:Building clusters with linux is easy. (Score:4, Interesting)
Computers were initially monolithic machines that effectively had a single core. By the 70's, the processing on many mainframes had branched out so that a single mainframe was often a number of separate systems integrated into a whole (though nothing on the level we see today). By the 80's it seemed to swing back to monolithic designs (standalone PCs, ubercomputer Crays), and it wasn't until the 90's that dual and quad processing became commonplace (though the technology had existed before).
Eventually, someone will hit on a revolutionary new technology (sort of like how transistors, ICs, and microprocessors were revolutionary) that renders current VLSI systems obsolete (optical? quantum?), and the cost/power ratio will shift dramatically, making it more economical to go back to singular (and more expensive) powerful cores rather than cheap (but weaker) distributed cores.
30 accepted stories since August 29th, 2004! (Score:5, Interesting)
Yeah, VT really didn't do anything... (Score:4, Interesting)
Graphic Card Technology (Score:3, Interesting)
Namely, it allows graphics cards to operate better in situations exactly like this: clustered applications. As it stands, a graphics card can crunch an enormous amount of data, but it is extremely poor at sending that data back to the CPU and the rest of the system. It's optimized for screen dumping only.
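To get a rough sense of that asymmetry, here's an illustrative timing comparison for moving one frame each way. The bandwidth figures are ballpark, assumed values in the spirit of an AGP 8x era card (fast burst writes to the card, a much slower readback path), not measurements from the article:

```python
# Illustrative upload-vs-readback timing for a graphics card of this era.
# Both bandwidth figures are assumed, ballpark values for illustration only.
frame_bytes = 1600 * 1200 * 4   # one 32-bit RGBA frame, ~7.3 MB

upload_bw = 2_100e6             # ~2.1 GB/s, CPU -> GPU (assumed burst rate)
readback_bw = 250e6             # ~250 MB/s, GPU -> CPU (assumed readback rate)

upload_ms = frame_bytes / upload_bw * 1000
readback_ms = frame_bytes / readback_bw * 1000
print(f"upload: {upload_ms:.1f} ms, readback: {readback_ms:.1f} ms")
print(f"readback is ~{upload_bw / readback_bw:.1f}x slower")
```

With numbers anywhere in that neighborhood, pulling rendered results back for compositing across a cluster quickly dominates, which is exactly why the readback path matters for clustered visualization.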
Sony's Cell is going to be absolutely crucial as a tech demo for this foresighted technology. We're heading towards a more distributed computer architecture where various specialized units pipe data between each other.
In summation,
It's my hope that eventually graphics cards will catch up and perform better bi-directionally. After that, we've got to wait another five years for PCI-E implementations to catch up and perform better switching (vis-à-vis multiple fully-switched x16 buses). We are moving away from the CPU for high-performance computing; the CPU currently performs both control and data processing. Graphics cards are just the first wave of the distributed-architecture phenomenon, and Cell will be a light-year jump towards the future of computing in its intricate levels of hardware reconfigurability. There's a good PowerPoint on the patents behind Cell here [unc.edu].
Ultimately this will lead towards the tearing down of the computer as a monolithic device, and a rethinking of what exactly the roles of the network and the OS are. Cue the exokernel and DragonFly BSD debates.