With Linux Clusters, Seeing Is Believing

Roland Piquepaille writes "As the release of the latest Top500 list reminded us last month, the most powerful computers are now reaching speeds of dozens of teraflops. When these machines run a nuclear simulation or a global climate model for days or weeks, they produce datasets of tens of terabytes. How to visualize, analyze and understand such massive amounts of data? The answer is now obvious: using Linux clusters. In this very long article, "From Seeing to Understanding," Science & Technology Review looks at the technologies used at Lawrence Livermore National Laboratory (LLNL), which will host IBM's BlueGene/L next year. Visualization will be handled by a 128- or 256-node Linux cluster, with each node containing two processors that share one graphics card. Meanwhile, EVEREST, built by Oak Ridge National Laboratory (ORNL), has a 35-million-pixel screen driven by a 14-node dual-Opteron cluster sending images to 27 projectors. Now that Linux superclusters have almost swallowed the high-end scientific computing market, they're building momentum in high-end visualization as well. The article linked above is 9 pages long when printed and contains tons of information. This overview focuses more on the hardware deployed at these two labs."
  • by Vvornth ( 828734 ) on Monday December 13, 2004 @12:43PM (#11073002) Homepage
    This is how we nerds measure our penises. ;)
  • by daveschroeder ( 516195 ) * on Monday December 13, 2004 @12:43PM (#11073008)
    Virginia Tech's "System X" cluster cost a total of $6M for the asset alone (i.e., not including buildings, infrastructure, etc.), for performance of 12.25 Tflops.

    By contrast, NCSA's surprise entry in November 2003's list, Tungsten, achieved 9.82 Tflops for $12M asset cost.

    Double the cost, for performance that is lower by roughly a Top-100 supercomputer's worth.

    And it wasn't because Virginia Tech had "free student labor": it doesn't take $6M in labor to assemble a cluster. Even if we give it an extremely, horrendously liberal $1M for systems integration and installation, System X is still ridiculously cheaper.

    I know there will be a dozen predictable responses to this, deriding System X, Virginia Tech, Apple, Mac OS X, linpack, Top 500, and coming up with one excuse after another. But won't anyone consider the possibility that these Mac OS X clusters are worth something?
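
    (Run the numbers: $6M / 12.25 Tflops is about $0.49M per Tflop for System X, versus $12M / 9.82 Tflops, or about $1.22M per Tflop, for Tungsten - roughly a 2.5x difference in price/performance.)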
    • by Anonymous Coward
      I know this is a stupid question, but what exactly is a Teraflop? The first thing that comes to mind is someone doing a belly flop and hitting concrete...
    • by Spy Hunter ( 317220 ) on Monday December 13, 2004 @01:15PM (#11073293) Journal
      I think you missed something here in your rush to defend Apple. The article is not about building high-teraflop supercomputers; it is about using small-to-medium-sized clusters of commodity hardware to run high-end visualization systems (with Linux's help, of course). Since they specifically want top-of-the-line graphics cards in these machines, Macs would not be the best choice. PCs have PCI Express now (important for nontraditional uses of programmable graphics cards, as these guys are probably doing), and the latest from ATI/NVidia is always out first on PCs, and cheaper.
    • by zapp ( 201236 ) on Monday December 13, 2004 @01:15PM (#11073296)
      G5 nodes do have excellent performance, but don't assume OSX is all they can run.

      We at Terra Soft have just released Y-HPC, our version of Yellow Dog Linux, with a full 64-bit development environment, and a bunch of cluster tools built in.

      I'm not much of a marketing drone, but since I'm part of the Y-HPC team, I had to put in a shameless plug. Bottom line is, it kicks OSX's ass any way you look at it.

      Y-HPC [terrasoftsolutions.com]
      • Truly no offense intended, but...

        I've tried installing YDL on a small G5 cluster. It was a PITA to get running (3 installs before I was able to get the X server running right). And still I can't find any fan control. After 5 minutes the fans spool up to "ludicrous speed" and stick there.

        I really want to like YDL. I've been talking to the folks who do OSCAR about trying to get OSCAR to support YDL. But I'm not sure how it will work out yet, at least until I can figure out how to turn down the fans!

        • by zapp ( 201236 ) on Monday December 13, 2004 @02:05PM (#11073764)
          YDL is not intended to run on a G5 cluster unless you have Y-HPC; YDL on its own is only 32-bit.

          Fan control was integrated into the kernel over a month ago, and is most definitely in the first version we released last week.

          We have also developed a nice pretty installer for the head node in a cluster, and wrote Y-Imager (a front end for Argonne's System Imager) to automate the building of compute nodes in a cluster.

          No offense taken :)
    • by RazzleFrog ( 537054 ) on Monday December 13, 2004 @01:19PM (#11073333)
      Besides the fact that you are (please forgive me) comparing apples and oranges, your sample size is way too small to use as conclusive evidence. Until we start seeing Xserve clusters in a few more places, we can't be sure of the cost benefit.
    • by RealAlaskan ( 576404 ) on Monday December 13, 2004 @01:19PM (#11073334) Homepage Journal
      Virginia Tech's "System X" cluster cost a total of $6M for the asset alone (i.e., not including buildings, infrastructure, etc.), for performance of 12.25 Tflops.

      By contrast, NCSA's surprise entry in November 2003's list, Tungsten, achieved 9.82 Tflops for $12M asset cost.

      When I looked here [uiuc.edu], I found this: ``Tungsten entered production mode in November 2003 and has a peak performance of 15.36 teraflops (15.36 trillion calculations per second).''

      To me, that looks faster than System X, not slower.

      Let's see: NCSA stands for ``National Center for Supercomputing Applications''. ``NCSA [uiuc.edu] is a key partner in the National Science Foundation's TeraGrid project, a $100-million effort to offer researchers remote access ...''

      Looks as if the NCSA has a huge budget. I'd guess that ``gold-plated everything'' and ``leave no dollars unspent'' are basic specs for everything they buy.

      What can we learn about Virginia Tech? How about this [vt.edu]:

      System X was conceived in February 2003 by a team of Virginia Tech faculty and administrators and represents what can happen when the academic and IT organizations collaborate.

      Working closely with vendor partners, the Terascale Core Team went from drawing board to reality in little more than 90 days! Building renovations, custom racks, and a lot of volunteer labor had to be organized and managed in a very tight timeline.

      In addition to the volunteer labor, I'd guess that Virginia Tech had very different design goals, in which price was a factor. NCSA's bureaucracy probably accounted for a lot of those extra $6M they spent. Different designs and goals probably had a lot to do with the rest of the price, but I suspect that a bureaucratic procurement process was the main cause for the higher price of the Xeon system.

      Yes, System X and the Apple hardware are pretty neat, but don't use the price/performance ratio of these two systems as a metric for the relative worth of Linux and OSX clusters.

      It's unfair and meaningless to compare volunteer labor and academic pricing and scrounging on a limited budget to bureaucratic design, bureaucratic procurement and an unlimited budget.

      • Rpeak, not Rmax (Score:5, Insightful)

        by daveschroeder ( 516195 ) * on Monday December 13, 2004 @01:29PM (#11073427)
        Look here [top500.org].

        The speed you quoted is the theoretical peak, not the actual maximum achieved in a real world calculation (like the Top 500 organization's use of Linpack).

        System X's equivalent theoretical peak is 20.24 TFlops.

        I'm also not indicting Linux clusters in the least; they've clearly shown they can outperform traditionally architected and constructed supercomputers for many tasks, with the benefit of using commodity parts - at commodity pricing. All I'm saying is that there's a new player here, and it's a real contender, and has done a lot for very little money...which was the whole goal of Linux clusters in this realm in the first place.

        (Also, as I said, the volunteer labor model is irrelevant - let's just pretend it was professionally installed for an additional $1M, or even $2M if that would satisfy you. It's still several million dollars cheaper, with nearly 2.5 Tflops greater performance. These are BOTH rackmount clusters with similar numbers of nodes and processors, running a commodity OS with fast interconnects. There are differences, yes, and perhaps even differences in goals. But looking past that, price/performance for something like this is still an important metric.)
        • I wasn't trying to pick on OSX, either. I'm sure it's eminently suited to the purpose. I just don't think that the cost and performance differences come from the OS and hardware choice in this case.

          My point is that volunteer labor is only the beginning of the price difference between the two systems. The big, federally-funded bureaucracy and the departmentally-funded state university project have very different ways of doing things, and I'm only surprised that the cost and performance difference wasn't

      • I'd guess that Virginia Tech had very different design goals

        I also seem to recall that Apple gave VT an *exceptionally* good deal on the hardware -- basically at cost. Any money Apple loses on the deal is a tax writeoff as either an advertising expense or charitable contribution. If you built an identical cluster and had to pay full retail for the boxes, I guarantee you'll spend a WHOLE lot more than VT did.

        The NCSA, on the other hand, is a federal agency and therefore any commodity boxes they buy a

    • You have to take the costs with a grain of salt. They built the original machine for $5.2M. They then upgraded all the nodes from PowerMac G5s to Xserve G5s for $600K. Even if you assume that the $5.2M was a fair price for their original system, the upgrade price was an absolute gift from Apple. The cost per node to upgrade was about $550. Since they moved from non-ECC RAM to ECC RAM (4GB/node), the memory upgrade alone should have cost more than that.

      Vendors will often give away hardware in order to
      • The only special thing they did for VT was *take back* the original G5 towers, and provide 2.3GHz G5 Xserves before they were otherwise available. The $600K upgrade did not reflect any significant discount or gift. A similar cluster could be built by anyone, now, for around that same total price of $6M.
      • You have to take the costs with a grain of salt.

        Don't forget that Mellanox *donated* 24 mts9600 infiniband switches. At $58,000 a piece, you've got $1.4 million worth of equipment for free.
    • Virginia Tech used G5 Tower units. I wonder how much difference there would be in power, heat and space had they used Xserve 1Us? Like what Apple is installing for the Army. (http://www.apple.com/science/profiles/colsa/)
    • by hackstraw ( 262471 ) * on Monday December 13, 2004 @02:51PM (#11074275)
      I know there will be a dozen predictable responses to this, deriding System X, Virginia Tech, Apple, Mac OS X, linpack, Top 500, and coming up with one excuse after another. But won't anyone consider the possibility that these Mac OS X clusters are worth something?

      You're right!

      1st, System X, or the "Big Mac", was thrown together so that people like us would talk about it and to get a good standing on the November 2003 Top 500 list. They did an excellent job at this.

      Now for some reality. The system is not yet operational.

      When it was first thrown together, everyone "in the know", myself included, questioned how this was going to work without a reliable memory subsystem. The VT people responded that they were going to write software to correct any hardware errors, and we said OK, whatever. Then they said, hmm, we kinda need a reliable memory subsystem, so let's rip out all 1,100+ machines and start over with these new Xserve boxes that have ECC memory in them.

      This system has not come up yet with the new Xserves, according to their website [vt.edu].

      Now I'm going to make a comment on Linpack. Linpack, like all benchmarks, is really good at measuring its own performance. It is a good benchmark, but it is also one that does not require much RAM per node to run. Some applications do need a good amount of RAM per node, and since RAM costs $$, the cost adds up very quickly and the cost per teraflop goes up accordingly.
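
      (For scale - and this is from HPL's published behavior, not from the article: HPL factors a dense NxN matrix of doubles, so it needs roughly 8*N^2 bytes of memory in total while performing about (2/3)*N^3 floating-point operations. Compute grows much faster than memory, which is why you can post a big Linpack number without much RAM per node.)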

      As for the comparison between System X and NCSA's Tungsten cluster: personally, I don't know why Tungsten cost more, because the Mac cluster has more RAM per node and each node should have been cheaper in general. The NCSA cluster uses Myrinet, which I know is expensive, but I don't know how it compares to the Infiniband equipment on the Macs. Supposedly, the Infiniband interconnects were what got System X onto the top500 list with such good results, or at least that is what the head of the project told me.

      It's popular here on Slashdot, because many of the readers are younger and inexperienced (and have no money), to praise anything that costs less, with extra brownie points going to an underdog like AMD or Linux. In the real world, however, people will actually pay extra for something to ensure that it works. Working equipment may seem superfluous to the dorm-room Linux guru, but trust me, I know what it's like to work with equipment that cost about $1 mil and doesn't work. We could have gone with the 2nd bidder at $1.2 mil and it would have worked. Yes, we "saved" $200,000, but we also wasted well over $500,000 when you consider that over 50% of the equipment is faulty and many people's time has been wasted.
  • by Anonymous Coward
    How to visualize, analyze and understand such massive amounts of data?

    How to write complete sentences?
  • by BuddieFox ( 771947 ) on Monday December 13, 2004 @12:44PM (#11073010)
    The article linked above is 9 pages long when printed and contains tons of information.

    I hope the poster doesn't actually expect any of us to post any meaningful comments based on having read that article; it's a lost cause... at least on me.
  • by HarveyBirdman ( 627248 ) on Monday December 13, 2004 @12:45PM (#11073025) Journal
    The article linked above is 9 pages long when printed and contains tons of information.

    Damn! What kind of paper stock are you printing on?

  • by Anonymous Coward on Monday December 13, 2004 @12:46PM (#11073033)
    Roland Piquepaille and Slashdot: Is there a connection?

    I think most of you are aware of the controversy surrounding regular Slashdot article submitter Roland Piquepaille. For those of you who don't know, please allow me to bring forth all the facts. Roland Piquepaille has an online journal (I refuse to use the word "blog") located at www.primidi.com [primidi.com]. It is titled "Roland Piquepaille's Technology Trends". It consists almost entirely of content, both text and pictures, taken from reputable news websites and online technical journals. He does give credit to the other websites, but it wasn't always so. Only after many complaints were raised by the Slashdot readership did he start giving credit where credit was due. However, this is not what the controversy is about.

    Roland Piquepaille's Technology Trends serves online advertisements through a service called Blogads, located at www.blogads.com. Blogads is not your traditional online advertiser; rather than base payments on click-throughs, Blogads pays a flat fee based on the level of traffic your online journal generates. This way Blogads can guarantee that an advertisement on a particular online journal will reach a particular number of users. So advertisements on high traffic online journals are appropriately more expensive to buy, but the advertisement is guaranteed to be seen by a large number of people. This, in turn, encourages people like Roland Piquepaille to try their best to increase traffic to their journals in order to increase the going rates for advertisements on their web pages. But advertisers do have some flexibility. Blogads serves two classes of advertisements. The premium ad space that is seen at the top of the web page by all viewers is reserved for "Special Advertisers"; it holds only one advertisement. The secondary ad space is located near the bottom half of the page, so that the user must scroll down the window to see it. This space can contain up to four advertisements and is reserved for regular advertisers, or just "Advertisers". Visit Roland Piquepaille's Technology Trends (www.primidi.com [primidi.com]) to see it for yourself.

    Before we talk about money, let's talk about the service that Roland Piquepaille provides in his journal. He goes out and looks for interesting articles about new and emerging technologies. He provides a very brief overview of the articles, then copies a few choice paragraphs and the occasional picture from each article and puts them up on his web page. Finally, he adds a minimal amount of original content between the copied-and-pasted text in an effort to make the journal entry coherent and appear to add value to the original articles. Nothing more, nothing less.

    Now let's talk about money. Visit http://www.blogads.com/order_html?adstrip_category=tech&politics= [blogads.com] to check the following facts for yourself. As of today, December XX 2004, the going rate for the premium advertisement space on Roland Piquepaille's Technology Trends is $375 for one month. One of the four standard advertisements costs $150 for one month. So, the maximum advertising space brings in $375 x 1 + $150 x 4 = $975 for one month. Obviously not all $975 will go directly to Roland Piquepaille, as Blogads gets a portion of that as a service fee, but he will receive the majority of it. According to the FAQ [blogads.com], Blogads takes 20%. So Roland Piquepaille gets 80% of $975, a maximum of $780 each month. www.primidi.com is hosted by clara.net (look it up at http://www.networksolutions.com/en_US/whois/index.jhtml [networksolutions.com]). Browsing clara.net's hosting solutions, the most expensive hosting service is their Clarahost Advanced (http://ww [clara.net]
    • Finally, he adds a minimal amount of original content between the copied-and-pasted text in an effort to make the journal entry coherent and appear to add value to the original articles.

      Oh, please, you give Roland WAY too much credit. He doesn't add any original content. He just copies and pastes.

    • You, my friend, must be exceptionally bored. Either that, or this Roland guy must have shunned your romantic advances sometime recently. Can't you just stalk in silence like everybody else?
    • Roland Piquepaille's Technology Trends serves online advertisements through a service called Blogads, located at www.blogads.com. [...] Blogads pays a flat fee based on the level of traffic your online journal generates. [...] Visit Roland Piquepaille's Technology Trends (www.primidi.com) to see it for yourself.

      Are you actually Roland Piquepaille? If so, that's a really neat trick to move traffic to that site. If not, then he may be thankful for your comment, after all :-)

    • by chris mazuc ( 8017 ) on Monday December 13, 2004 @01:37PM (#11073502)
      30 accepted stories since August 29th, 2004! [slashdot.org] Wtf is going on here?
    • Given that this phenomenon is widely known (and Roland isn't the only offender), why do the Slashdot editors keep accepting stories from this guy? Kickbacks?

    • by ameoba ( 173803 ) on Monday December 13, 2004 @03:27PM (#11074651)
      There's something fundamentally flawed about any business venture in which you rely on Slashdot readers to actually try reading the article...
    • I simply skip over the links to his blog. It's easy to tell if a post is his (mostly based on the last sentence ... "This summary contains more details ... blah"). And as others have noted, the legality of what he is doing is very questionable. If you don't like it, report his activity to the original author. I do appreciate some of the articles he has brought to light on Slashdot (miniature turbines, etc.). But I think the way he tries to pass off the crap on his blog as original content is ridiculous
      • So you feel that Roland should serve you and never gain any benefit from it? Interesting. So you're not at all like an altruist, who believes that he should serve others. You're really a flagrant egotist, who feels that others should serve him. I wonder if you think that you define deontic truth by fiat, or created the universe, as well.
        • Actually yes, Roland shouldn't benefit from it, just like the 1000s of other people who submit stories without any intent to profit, who submit just to share the knowledge with the other interested nerds in the world. Slashdot is a community here to share knowledge, not to have some moron come along and claim someone else's research and hard work as his own.

          Whenever I submit a story, I am serving Roland and not asking for anything in return, as is every other poster; why should he be special?

          I think I hav
  • Wow! (Score:3, Funny)

    by Anonymous Coward on Monday December 13, 2004 @12:46PM (#11073034)
    Supercomputers have become so advanced we need more supercomputers just to understand them.
    • Obligatory (Score:3, Funny)

      by Epistax ( 544591 )
      42
    • At the risk of being called off topic.

      When HARLIE Was One [amazon.com] was the first book that I recall about a computer that designed another, more complex computer that only it could understand.

      Maybe HARLIE was a Linux cluster.

    • There are a bunch of different viz techniques listed here: http://www.tauceti.org/research.html#v [tauceti.org].
    • Wait until supercomputers become so complex that we need supercomputers to design the supercomputers which we need to understand the output of the supercomputer. Problem is, to understand the supercomputer-designing supercomputer's output we need a supercomputer to be designed by a supercomputer ... ok, there's a way out: Let the supercomputer build the supercomputer it designed.
      Ok, now we just need another supercomputer to test the supercomputer the supercomputer built us to interpret the output of the supe
      • Heh. If you've got some time on your hands, read Realtime [kithrup.com] by Daniel Keys Moran (link is to the actual short story, not a "buy this book" page). It may make you swear off imagining Beowulf clusters, though.
    • Does it strike you as odd that these people are putting millions of dollars into the most advanced visualization system currently known to man, and the best they can come up with is essentially a three-monitor spread? I realize that they are using a bunch of projectors to produce a complex image, but shouldn't we be reaching toward something more out of the ordinary, like our sci-fi writers have already visualized [fsnet.co.uk]? When William Gibson [williamgibsonbooks.com] wrote about virtual reality, Jaron Lanier [well.com] and his contemporaries said,
  • A 35 million pixel screen would rock for Half-Life 2. Where can I get me one? Looking at the picture, it's kind of like 3 monitors stuck together, so maybe I'll save some money and only get 1/3rd of the setup. How much can that cost? I mean, really.
    • Re:Big Screen! (Score:3, Informative)

      by dsouth ( 241949 )

      A 35 million pixel screen would rock for Half-Life 2. Where can I get me one? Looking at the picture, it's kind of like 3 monitors stuck together, so maybe I'll save some money and only get 1/3rd of the setup. How much can that cost? I mean, really.

      I know you're joking, but since I'm the hardware architect for the LLNL viz effort, I'll bite anyway. :-)

      Here's what you'll need at minimum:

      • A lot of display devices (monitors, projectors, whatever)
      • Sufficient video cards to drive the above (with new ca
    • A 35 million pixel screen would rock for Half-Life 2. Where can I get me one?

      Well, you could use projectors to get a seamless screen from XP's built-in multi-monitor capability. I believe the limit is 10 simultaneous screens. That provides for a 3x3 matrix plus an extra for controlling the damn thing. But you'll probably only get your hands on 1024x768 projectors (786k pixels), so 9 would amount to 7Mpixel.
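
      (Checking that math: 9 x 1024 x 768 = 7,077,888 pixels, call it 7.1Mpixel - about a fifth of EVEREST's 35 million.)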

      You'll probably have to wait on that 35Mpixel screen if you want borderless. Otherwise, go get yourse
  • by weeboo0104 ( 644849 ) on Monday December 13, 2004 @12:52PM (#11073102) Journal
    With Linux Clusters, Seeing Is Believing

    Does this mean that we don't have to just imagine a Beowulf cluster anymore?
  • Finally.... (Score:3, Funny)

    by ElvenMonkey ( 789317 ) on Monday December 13, 2004 @12:53PM (#11073111)
    A machine that can compile a Stage1 Gentoo install in a reasonable amount of time.
    • You would think so (Score:3, Informative)

      by jellomizer ( 103300 ) *
      Unless you change the settings so it is compiling multiple applications at the same time, the speed to install Stage 1 of Gentoo won't be much faster than on a 2- or maybe 4-CPU system. These supercomputers and clusters use a concept called parallel processing, where a task is broken up and the pieces are handled by many processors in parallel. Most applications are not designed to run in parallel. So unless you have a compiler that is designed with parallel processing in mind, the OS will give the compiling task to
      • by roxtar ( 795844 )
        I assume you haven't heard of this: distcc [samba.org]. It does improve speed during compiling. Gentoo can be compiled using distcc, and there is even a how-to on the Gentoo documentation page.
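
        And it takes very little setup. A minimal setup looks roughly like this (the hostnames and subnet are made up; check the distcc docs for the exact flags in your version):

        # on each helper node: start the distcc daemon, allowing the local net
        distccd --daemon --allow 192.168.1.0/24

        # on the build machine: list the helpers, then build through distcc
        export DISTCC_HOSTS="node1 node2 node3"
        make -j6 CC=distcc
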
        • No, I hadn't heard of distcc, but my first sentence covered any miscommunication on this topic: "Unless you change the settings..." Meaning, following the normal default method of installing. The defaults are with gcc.
  • by Anonymous Coward on Monday December 13, 2004 @12:57PM (#11073152)
    So, if I've got this straight, Slashdot drives the banner ad traffic, real journalists write the content, and all Roland has to do is rip off a few articles, then sit in the middle and collect the checks. How do I get a sweet gig like that?
  • 10 years or so from now, you'll have this much power in a little 1" x 1" box (probably priced around $100, too).
  • by roxtar ( 795844 ) on Monday December 13, 2004 @01:08PM (#11073233) Homepage Journal
    To reaffirm what the article said, building Linux clusters is very simple. In fact, certain distributions such as bccd [uni.edu] and cluster knoppix [bofh.be] exist specifically for that. Although configuring clustering software such as PVM, MPI, LAM, or Mosix wouldn't be a problem, I prefer something that has almost everything built into one package; that's why I like the above distros. In fact, I built a cluster at home (using BCCD) and used it to render images with povray [povray.org]. I used pvmpov [sourceforge.net] for the rendering-on-a-cluster part. Although there were only four machines, the speed difference was evident. And above all, making clusters is extremely cool and shows the paradigm shift towards parallel computing.
    • by LithiumX ( 717017 ) on Monday December 13, 2004 @01:20PM (#11073344)
      I do think clusters are going to be a dominant architecture for the next few decades, but I also think the current ultra-heavy emphasis on clusters is as much a function of asymptotic limitations as of the natural evolution of the technology. It's currently cheaper to build a cluster out of a whole mess of weaker processors than it is to develop a single ubercore. Going by previous history, though, I doubt that situation will last more than a decade.

      Computers were initially monolithic machines that effectively had a single core. By the 70's, the processing on many mainframes had branched out so that a single mainframe was often a number of separate systems integrated into a whole (though nothing on the level we see today). By the 80's the pendulum seemed to swing back to monolithic designs (standalone PCs, ubercomputer Crays), and it wasn't until the 90's that dual and quad processing became commonplace (though the technology had existed before).

      Eventually, someone will hit on a revolutionary new technology (sort of like how transistors, ICs, and microprocessors were revolutionary) that renders current VLSI systems obsolete (optical? quantum?), and the cost/power ratio will shift dramatically, making it more economical to go back to singular (and more expensive) powerful cores rather than cheap (but weaker) distributed cores.
      • But on the other hand, problems that require immense amounts of calculation will always exist, and I don't see how advances in VLSI or some other technology will eliminate that kind of problem. So what I actually believe is that, to some extent, yes, we may go back to singular cores, but imagine the power of those single cores together. In my opinion, even if new technology does arrive, clusters are here to stay.
        • It all depends on what form an advance takes.

          When VLSI hit the market, it became cheaper to have one ultrapowerful machine, compared to having a cluster of older IC-based hardware. You got more firepower for the money. That's not to say it wouldn't still pay to combine multiple Nth Generation machines, but a great deal of the cost advantage would be lost.

          Clusters exist in their current diversity because it is simply the cheapest and most effective way to create powerful supercomputers. If you have
  • very long article... (Score:4, Interesting)

    by veg_all ( 22581 ) on Monday December 13, 2004 @01:08PM (#11073238)
    So now Monsieur Piquepaille has been shamed by scornful posters [tinyurl.com] into including a link to the actual article (instead of harvesting page views), but he'd still really, really like you to click through to his page....
  • Really... (Score:5, Insightful)

    by grahamsz ( 150076 ) on Monday December 13, 2004 @01:09PM (#11073249) Homepage Journal
    Now that Linux superclusters have almost swallowed the high-end scientific computing market...

    While some simulations parallelize very well to cluster environments, there are still plenty of tasks that don't split up like that.

    The reason clusters make up a lot of the Top 500 list is that they are relatively cheap and you can make them faster by adding more nodes - whereas traditional supercomputers need to be designed from the ground up.
  • Maybe they are building cool Linux clusters but they can't be that smart. They have their mail addresses just sitting here on the site for spammers to harvest!
    • Maybe they are building cool Linux clusters but they can't be that smart. They have their mail addresses just sitting here on the site for spammers to harvest!

      They are running a secret project about the use of supercomputers to analyze spam. :-)
  • Leave some market share for the big guys.
  • Once you have your visualization cluster and have decided on the CPU, the interconnect, the OS, etc., you might ask what kind of application [paraview.org] you can run on it.

  • by saha ( 615847 ) on Monday December 13, 2004 @01:41PM (#11073539)
    Clusters have proven to be cost effective, but they do require more labor to optimize code to get it to work in that environment. It's easier to have the system and the compiler do the work for you on a single-image system. This article addresses those issues and concerns: single image shared vs distributed memory in large Linux systems [newsforge.com]
  • I keep forgetting about Roland Piquepaille, and I click on his damn "overview" link.

    Why does /. post these damn things from him? The guy is a shameless shill.

    There should be a highly visible disclaimer on every one of his posts: "This link goes to an external site that is NOT the article's original site, and this external site is unendorsed by Slashdot. This external site profits from traffic generated by clicking on this link."

    Someone needs to write a Firefox extension that filters any mention of his "ov
    • Yeah, and he even changed his URL. Maybe he was in too many spam blocklists. Does he spam other places too, or just Slashdot?
    • Someone needs to write a Firefox extension that filters any mention of his "overviews". Hmmmm....

      No firefox filter needed. Just add these lines to your HOSTS file ;)

      127.0.0.1 www.blogads.com
      127.0.0.1 blogads.com
      127.0.0.1 images.blogads.com

      Then Roland will stop getting his revenues. Ph33r d4 5145hd07 3ff3c7! MUAHAHAHAHAHA!!!
      • Right...the problem with that is it filters all ads delivered by blogads.com.

        I don't mind ads, especially on blogs that I support. I would want such sites to receive their ad revenue from my visits. I do mind what I believe are "sneaky" methods of getting traffic in order to get more revenue.

        At the very least, Roland's posts should say "My overview of this article is located here." Saying "this overview" instead is misleading...it leads the reader to believe that the original article's authors created so
  • by Sai Babu ( 827212 ) on Monday December 13, 2004 @01:58PM (#11073695) Homepage
    IMO, computer-aided visualization is overrated. Sure, it's good in a production environment, but the mental effort of visualization is a tremendous aid to imagination. There's no way to computerize epiphany.
  • I saw a demonstration of this a few months ago. It was a 3D simulation of a 10-kiloton nuclear bomb going off on a street corner in what looked like a major city. It was a very high-resolution rendering on a big widescreen display, and it was pretty scary. I've seen a few documentaries on this, but to see it in slow motion, in 3D, was just mind-boggling. In fact, I had a nightmare a few days later; I almost wish I hadn't seen it.

    Note: Feds, leave me alone, this was NOT a classified demonstration. Just
  • by LordMyren ( 15499 ) on Monday December 13, 2004 @03:46PM (#11074860) Homepage
    PCI-E has symmetric bandwidth. Current-generation graphics cards will undoubtedly not be able to take advantage of this feature (so much effort has gone into getting data to the graphics card that that's all they're optimized for), but in the long run this has some crucial implications.

    Namely, it allows for graphics cards to operate better in situations exactly like this; clustered applications. As it stands, the graphics card can crunch an enormous amount of data, but is extremely poor at sending it back to the CPU & system. It's optimized for screen dumping only.
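
    (Rough numbers from memory, not from the article: first-generation PCI-E x16 moves about 4GB/s in each direction, i.e. 250MB/s per lane per direction times 16 lanes, while AGP 8x moves about 2.1GB/s toward the card and far less coming back.)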

    Sony's Cell is going to be absolutely crucial as a tech demo for this foresighted technology. We're heading towards a more distributed computer architecture where various specialized units pipe data between each other.

    In summation,
    It's my hope that eventually graphics cards will catch up and perform better bi-directionally. After that, we've got to wait another 5 years for PCI-E implementations to catch up and perform better switching (vis-a-vis multiple fully-switched x16 busses). We are moving away from the CPU for high-performance computing; the CPU currently performs both control and data processing. Graphics cards are just the first wave of the distributed-architecture phenomenon, and Cell will be a light-year jump towards the future of computing in its intricate levels of hardware reconfigurability. There's a good powerpoint on the patents behind Cell here [unc.edu].

    Ultimately this will lead towards the tearing down of the computer as a monolithic device, and a rethinking of what exactly the roles of the network and the OS are. Cue the exokernel and DragonFly BSD debates.
    • followup question:

      many of these emerging distributed technologies rely upon increased switching capabilities. ps3 has some astronomical amount of internal bandwidth**. if cpu's actually are getting significantly harder to make faster, is there any correlation to the difficulty in making cheaper faster switching? i'm a computer engineer, i know a reasonable amount about the difficulties in scaling cpu performance. but from a fabrication standpoint, i'm really not familiar with the challenges of enhanced
  • by totallygeek ( 263191 ) <sellis@totallygeek.com> on Monday December 13, 2004 @05:32PM (#11075961) Homepage
    Why does the scientific community keep using Linux? Everyone knows now that Microsoft [googlesyndication.com] has a lower TCO and is better at everything [slashdot.org].

  • I have a rendergarden (not quite a renderfarm ;) and I've used POV-Ray to make visualizations and animations of my supercell model data. See the November 2004 Linux Journal (cover plus article) for what I did. I got POV-Ray, which, note, is "free" (with restrictions, especially on the latest version), and got it to recognize my model data format natively (using the source, of course). Then I can fire up 14 of my nodes, all NFS-mounted to a terabyte RAID array, with a python script (using pyMPI).
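
    The pattern is simple enough to sketch in a comment. This is a rough illustration, not my actual script - it uses mpi4py rather than pyMPI, and the scene file, output path, and frame count are made up:

      # sketch: scatter POV-Ray animation frames across MPI ranks
      # run as: mpirun -np 14 python render_frames.py
      import subprocess
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      FRAMES = 280                            # made-up frame count

      # each rank renders every size-th frame: rank, rank+size, ...
      for frame in range(rank, FRAMES, size):
          clock = frame / float(FRAMES - 1)   # POV-Ray's clock runs 0..1
          subprocess.call([
              "povray", "+Iscene.pov",            # hypothetical scene file
              "+O/raid/out/f%04d.png" % frame,    # hypothetical output path
              "+FN",                              # PNG output
              "+W640", "+H480",
              "+K%f" % clock,                     # clock value for this frame
          ])

      comm.Barrier()                          # all nodes done before reporting
      if rank == 0:
          print("rendered %d frames" % FRAMES)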
  • Now I suddenly understand who bought the last available NVidia 6800 Ultra GPUs!

    On 07 October 2004:
    "Nvidia 6800 Ultra as rare as hens' Doc Marten boots"
    http://www.theinquirer.net/?article=18932 [theinquirer.net]

    and on 05 December 2004: still no 6800 Ultra available!
    "6800 Ultra hardly available in EU $740 for the card that you can't buy"
    http://www.theinquirer.net/?article=20055 [theinquirer.net]

    Robert
