New Top500 List Released at Supercomputing '06 217

Guybrush_T writes "Today the 27th edition of the Top 500 list of the world's fastest supercomputers was released at ISC 2006. IBM's BlueGene/L remains the world's fastest computer at 280.6 TFlop/s. There is no new US system in the top 10 this year; the new entries all come from Europe and Japan. The French cluster at CEA (the French NNSA equivalent) is number 5 with 42.9 TFlop/s. The Earth Simulator (no. 10) is no longer the largest system in Japan, since the GSIC Center built a 38.2 TFlop/s cluster, reaching 7th place. The German cluster at Juelich is number 8 with 37.3 TFlop/s. The full list, and the previous 26 lists, are available on the Top500.org site."
  • by harris s newman ( 714436 ) on Wednesday June 28, 2006 @01:22PM (#15622859)
    It's surprising that no Microsoft systems are listed....
    • It's surprising that no Microsoft systems are listed....

      Well, they only published the requirements for Vista a few weeks ago; I'm sure they'll do better next year.

    • by Anonymous Coward
      I would find that surprising too, except that there are 3 Microsoft systems on the list (5 OS X systems too). I actually read the page, though.
      • by Kadin2048 ( 468275 ) <slashdot.kadin@xo[ ]net ['xy.' in gap]> on Wednesday June 28, 2006 @01:46PM (#15623056) Homepage Journal
        Just to follow up, you can get OS information here: http://www.top500.org/stats/27/osfam/ [top500.org] (by family)

        OS (# systems) (Percent)
        Linux 367 73.40%
        Windows 2 0.40%
        Unix 98 19.60%
        BSD 4 0.80%
        Mixed 24 4.80%
        Mac OS 5 1.00%
        Totals 500 100%


        Alternatively, there's a more refined breakdown listing them by Operating System type and version [top500.org]. Oddly, "Linux" is listed both as an operating system family and as a distinct flavor/distro ... I can only assume that the systems using "Linux" as the particular operating system are using a custom-made distro, instead of one of the commercial ones (which are listed separately on the detailed chart). Unless they just failed to report one in particular.

        As for the Windows-based systems, there were one each for Windows 2003 Server and Windows Compute Cluster Server 2003.
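As a sanity check, the percentage column follows directly from the system counts above; a quick sketch (assuming the counts as posted) recomputes the shares:

```python
# Recompute the OS-family shares from the counts quoted above.
counts = {"Linux": 367, "Windows": 2, "Unix": 98, "BSD": 4, "Mixed": 24, "Mac OS": 5}
total = sum(counts.values())
shares = {name: 100 * n / total for name, n in counts.items()}
print(total)            # -> 500
print(shares["Linux"])  # -> 73.4
```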
      • Also, what about massively distributed XP "supercomputers" like SETI@home? We just had a slashdot article discussing how the SETI version dominates the other spare-cycle-using background programs. If you summed up all the cycles in use at a given point in time, how would this stack up to the supercomputers on the list?

        I realize internet-linked PCs are a different beast, but given the wide range of architectures on the top 500 supercomputer list, is it such a stretch to consider this a "supercomputer"?
        • by HermanAB ( 661181 ) on Wednesday June 28, 2006 @01:55PM (#15623132)
          Well, now that you mention it, *nobody* beats MS in distributed botnets...
          • The real question is how much work a botnet with several thousand nodes is doing. I guess you'd have to measure it in SMS/S (Spam Messages Sent per Second) or DOSC (Denial of Service Capability). And I doubt many countries will want to take ownership of these systems.
        • by awing0 ( 545366 ) <adam&badtech,org> on Wednesday June 28, 2006 @02:04PM (#15623188) Homepage
          With over 900,000 computers in the system, SETI@home has the ability to compute over 250 TFLOPS (as of April 17, 2006).

          http://en.wikipedia.org/wiki/Seti@home [wikipedia.org]

          IBM's Blue Gene is faster (and more flexible). Big networks of computers like SETI's are good at crunching static radio telescope data or brute force RC5 cracking. When it comes to most real world problems, the nodes must communicate and share data, which over the internet makes it far too slow. Real supercomputers do not use any type of networking between nodes, they have a shared memory bus.
          • "Shared memory bus" is a pretty subjective measure. Since the mid-90's no supercomputers have used true flat-memory shared buses. Instead they are connected by some sort of switched point-to-point network. In many of the MPP machines, these networks are built into the memory controllers on each node (Blue Gene, Crays, Earth Simulator, etc.). The bandwidth and latency of these networks is orders of magnitude better than GigE, but it's still a network of sorts.
          • by after fallout ( 732762 ) on Wednesday June 28, 2006 @03:25PM (#15623708)
            Real supercomputers do not use any type of networking between nodes, they have a shared memory bus.
            Don't say that, all it does is show how little you know.

            There are 2 common types of interfaces between nodes on a supercomputer: shared memory and message passing.

            Shared memory is where all the nodes can access memory over some sort of network. For communication to happen, all the two nodes need to do is read and write to the same location in memory. There is little talk about the network protocol used at this level because for the most part it is an emulation of OSI layer 2 (as if all you are doing is ordering the hardware around).
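A toy sketch of that idea, using Python threads as stand-ins for nodes and a plain dict as the "shared location" (real machines do this over a cache-coherent interconnect, not a Python object):

```python
import threading

# "Node A" writes a value into a shared location, then raises a flag;
# "node B" polls the flag and reads the same location.  That is the whole
# shared-memory protocol: communication is just reads and writes.
shared = {"flag": False, "value": None}
received = []

def node_a():
    shared["value"] = 42       # write the payload first
    shared["flag"] = True      # then signal "data ready"

def node_b():
    while not shared["flag"]:  # spin until A has written
        pass
    received.append(shared["value"])

t_b = threading.Thread(target=node_b)
t_a = threading.Thread(target=node_a)
t_b.start(); t_a.start()
t_a.join(); t_b.join()
print(received[0])  # -> 42
```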

            Message passing would best be described in terms of layer 7. Communication occurs between 2 nodes via messages that are sent back and forth (hence the name). The most common message passing scheme is MPI. In MPI, there is a concept of a sender and a receiver. The receiver calls MPI_Recv and the sender calls MPI_Send, and a message is sent from send to recv. You could almost think of this as an HTTP communication; the server is listening, the client sends information, the server sends back. Except in MPI the receiver must be calling MPI_Recv and waiting for a send from a specific sender, and the sender must call MPI_Send to send the information to the receiver (there really [well, sorta] isn't a concept of a timeout). In my experience, this makes MPI (I use MPICH2) difficult to debug: if A calls send to B and B calls send to A at the same time, your program blows up (often with very little useful information).
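A minimal sketch of that send/recv rendezvous, with Python queues standing in for the interconnect (this illustrates the pattern only, it is not real MPI; in particular a Queue's put never blocks, so it cannot reproduce the send/send deadlock described above):

```python
import queue
import threading

# One inbox per rank; mpi_send drops a message into the destination's
# inbox, mpi_recv blocks until a message arrives -- mimicking the
# blocking MPI_Send/MPI_Recv pairing described in the comment.
inbox = {0: queue.Queue(), 1: queue.Queue()}
results = []

def mpi_send(dest, msg):
    inbox[dest].put(msg)

def mpi_recv(rank):
    return inbox[rank].get()  # blocks, like MPI_Recv waiting on a sender

def rank0():                  # the sender
    mpi_send(1, "hello from rank 0")

def rank1():                  # the receiver
    results.append(mpi_recv(1))

t1 = threading.Thread(target=rank1)
t0 = threading.Thread(target=rank0)
t1.start(); t0.start()
t0.join(); t1.join()
print(results[0])  # -> hello from rank 0
```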

            On the cluster I do my work on, the implementation of MPI sends TCP/IP packets over ethernet (much like 256 of the systems on this top500 list). The libraries could be written to do the work over Myrinet or any other network.

            For future reference please learn some factual information before you go spouting bull. If you follow this [top500.org] link, and choose interconnect family, you would find that most of the supercomputers in the top500 list are using some standard network interconnect.
  • by Anonymous Coward
    To keep up with the rate that humans make mistakes.
  • Damn... (Score:2, Funny)

    by ReidMaynard ( 161608 )
    Spanky's Cluster'O'Porn just missed the top 500 :-(
  • From 11 to 451... (Score:5, Interesting)

    by engagebot ( 941678 ) on Wednesday June 28, 2006 @01:27PM (#15622901)
    Dang, when our SuperMike was built (Louisiana State University), we were 11th on the list. A quick look now and we're at 451.

    I feel old... ;0)
  • by rritterson ( 588983 ) * on Wednesday June 28, 2006 @01:28PM (#15622907)
    How well does this represent the real top 500?

    If you look at the list, several of the computers/clusters are known simply as "Classified". It makes me wonder if those at the top really represent the top 10 most powerful supercomputers out there. I'm willing to bet the US government, for one, has a couple of military-use supercomputers up there that they aren't even willing to acknowledge the existence of.

    At the other end of the spectrum, how many smaller clusters aren't on the list simply because the administrator doesn't have time to shut the entire thing down to run a LINPACK benchmark? The cluster I/we use [ucsf.edu] would easily make it into the top 450, and maybe higher, but our research is deemed more important than the glory that comes with being on the list.
    • US government, for one, has a couple of military use supercomputers up there that they aren't even willing to acknowledge the existence of.

      Just so long as one of them isn't named SKYNET, I'm content...
    • by flaming-opus ( 8186 ) on Wednesday June 28, 2006 @02:01PM (#15623170)
      I would say it's unlikely that the classified computers are in the top 10, and here's why: the top500 list is constructed using linpack to measure floating point performance of highly parallel computation. Much of the work done by intelligence agencies is data-mining. It's integer tasks that are probably I/O bound rather than cpu-bound. If there were a top500 list of high performance storage systems, I bet the classified systems would own the top of the list, just not for raw fp-compute power.

      Take, as an example, the Cray MTA. It's a product that's not even mentioned in their products page on their website. Yet, if you surf the net very carefully, you'll find out they're building the next version for their single customer of the product-line: the NSA. Even at the maximum configuration, the machine wouldn't make the top500 list, but it has features that make it uniquely suited to a few very peculiar application kernels. (single-virtual-cycle access to any memory within the distributed system)

      Sure, the Department of Defense uses supercomputers to predict the weather, improve weapons systems and run simulations, but these are probably not done on systems we don't know exist. That sort of stuff is done at AHPCRC or ERDC, or at Boeing/Lockheed Martin/Raytheon/etc. All of these sites have huge HPC resources, just not the hugest of the huge.
      • Much of the work done by intelligence agencies is data-mining.

        Yeah, but is the NSA sitting on a supercomputer as a "last resort" for encryption problems? I am sure the NSA definitely prefers backdoors and easy mathematical solutions to cryptography problems, but I would not put it past them to have a few supercomputers in case those methods do not work.
        • Yeah, but is the NSA sitting on a supercomputer as a "last resort" for encryption problems?

          Not supercomputers in the sense they are being discussed here. The top500 list is computers that excel at floating-point operations. I have never seen an encryption method which uses floating point at all. They all use integer operations. DES, RSA, AES, MD5, SHA-1, etc. All 100% integer. In most cases cracking encryption algorithms really boils down to some sort of a search algorithm.
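A toy illustration of the point (a hypothetical 2-letter lowercase "key" space, chosen only to keep it fast): cracking reduces to an integer search, hashing and comparing candidates, with no floating point anywhere.

```python
import hashlib

# Brute-force search for the preimage of a SHA-1 digest over a tiny
# keyspace -- hashing, comparing, looping: all integer work.
target = hashlib.sha1(b"qz").hexdigest()

def brute_force(target_hex):
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    for a in alphabet:
        for b in alphabet:
            candidate = (a + b).encode()
            if hashlib.sha1(candidate).hexdigest() == target_hex:
                return candidate.decode()
    return None

print(brute_force(target))  # -> qz
```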
    • At the other end of the spectrum, how many smaller clusters aren't on the list simply because the administrator doesn't have time to shut the entire thing down to run a LINPACK benchmark?

      Good point. And what about grid computing? Grids blow away supercomputers for processing power.
    • If your research is too important to risk getting a spot on this list, then you're probably on the right track as far as research goes - you're there to learn things, create formulas to follow the knowledge you seek, and to serve as an example for everyone else. Who honestly gives a fuck about being the most bad-ass supercomputer? Maybe some of the overclock zealots out there, but I'll bet 95-98% of slashdot doesn't give a fuck - they just want their shit to work and perform as expected.
  • Google (Score:5, Interesting)

    by celardore ( 844933 ) on Wednesday June 28, 2006 @01:28PM (#15622913)
    There doesn't seem to be any mention of the GoogleNet. While it may not be used for figuring out sums and what-not, it does have an estimated 126 teraflops of computing power [tnl.net]. I'd say that's notable. I bet at least half those teraflops are devoted to advertising as well.
    • I'm not sure that the GoogleNet counts as a single "computer." While we can argue semantics about something like BlueGene, it at least can be directed to apply all of its resources to a single problem (whether or not they actually do this in practice, I'm not sure). If the GoogleNet can be used as a supercomputer, then perhaps it should be on the list; but my understanding is that if the system can't be applied to any single arbitrary (properly programmed) task, then it's not enough of a unique entity to make the list.
    • I was also curious [slashdot.org]. But from reading other posts ... it probably has to do with the rules excluding spread out beasts like Googleplex.
    • Re:Google (Score:2, Insightful)

      by zlogic ( 892404 )
      That's not a supercomputer, that's a really large computing grid. Supercomputers are needed for tasks that require insane amounts of RAM and a really fast system bus. When you have many computers, the data transfer speed between computers is limited by the network bandwidth, and any network is slow compared to your CPU-memory bus. That may be OK for tasks like searching but not acceptable for physics simulation/capture. Sensors connected to a modern collider generate tens of gigabytes of data per second.
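Back-of-envelope arithmetic makes the gap concrete (assumed illustrative numbers: Gigabit Ethernet at ~125 MB/s versus a mid-2000s memory bus at ~6.4 GB/s, moving 20 GB of sensor data):

```python
# Time to move one second's worth of collider data (assume 20 GB)
# over two very different links.
data_bytes = 20e9
gige_bytes_per_s = 1e9 / 8   # Gigabit Ethernet, ~125 MB/s
membus_bytes_per_s = 6.4e9   # a mid-2000s memory bus

print(round(data_bytes / gige_bytes_per_s))       # -> 160 (seconds over GigE)
print(round(data_bytes / membus_bytes_per_s, 1))  # -> 3.1 (seconds over the bus)
```

So over GigE the nodes fall 160 seconds behind for every second of data, while the memory bus at least comes within striking distance.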
    • Re:Google (Score:3, Insightful)

      by Rhys ( 96510 )
      Google's full computing power is no more a supercomputer than distributed.net or seti@home is. Yes, both supercomputers and seti@home are parallel computation, but they are very different beasts.

      Does google have a supercomputer? Maybe. I'm not actually clear what use a "traditional" supercomputer would be to them. For one thing, "disk" IO is generally not the forte of a supercomputer, at least compared to processing power.
    • Considering Google's ads are text-based and don't require any encode/decode process to display plain text and HTML links (as opposed to JPEG and animated GIF), I'd say it uses *FAR* less than you estimate. I'll bet more of that is spent on Google Video than on any other single Google factor; streaming live video must take a shitload of resources, especially when the video has a good chance of being viewed by more than one person at a time, at any given point through the video.
    • A recent Wired story about their twentieth-something server farm in Oregon (near cheap electricity) has them at about 450K blades. Assuming a mix of old and new commodity disks averaging 200GB per blade, that gives close to 100 petabytes. Plus, Microsoft was recently blathering about 800K-server farms, which hints at what its estimate of a "beat-Google" number might be.
  • by ackthpt ( 218170 ) * on Wednesday June 28, 2006 @01:33PM (#15622947) Homepage Journal

    Shit! I can remember when processors had that many transistors!

    hello, olde programmers home, i'm enquiring for a vacancy...

    • by Dadoo ( 899435 )
      I can remember when processors had that many transistors!

      You know, I really like that metric, especially when you consider each of those processors probably has somewhere in the neighborhood of 100 million transistors.

      Don't feel old, though. I cut my programming teeth on a processor with only 3500 transistors (6502). The transistors were probably so large, you didn't even need a clean room to manufacture it. :-)
    • I can remember when processors had that many transistors!

      And I can remember when computers had far fewer vacuum-tubes in them. And I wonder why they call them fastest supercomputers. Today's calculator has more processing power than last decade's mainframes. Why not just call them today's fastest computers? In 5 years, today's 'supercomputers' will look like jokes.
  • by Anonymous Coward on Wednesday June 28, 2006 @01:34PM (#15622961)
    What about the computer that processes Bill Gates' IRS filing?
  • Rmax vs Nmax (Score:4, Interesting)

    by G3ckoG33k ( 647276 ) on Wednesday June 28, 2006 @01:35PM (#15622970)
    Comparing the Rmax and Nmax values, it seems the list would look quite different if sorted on Nmax instead of Rmax. Can someone explain the difference in plain English? I didn't understand their explanation. Thanks! :)
    • Re:Rmax vs Nmax (Score:2, Informative)

      by SSCGWLB ( 956147 )
      Rmax and Nmax are two measures of the capability of a supercomputer.

      The first, Rmax, is the LINPACK benchmark. The LINPACK benchmark is a measure of floating point operations per second for the cluster. They usually include the theoretical value ((Max FLOPs for one CPU) * (number of CPUs) == Rpeak) along with the actual. Obviously, theoretical values will always be larger than actual due to wasted CPU cycles.

      Nmax is the size of the problem (i.e. the dimension of the solved linear system).

      So, Rmax is the measured performance, and Nmax is the problem size used to achieve it.
    • Re:Rmax vs Nmax (Score:5, Informative)

      by Junta ( 36770 ) on Wednesday June 28, 2006 @02:23PM (#15623306)
      Rmax represents the maximum measured FLOPs achieved as a result of an xhpl run.

      Nmax represents the problem size. Nmax generally is aimed to be a problem that consumes as much memory as possible without swapping.

      Rpeak is the theoretical max FLOPs possible according to the processor used. For example, a PPC chip is theoretically capable of 4 FLOPs per clock, so multiply the clock by the number of cores in the cluster. x86_64 is theoretically capable of 2 FLOPs per clock, so multiply cores by two. Note that AMD clock for clock doesn't do any better than Intel in *this* particular benchmark, so Intel clusters inherently can climb this list better, despite poor memory performance and other factors that make them less useful in a general supercomputing sense. Itanium can achieve better floating point (I believe 8 FLOPs per clock).
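The Rpeak arithmetic described above is just cores x clock x FLOPs-per-clock; a sketch (the per-clock figures are the ones quoted in the comment, and the 1024-core/2.8 GHz cluster is hypothetical, not a real machine on the list):

```python
# Theoretical peak in GFLOPs: cores * clock (GHz) * FLOPs per clock.
def rpeak_gflops(cores, clock_ghz, flops_per_clock):
    return cores * clock_ghz * flops_per_clock

# A hypothetical 1024-core x86_64 cluster at 2.8 GHz (2 FLOPs/clock):
print(round(rpeak_gflops(1024, 2.8, 2), 1))  # -> 5734.4, i.e. ~5.7 TFLOPs
```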

      And for anyone seeking to compare Rpeak/Rmax numbers with published Cell figures, keep in mind that game consoles (and by extension Cell) brag about their single precision (32-bit) floating point performance, whereas this list only deals with double precision (64-bit) numbers. Cell is actually nothing special at getting top500-relevant benchmark results.

      Many people feel this very specific benchmark is a poor indicator of the overall effectiveness of a cluster, and consider hpcc (which includes hpl as a subset) to be a better holistic method to evaluate the value of a cluster.
      • Thanks, both SSCGWLB and Junta!

        Comparing no 30 and 31 on the list, the Cray XT3 and IBM BlueGene/L Prototype, shows the difference clearly:

        Cray XT3 - 2652 CPUs, Rmax 11810, Nmax 1158660
        IBM BlueGene - 8192 CPUs, Rmax 11680, Nmax 331775

        My interpretation: the Cray manages to produce a similar Rmax with four times fewer CPUs thanks to its better bandwidth (as indicated by the much higher Nmax). Is that a correct interpretation?
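Dividing the quoted numbers out supports that reading; a quick check (Rmax in GFLOPs and CPU counts as listed above):

```python
# Per-CPU Rmax for the two systems quoted above.
cray_per_cpu = 11810 / 2652  # ~4.45 GFLOPs per CPU
bg_per_cpu = 11680 / 8192    # ~1.43 GFLOPs per CPU
print(round(cray_per_cpu / bg_per_cpu, 1))  # -> 3.1, so ~3x more work per CPU
```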
  • US is doing badly (Score:5, Insightful)

    by twfry ( 266215 ) on Wednesday June 28, 2006 @01:36PM (#15622984)
    The editors' comment that there are no new top-10 US-based systems is an odd one. The US has 6 out of the top 10. That's hardly doing poorly.
  • I see water cooling rigs take 2GHz CPUs past 4GHz all the time. Why not use this for these machines? Perhaps a motherboard that could be bathed in cooling fluid...
    • Because, maybe just maybe, they want accurate results?
    • Because quite a few of them do use liquid cooling, but they don't order hobbyist solutions. Many Crays still use 3M Fluorinert [wikipedia.org], which doesn't damage sensitive electronics when you spray it right on a processor.
      • Doesn't Fluorinert have trouble standing up to extreme cooling conditions? I seem to remember an overclocking article having trouble with it because it froze. Although I think that was using liquid nitrogen to cool it, but still.
    • by Rhys ( 96510 ) on Wednesday June 28, 2006 @03:04PM (#15623559) Homepage
      Given that clusters these days are made from commodity components (Xserve G5s, for instance) and how large clusters are these days, you end up with a pretty astounding failure rate. We lose roughly two pieces of hardware (in order of most to least common: memory, cpu, motherboard, disk, power supply) a week, and our cluster (Turing, 640 nodes) is fairly small. We aren't even into the late-in-life crazy-disk-failure mode that most machines get at 3-5 years old. Think about the logistical nightmare if we had to try to "drain" a system of coolant before pulling it out to service it.

      Plus then you'd have to have all that (very custom) cooling equipment, pumps, etc. You'd have to watch for leaks closely, which is also a problem with air cooling and the refrigerant lines, but those have a lot less surface area of pipe/connectors to go wrong: a loop per rack for rack-mounted cooling, not a loop per machine.

      Plus, as other posters have said: we'd like accurate numbers.
  • Googleplex? (Score:3, Interesting)

    by neonprimetime ( 528653 ) on Wednesday June 28, 2006 @01:39PM (#15623013) Homepage
    How does Googleplex [slashdot.org] compare with the #'s in this top 500 list? (# Processors, max, peak, etc.)
    • Googleplex is probably larger in terms of processors but not faster as its function is completely different. It needs to search and index web pages efficiently not crunch through lots of numbers.
  • Blue Gene? (Score:3, Funny)

    by theheff ( 894014 ) on Wednesday June 28, 2006 @01:43PM (#15623033)
    You're telling me that the fastest computer in the world is a pair of pants??
  • by davonshire ( 94424 ) on Wednesday June 28, 2006 @01:46PM (#15623062)
    Just a passing thought when you peek at the bottom of the list: you see a 2.8GHz system with 1024 processors or some such.

    I remember working on repairing a Univac computer when I was in the Navy, and how amazing it sounded that Cray had produced a supercomputer that could do 800 million operations a second (circa 1980 or so). You could have one of these computers for, I think it was, 13 million dollars. And how fabulous that the power supply was actually under the circular bench, so you could sit on your investment.

    Consider the processing power we have nowadays on our desks: a lowly 3GHz P4 laptop with 2GB of dynamic RAM and 60GB of hard drive storage.

    I've yet to see a pairing-up of our single or dual desktop computers today with where they would sit back in the supercomputer days of old. If anyone has a link or info, I'd love to hear about it.

    Thanks,
      Nostalgia is the romance of historic madness.

    • You could just compute backwards following Moore's Law. Or compute forward: unlike the misinterpretation of "speed doubles", the "density doubles" formulation would lead to things like the cluster I manage (Turing, now #114, originally #66) sitting on your desk as a workstation in about 15 years.

      It wouldn't be very useful for running today's applications, since most are not heavily threaded/parallelized, but that gives you some idea of the speed of change.
  • by saleenS281 ( 859657 ) on Wednesday June 28, 2006 @01:47PM (#15623069) Homepage
    But does it run Windows Cluster Edition? (Bet you didn't see THAT one coming)
    • Yes, actually... at least, one of them does (Windows Cluster Edition 2003, to be exact). Someone posted the OS for the computers somewhere up there in the comments =). Not surprisingly, nearly 75% of them are Linux...

      I do have to admit that I'm surprised that only 2 were Windows, though. DISCLAIMER: I'm not a Windows fan.

      And just so I can say it... IMAGINE A BEOWULF CLUSTER OF THESE!

  • I wonder (Score:4, Interesting)

    by Itninja ( 937614 ) on Wednesday June 28, 2006 @01:52PM (#15623108) Homepage
    I wonder where my old Packard Bell 486/sx 33 would fall in this list. Which makes me wonder if there's a 'bottom 500' list somewhere. I would love to see a list of the slowest computer still in use.
    • I bet there's more than 500 Commodore 64's still running @ 1.02MHz

      And a lot of even older stuff. But then again, there's probably newer stuff with slow processors, like coffee machines and whatnot. (Although a modern coffee machine can even have a 66MHz processor and telnet capability...)
  • Even comparing simpler things, like shoes or knives, cannot be reduced to a single measurement. Microwave ovens and air-conditioners are already far more complex and come with huge vectors of parameters to compare.

    Can a meaningful comparison be made of computer systems based on just one number? N TFlop/s vs. M TFlop/s? I don't think so...

  • by HermanAB ( 661181 ) on Wednesday June 28, 2006 @01:59PM (#15623153)
    A typical MS Windows botnet will outperform any of these machines on the SOPS (SPAM operations per second) benchmark...
  • by Doc Ruby ( 173196 ) on Wednesday June 28, 2006 @02:16PM (#15623268) Homepage Journal
    Isn't the Slashdot DDoS network the most powerful "computer" in the world?
  • What about... (Score:2, Interesting)

    by Cr0t ( 963724 )
    Hmmm... where is the NSA's Super Computer?
  • by Darth Cider ( 320236 ) on Wednesday June 28, 2006 @02:34PM (#15623373)
    Notice that #21 and #28 use Apple XServes still running dual G5 processors. The Virginia Tech system, #28, has fallen only 8 places, from #20 last year.

    It's too bad this list doesn't mention cost. When Virginia Tech built its first cluster, the big news was how absurdly inexpensive it was in relation to other systems. It would be interesting to learn if that still holds true.
    • The higher you are in the list, the slower you are to drop. Turing (also Xserve G5s) has gone down from 66 to 114 in a year. It'd be a smaller drop if we'd done a full-cluster run (as opposed to a 4/5ths-cluster run), but that'd require big, expensive Myrinet equipment we don't have.
  • by peter303 ( 12292 ) on Wednesday June 28, 2006 @02:36PM (#15623387)
    The speed doubling time is still about 18 months (== 10x in five years). Two more doublings from the 2005-2006 figure of 280 TFlops puts us around a petaflop in 2008-2009. It's a version of Moore's Law for supercomputing. Though processor speed hasn't been gaining as fast in recent years, improved clustering technology and software seem to be compensating.

    "Exaflops in 2020!"
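The projection is simple compounding; a sketch (assuming the 18-month doubling time the comment cites, applied naively from the 2006 figure):

```python
# Project list-topping performance forward from 280.6 TFLOPs (2006),
# doubling every 1.5 years.
def projected_tflops(start_tflops, years, doubling_years=1.5):
    return start_tflops * 2 ** (years / doubling_years)

print(round(projected_tflops(280.6, 3)))   # ~1122 TFLOPs by 2009: about a petaflop
print(round(projected_tflops(280.6, 14)))  # ~181000 TFLOPs by 2020: short of an exaflop
```

On this naive extrapolation, 2020 lands closer to 0.2 exaflops than 1, so the exclamation is optimistic unless the doubling rate picks up.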
    • Oak Ridge National Laboratories has been talking about upgrading their unclassified cluster to a petaflop within that timeframe. Right now they're building the first 100 teraflops or so. Some of the specs are ridiculous: at full load, the cluster is planned to take several million dollars in electricity costs. And Oak Ridge gets cheap, cheap, cheap electricity (the site was used in the past to run enrichment, which is energy-intensive).
  • I noticed that the two XServe systems on the list got bumped down.

    Number 21: MACH5 (Apple XServe, 2.0 GHz, Myrinet) at COLSA
    Number 28: System X (1100 Dual 2.3 GHz Apple XServe/Mellanox Infiniband 4X/Cisco GigE) at Virginia Tech

    MACH5 was number 15 back in 2005, and System X was ranked at number 7 back in 2004.

    Someone needs to make a new, huge XServe cluster... but maybe wait until there are Intel XServes.

  • The list was released at the International Super Computing (ISC) conference, not the Super Computing conference (SC). SC'06 doesn't happen until October or November.
  • The SC06 conference is not until November. This was from the International Supercomputing Conference (ISC2006) in Dresden, Germany.
  • by BrianWCarver ( 569070 ) on Wednesday June 28, 2006 @03:30PM (#15623746) Homepage
    Individuals contributing their spare processor cycles via BOINC [berkeley.edu] are currently producing over 380 TeraFLOPS [boincstats.com] putting them clearly in first place (if such distributed systems were counted).

    SETI@Home [berkeley.edu] is now operated exclusively through BOINC and it alone is doing over 167 TeraFLOPS [boincstats.com] right now, putting the SETI@Home network in second place, only behind BlueGene/L (if such distributed systems were counted).

    You can contribute your spare processor cycles too by downloading the BOINC client [berkeley.edu] and attaching to a cool project such as Rosetta@Home [bakerlab.org] which folds proteins as part of an effort to cure human diseases. Join the biggest "supercomputer" today!
  • Supercomputing '06 was awesome. They threw this foam party... and well, 12 people died of electrocution, but it was still a great time. At one point, IBM's Deep Blue took off its chassis... you should've been there.
