IBM Takes #1 w/ASCI White 175

mcryptic writes "Cnet News has this story about how IBM now tops the top 500 list with the new ASCI White supercomputer. The machine has 8,192 CPUs, weighs 106 tons and takes up two basketball courts' worth of floor space." And it's for Seti@home...er...no.
  • Well, considering the fact that a beowulf cluster of these would require about 32 basketball courts of space, I don't really....

    Aww screw it, who am I kidding? Aye.



    54% Slashdot Pure
  • > I move that the following comments be banned from this board:
    1.imagine a beowulf cluster of these.
    ...
    All in favor say "aye"


    If we all joined in in saying "aye", wouldn't that be a Beowulf cluster of ayes?
  • asci white IS PEOPLE! IT'S PEOPLE! aaarghhh!
  • B-451 used to be the unclassified National Energy Research Computer Center. It has been home to a lot of neat technology over the years. It started out with a CDC-7600 and a CDC-6400; then Cray 1 Serial #6 was added. That machine had 500K 64-bit words of memory and 16 CDC DD-14 disk drives (300 MB each!). Cray 1 Serial #33 was added (1 million words of memory!) and later an XMP, which was a multiprocessor machine.

    The center was home to Serial #1 Cray 2, nicknamed Bubbles. That machine was delivered on a Monday morning and was able to run the Livermore-written operating system (CTSS, the Cray Time Sharing System) for a few minutes on Friday - till it crashed. The machine was in full production within a few weeks of delivery, quite an achievement for any first-off supercomputer. The NERSC engineering crew had built a hardware simulator of the Cray 2 and given the OS/compiler/library guys months of time to work on debugging the system and supporting codes. The operating system was written in a variant of Fortran and occupied about 2-3 percent of the CPU resources.

    NERSC was built as a triad: a group of supercomputers, a large hierarchical file storage system tightly coupled to those systems, and a high-speed (for the day) network connecting a nationwide user base to those resources. NERSC (actually its predecessor, the National Magnetic Fusion Energy Computer Center, NMFECC) was believed to be the first ever attempt to provide supercomputer access to a geographically distributed user community.

    Later machines added to NERSC included the Cray YMP-C90 and the Cray T-3D, a particularly interesting "bridge" machine which provided users with both the then-standard Cray vector processor architecture and a torus-connected array of 128 Alpha processor nodes. This gave a user the ability to start with a basically unmodified code which had run on a Cray computer and then analyze the section of code which took the most time in the run. The programmer could then work at converting that section to run in the toroidal array, hopefully enhancing performance. The machine enabled programmers to learn the new parallel architecture with baby steps. NERSC was moved to Lawrence Berkeley National Lab in 1995-1996; they are still on-line at http://www.nersc.gov [nersc.gov].

    Zoot

  • Here are some interesting figures:
    rank, year, Rmax/proc, #proc, manufacturer
     1  2000  0.602  8192  IBM
     2  1999  0.247  9632  Intel
     3  1999  0.369  5808  IBM
     4  1998  0.261  6144  SGI
     5  2000  1.061  1336  IBM
     6  2000  1.068  1104  IBM
     7  2000  9.241   112  Hitachi
     8  2000  0.806  1152  IBM
     9  2000  9.170   100  Hitachi
    10  1998  0.823  1084  Cray Inc.

    This is kind of interesting. We can see how well these systems scale at the high end, particularly looking at numbers 1, 5 & 6, which all use similar IBM processors. We can also guess that the individual processors in the Hitachi machines, while faster, are probably not the full 9-15 times as fast as the IBM processors that the raw numbers suggest, since per-processor efficiency drops as processor counts climb. I'm a bit surprised at this; I expected it to level off after a few hundred, particularly since the number of processors in these machines is probably matched reasonably well with the parallelism inherent in the programs being run.

    We can also see the design philosophy the different companies use. These machines aren't using super-high-clock gigahertz processors; they're fast, but not clock-war-casualty fast. I wonder if this will change as manufacturers ramp their higher-end processors to greater speeds to flex their muscle over the hyperactive x86. It may not; dissipating heat from a thousand processors must be a big enough challenge already. We can also see that the Intel processors are performing much worse than the IBM, SGI, and Cray parts. The Hitachi numbers are pretty amazing; they make it into the top ten with less than 1/82 the number of processors of the big IBM rig in first place. Year seems to make a big difference here as well; newer machines have faster processors. Although this means more potential bottlenecks as other parts improve at a slower rate, it also means we can get the same performance with less parallelism, which reduces the bottlenecks. At the high end we're sticking more processors in parallel, but the midrange has a lot of machines with far fewer processors than comparable machines from just one year ago.

    For the top machines, it's all custom hardware, but we can see that even in the top 40 we're getting to a few standard supercomputer models, and by the time we get to 200 or so, we're seeing many groups of 10 or more identical machines. Not high volume, to be sure, but you can bet this is a lot more economical than the unique machines in the top positions.

    From the years, we see just how many of these top level supercomputers are made each year. There are still two 1998 and two 1999 models on the list, including three of the top four, but six of the top ten are from 2000. The top machine from 1997 is back in 14th place, and from 1996 in 51st. I wonder if other machines were in there but have been dismantled. Both of those two are Crays, whereas now most of the very top spots belong to IBM.
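    A quick sanity check on the table above: multiplying Rmax-per-processor back out recovers each machine's total Rmax, and dividing the per-processor figures gives the raw speed ratios being discussed (numbers copied from the table; a back-of-the-envelope sketch, not official top500 data):

```python
# Sanity-checking the per-processor figures in the table above.
# Rmax/proc is in Gflops; total Rmax = per-processor Rmax * processor count.

def total_rmax_gflops(rmax_per_proc, nprocs):
    return rmax_per_proc * nprocs

# ASCI White (rank 1): ~4932 Gflops total, i.e. the ~4.9 Tflops top500 figure.
asci_white = total_rmax_gflops(0.602, 8192)

# Raw per-processor ratio, Hitachi (rank 7) vs. ASCI White's CPUs:
hitachi_ratio = 9.241 / 0.602   # roughly 15x faster per processor
```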

  • Livermore just got the OK to put out an RFP for a 70 Teraflop machine for delivery sometime around 2004. LANL is getting a 30 Teraflop machine in about 2 years which will be built by Compaq.

    RFP?
    Request For Protests?
  • LLNL is Los Alamos' sister lab - they do very similar work, but aren't quite as well known.

    ASCI White will be "behind the fence", and thus used mostly for classified work. "Stockpile Stewardship" is the official language. Making sure weapons are designed to be "one point safe" is an example (i.e. - it won't go nuclear if someone unloads a machine gun or a shaped charge into the pit).

    Livermore just got the OK to put out an RFP for a 70 Teraflop machine for delivery sometime around 2004. LANL is getting a 30 Teraflop machine in about 2 years which will be built by Compaq. (ASCI White is 12 Teraflops).

  • YEAH! Go Big BLUE!

    Err...Umm...WHITE!

    Yeah! Go BIG WHITE!

  • It's not just shooting up mice and Mad Cow Disease- all those biotech people are clamoring for more computing power. But now that they've sequenced the whole human genome, does our pal J. Craig at Celera [celera.com] really know what to do with more computing power? I'd give dollars to donuts that he'd waste it on a UT server, while people like Stephen Mayo [caltech.edu] and his research group at CalTech are drooling over power like this.

    Hot biotech now isn't about sequencing the genome, it's trying to decide what to do with the sequence now that there is a blueprint to work from. Thus companies like Incyte Genomics [incyte.com] and Sangamo Biosciences [sangamo.com] are making money selling tools to build on or manipulate the structure we already have.

    A machine running code that will reliably predict the actual folded tertiary structure of the unique protein that derives from any known sequence of DNA is the holy grail of biotech today. Maybe this IBM box (or should I call it a house?) is a step in that direction.
  • Wouldn't fractional distillation work? Run all the bits and pieces through a grinder, heat to just under 2856°C, then to just over, and condense the gas?
  • Does every fucking moron have to bring up beowulf every 10 seconds? I bet 99% of them have never even tried beowulf

    I read Beowulf years ago, and a damn good story it was too!. To quote:-

    "Beowulf , written in Old English sometime before the tenth century A.D., describes the adventures of a great Scandinavian warrior of the sixth century.

    A rich fabric of fact and fancy, Beowulf is the oldest surviving epic in British literature.

    Beowulf exists in only one manuscript. This copy survived both the wholesale destruction of religious artifacts during the dissolution of the monasteries by Henry VIII and a disastrous fire which destroyed the library of Sir Robert Bruce Cotton (1571-1631)."

    Ooh you mean THE Beowulf that "is a multi computer architecture which can be used for parallel computations.

    It is a system which usually consists of one server node, and one or more client nodes connected together via Ethernet or some other fast network.

    It is a system built using commodity hardware components, like any office desktop PC with standard Ethernet adapters, and switches. It does not contain any custom hardware components and is trivially reproducible. "

    1. Pah! My desktop 1GHz box is fast. {be rude about RISC systems on the grounds that MHz is all}
    2. But does it run Quack?{Be rude because you have no understanding of systems that don't even *care* about video cards}
    3. Imagine a Beowulf of these... {Umm - This is a *real* cluster. And I am Grendel's mother. Be afraid.}
    4. Who needs more computing power anyway? {Physicists, Engineers, ... REAL scientists. But don't ask the Biologists. They need millions of computers like this *each* before computers are of any interest to them. But that wouldn't bother you, would it? [note - protein folding is the hardest computational task ever found]}
  • >All in favor say "aye"

    ...or "ACK"
  • Well.... Since Linux happens to be IBM's current hard-on, and they want it on EVERYTHING, who's gonna add support for 8,000+ processors in the Linux kernel? Might be in 2.4, if it ever gets released! (100 2.3 versions and so far 10 2.4pre's, and it ain't ready????)

    That said, who'd want to run Internet Explorer, uh, I mean Windows on it, even if it did support that many processors? MS has a reputation for slowing even the fastest machines to a crawl with Internet Exploder [pla-netx.com].

    /*
     * Microsoft Confidential
     * Copyright (C) 1975-2004 Microsoft Corp.
     *
     * asci_wh.c - adds support for IBM's ASCI White computer.
     * Also causes BSOD's when ASCI White runs out of
     * power when trying to run IE.
     */

    (I just had to do that. Thanks to MS-DOS 6 source, I got most of the commenting right....)
  • while i've been waiting for ASCI white to take #1 for a while, I still cannot wait for BLUE GENE [zdnet.com]

    1,000,000,000,000,000 ops per second... that's alota freakin' ops...

  • by daeley ( 126313 ) on Friday November 03, 2000 @02:38PM (#650355) Homepage
    After booting up for the first time, the ASCI White immediately declared that it had to think about the Ultimate Question to Life, the Universe, and Everything and immediately shut itself off.

    In related news, a few dozen large yellowish alien spaceships began hovering over the world's major cities today, floating in the air precisely like bricks don't.
  • make a mighty nice Half Life server...I reckon'
  • It might be more accurate to use the term PE (processing element) instead of CPU, but it's certainly acceptable. Think of it this way: each CPU is central to its own node. (Unless, of course, there are multiple processors per node...) The CNet article doesn't go into much depth about the supercomputer's actual architecture.
    ***
  • by bugspawn ( 40599 ) on Friday November 03, 2000 @02:40PM (#650358)
    Some more info here [llnl.gov] for those interested in more detail.

    This machine is going in at Livermore - but Los Alamos has already contracted for a larger machine (currently called the "Q" machine) which will be designed by Compaq - installed in 2002 (I think).

  • No kidding! You think they could spare a few processors to help us out...
  • This isn't a single system image. There are many separate copies of AIX, each controlling a small number of CPUs. Programs communicate between these systems using a technique called "Message Passing".
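    The model described there can be sketched in miniature: each "node" is a separate process with its own private memory, and data moves only through explicit sends and receives over a channel. Python's stdlib multiprocessing stands in for MPI here; the names and structure are illustrative, not ASCI White's actual software stack:

```python
# Toy message-passing: private-memory processes plus explicit send/recv.
from multiprocessing import Pipe, Process

def worker(conn, chunk):
    # Each node computes on its private slice, then sends its partial
    # result back to the coordinating node.
    conn.send(sum(x * x for x in chunk))
    conn.close()

def parallel_sum_of_squares(data, nodes=4):
    chunks = [data[i::nodes] for i in range(nodes)]
    links, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        proc = Process(target=worker, args=(child, chunk))
        proc.start()
        links.append(parent)
        procs.append(proc)
    total = sum(link.recv() for link in links)  # explicit receives
    for proc in procs:
        proc.join()
    return total

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(10))))  # sum of squares 0..9 = 285
```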
  • by account_deleted ( 4530225 ) on Friday November 03, 2000 @02:43PM (#650361)
    Comment removed based on user account deletion
  • Damn, that .sig was good.
  • My wife manages to find all my character faults without much apparent effort.... Perhaps the computer is female....

    Actually, the 'female' brain is vastly more interconnected than the 'male' brain. A female brain is an amazing feat of parallel processing. This is why,

    • A woman can take just one step into a shop, and instantly know there's nothing in it that she wants
    • A woman gan go shopping for "nothing in particular" for hours, just taking in the delight of the variety of clothes, people, places...
    • A man can never win an argument with a woman, because while the man is trying to argue the point, the woman makes the argument about everything...

    But I'm not being sexist... because some men have more female patterned brains, and some women more male-patterned brains... so we're talking generally about average men and women.

    So as a general rule to relationships, always remember that from the male point of view, a woman is crazy, and from the female point of view, a male is stupid.... so expect the female to act crazy -- this is just her powerful brain processing greater complexity than the poor male brain can understand. But the male contributes also, by way of his stupid brain, the ability to stay focussed on one thing.

    See Brain Sex [amazon.com].

  • From another cnet story [cnet.com] on the asci white...

    The Nighthawk 2 will be the first commercially released computer to use IBM new Power3-III processor

    power three three processor?
    sounds like the win2k startup screen

    built on n(ew)t(echnology) Technology

    hmm..
  • If you're like me you shouted "GIF!" when reading this article, so to slake your thirst for supercomputing pr0n:

    http://www.llnl.gov/asci-scrapbook/ [llnl.gov] ;

    njoy.
  • Kasparov has already been beaten [slashdot.org]
  • ..it's military.

    What's the largest computer out there which isn't designed to support more efficient killing?

  • ... to create the world's largest, heaviest computer.

    I thought they were already doing just fine with the ThinkPad....


    GM

  • ChangeLog: 3 Nov 2000 @ 20:37 EST: Had to fix the formatting. Well....
    Since Linux happens to be IBM's current hard-on, and they want it on EVERYTHING, who's gonna add support for 8,000+ processors in the Linux kernel. Might be in 2.4, if it ever gets released! (100 2.3 versions and so far 10 2.4pre's, and it ain't ready????)

    That said, who'd want to run Internet Explorer, uh, I mean Windows on it, even if it did support that many processors? MS has a reputation for slowing even the fastest machines to a crawl with Internet Exploder [pla-netx.com].

    ....
    /*
     * Microsoft Confidential
     * Copyright (C) 1975-2004 Microsoft Corp.
     *
     * asci_wh.c - adds support for IBM's ASCI White computer.
     * Also causes BSOD's when ASCI White runs out of
     * power when trying to run IE.
     */
    (I just had to do that. Thanks to MS-DOS 6 source, I got most of the commenting right....)

  • There are no character faults. Only not enough girls with the same character. :)
  • Ha! I'd like to see someone sneak one of THESE puppies out of los alamos. Or have it wind up 2 weeks later behind a copying machine.
  • Amen! I couldn't have said it better myself.

    I'm sorry, but making those comparisons is about as useful as saying a Greyhound Bus outperforms a Ferrari because it can get 50 people around a racetrack faster. They're different types of machines for drastically different types of problems.

    People will argue (myself included) that LINPACK is a useless benchmark for the Top500 - but distributed.net would probably run LINPACK ~100 times slower than a single CPU.

  • Uh huh. Moreover, these distributed systems have _awful_ latency.
  • imagine the dustbunnies this thing would produce.... they should have guards on duty.... the "bunny" from Search for the Holy Grail comes to mind

    "it has big sharp pointy teeth!" -- tim the enchanter

    cheers,
    ecc
  • Comment removed based on user account deletion
  • OK, so 8,000+ processors is a lot, but still... two basketball courts' worth of floor space? That sounds like an awful lot, and processors are rather small. So what takes up all this floor space? Eight thousand fans?
  • by sulli ( 195030 ) on Friday November 03, 2000 @02:48PM (#650377) Journal
    will it defeat Kramnik? [slashdot.org]
  • Of course it's ASCI and not ASCII. There is such a thing as a joke, however. :)



    54% Slashdot Pure
  • I'm already looking forward to buying the ultrasmall watch version in say 20 years from now.
  • It's 512 nodes, with 16 processors each.
    Windows 2000 Datacenter Server supports 32 processors per node.
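    As an aside, that node arithmetic lines up with the 8,192-CPU figure in the story:

```python
# 512 nodes x 16 processors per node matches the story's CPU count.
nodes, cpus_per_node = 512, 16
total_cpus = nodes * cpus_per_node   # 8192
```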
  • Here's the comparison according to SETI@Home's FAQ:

    How does the computing power of Seti@home compare with existing supercomputers?
    The most powerful computer, IBM's ASCI White, is rated at 12 TeraFLOPS and costs $110 million. SETI@home currently gets about 15 TeraFLOPs and has cost $500K so far.
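    Taking the quoted FAQ figures at face value, the cost-per-teraflop gap is easy to compute (year-2000 numbers; a back-of-the-envelope sketch that ignores how different the two systems' workloads are):

```python
# Cost per Tflop from the figures quoted above.
asci_white_cost, asci_white_tflops = 110e6, 12.0   # $110M, 12 Tflops
seti_cost, seti_tflops = 500e3, 15.0               # $500K, ~15 Tflops

white_per_tflop = asci_white_cost / asci_white_tflops   # ~$9.2M per Tflop
seti_per_tflop = seti_cost / seti_tflops                # ~$33K per Tflop
ratio = white_per_tflop / seti_per_tflop                # SETI@home ~275x cheaper
```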

  • You have a few problems with your argument here. Firstly, you are forgetting that MC (Manchester Computing) already has some machines on this list, like the T3E1200, so you might find that the combined raw processing power is a little more than you expected.

    In addition, you have the problem that these machines, scattered around several campuses (I've lost count of how many campuses MMU has alone), have real lives outside of number-crunching - students are using them for word processing, programming, etc.

    In addition, to say that throwing money at a problem is how you solve it suggests that you are the most unemployable project manager of any description I could imagine, and I'm kind of worried how, with an attitude like that, you managed to get by on slashdot. Intelligence is what needs to be thrown at it, and as hardware costs money, money is kind of neat too. However, you can't just throw a load of x86 chips into a box and say "There yah go! 12,000 processors and I'm sure we could get at least 15,000 GFLOPS out of that baby!" which is what you are suggesting. Think about it.

    And finally, seeing as UMIST and MC aren't talking to each other a great deal anymore, I would suggest that it is going to be practically impossible to actually get all the Universities in Manchester to do this without a lot of politics getting involved. :-)
  • by Jah-Wren Ryel ( 80510 ) on Friday November 03, 2000 @04:40PM (#650383)
    I work on large multi-processor machines. Or more precisely, I work with clients developing software for such machines. I have found that regardless of the precise definition, using the term CPU reduces confusion quite a bit. Why? Because the alternative of saying "processor" is very close to the word "process."

    When you are in deep discussion talking about which processes will be scheduled on which processors, it is easy for people to get really lost really quick.

    So, for ease of verbal (and probably written) communication I find that the term CPU is a lot more clear than processor.
  • no doubt blowing a volume of air equivalent to the interior space of the Birmingham NEC[*] past it each second.

    [*] American readers should think Yankee Stadium here.

  • by SIGFPE ( 97527 ) on Friday November 03, 2000 @04:43PM (#650385) Homepage
    And if you put all of the wires in this beast end to end you would have a wire long enough to connect the moon to the earth three times over. And if you tried to store the amount of data it can process in one day on floppy disks you would need a pile of floppies 2.85 miles high. But that's not all. If you took all of the air that goes through the fans to cool it and pumped it to a submarine you could keep 18 active sailors. And of course 85% of statistics are made up.
    --
  • isn't this _already_ a cluster? bog
  • 16 in fact. One of the problems with programming for such a beast is that you have to use both types of parallel programming techniques at once: distributed memory, like MPI, to communicate between nodes, as well as shared memory, like Pthreads, within the node.

    Del
  • "Of course that half-pound machine is going to have a hell of a surface temperature! "

    Yup.... It will take a cooling unit the size of two basketball courts to cool it...
  • Made up, sure. . , but thanks. You made me think and as a result, I have now transformed into "Egomaniacal Fact Correcter Lad."

    The moon is WAY far away. Like a quarter million miles or something. I doubt there's enough wire in ASCI White to get even a sliver of the way to the moon.

    I bet that monster could process way more than 2.85 miles of floppies worth of info in a day. That's not so much, when you think about it.

    As for the sailors. . . Who knows? I say a live test is in order. Send in the marines!

    -Egomaniacal Fact Correcter Lad

  • thanks for clearing that up. i was thinking, why are they using asc1 when everyone else is on asc2?
  • Politics. We have a test ban *now*, and lots of weapons waaay beyond their spec'd design lifetimes. Do you want to not know whether those weapons work till 2012? Imagine that you have a 1959 DeSoto (for those unfamiliar, one of those ridiculous heavily chromed American cars with huge fins and just generally totally out of date). It has been sitting in your garage along with 1000 other 1959 DeSotos. Your job is to make sure that when the time comes, you can start any one of them up, floor the gas pedal, and have it crash into the wall in front of it. Every once in a while, you start one up and it runs fine.

    However, you have two problems. First, the DeSoto is getting old and wasn't meant to last this long. The companies that used to manufacture the parts aren't around anymore. The people who designed the thing in the first place are retiring. Second, someone comes along and says that you have to be absolutely sure the DeSoto will start up, but you're not allowed to actually start them up to check. What do you do?

    If you've got a few billion dollars, you decide that the only realistic alternative is to push the state of the art in computer simulation ahead a few years. Thus you get ASCI and the move to "beat" Moore's Law.

    Del
  • I would have thought that Eli Wallach would be MS-DOS (stupid but gets the job done) whereas Lee Van Cleef would have to be Windows NT (Devious, backstabbing son of a whore).

  • ....and it was actually post #42! kick ass!
  • by Claudius ( 32768 ) on Friday November 03, 2000 @02:51PM (#650394)
    1% - Insightful commentary, such as a discussion of whether big, centralized systems are still relevant today, or whether the rankings in the top 500 list are based on the most appropriate criteria.

    You must have rounded up. :)

    For ASCI I think it is relevant to have big centralized machines such as these. They have been/are being built primarily for modelling nuclear weapons to address performance issues that would otherwise be impossible to resolve short of making craters at the NTS in Nevada. For security purposes alone it's better to have one big machine located behind a fence with armed guards than a bunch of machines scattered about a facility.

    Of course the performance of simulation codes on machines as massively parallel as these is generally pretty poor. As a rule of thumb, most parallel radiation-hydrodynamics codes are at best using only 5% or so of the clock cycles, spending the bulk of their time waiting on message-passing bottlenecks. While progress has been made in optimizing linear solvers on massively parallel machines, it is still a far cry from banishing the question of whether we are getting our money's worth out of the multi-billion dollar ASCI project.
  • # ping -f localhost



    ------------------------
  • With all that processing power the MPAA and RIAA might be able to calculate a clue. But they've already bought legislation to outlaw that, didn't they?

    Crap.

    Your .sig. My $.02. I win.
  • hehe, I'd immagine it would be an ideal birthplace for DustPuppy from www.userfriendly.org
  • by Lish ( 95509 )
    Processors perhaps would be a better term. Just eliminating the "central" doesn't really work well tho...you just try to market something full of "PU"'s!

  • However, taking into account Amdahl coefficients (how efficiently a multi-processor or multi-computer parallelises for a particular problem), and the fact that inter-computer connections would be both slow _and_ very high latency....

    The applications that are deployed on these types of systems are "embarrassingly parallel", i.e. specifically designed to be largely independent of inter-process synchronization, which makes them largely immune to Amdahl's law. This is what allows these types of supercomputers to be implemented by lashing hundreds or thousands of processors together. In theory, the number of processors is limited only by a) cost, and b) management tools that allow the systems to be operated by rational means.
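    Amdahl's law itself is easy to state concretely; a minimal sketch (the 99%-parallel figure is an illustrative assumption, not a measurement from any of these machines):

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# Even a 99%-parallel code tops out near 100x on 8,192 CPUs...
capped = amdahl_speedup(0.99, 8192)   # ~98.8x

# ...while an embarrassingly parallel job (p ~ 1) scales almost linearly.
linear = amdahl_speedup(1.0, 8192)    # 8192x
```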
  • by Anonymous Coward
    (The rabbit of Caerbannog [mwscomp.com]): And the Lord spake, saying, 'First shalt thou take out the Holy Pin. Then, shalt thou count to three. No more. No less. Three shalt be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, nor either count thou two, excepting that thou then proceed to three. Five is right out. Once the number three, being the third number, be reached, then, lobbest thou thy Holy Hand Grenade of Antioch towards thy foe, who, being naughty in My sight, shall snuff it.'
  • Another 20 years, and I'll wear this computer on my wrist.
  • I never suggested that it could practically happen there (and was deliberately ignoring MC, as I never understood quite who owned and ran it); I was merely trying to think of a town with enough large universities.

    If just throwing more CPUs at the problem doesn't ever help, how come DESCrack, RC5, GIMPS, PiHex etc. work?

    (Yeah, I know the answer: it's because they are 'embarrassingly parallelisable', with an Amdahl coefficient of nearly 1)

    FatPhil
  • It's the Accelerated yadda yadda, not that it really matters. Blue is online already in the form of two machines: Mountain Blue (SGI/Cray) here in Los Alamos and Pacific Blue (IBM) at Lawrence Livermore. Red (Intel) has been around at Sandia as the first of the ASCI machines. Initial delivery for Q (built for Los Alamos by Compaq) is expected to begin in a couple of months. For more information, see http://www.llnl.gov/asci/

    Jim, who actually gets paid to use these bad boys.
  • If you go to the top500 site, you can now get a list just of clusters (which I think is a new feature).

    The ones listed as "self-made" are the most likely Beowulfs. Sandia Labs has a 580 processor system (#84). T.U. Chemnitz has a 528 processor PIII system (#126).

    Not a cluster, but Charles Schwab is at #15 with an IBM Power3 based system. They also have a 1500 processor 604e based system at #34. Think someone thinks you can predict the market?
  • The +1 was an accident, the default should be off in my book.

    If it was karma-whoring, how come it gained no karma?

    I was just trying to get things in perspective - there's a _hell_ of a lot of CPU power out there, the only thing that's special about this is that it's all under one roof.

    FatPhil
  • ATM Machines, PIN Numbers, gigaflops per second.

    FLOPS = FLoating point Operations Per Second.
  • Computer hardware is measured in sports courts. However, evidently the medical industry has switched to some sort of fruit scale, i.e. a tumor the size of a grapefruit...
  • That was of course a joke. I was mocking people who say things like that.
  • expect the first non-'nix os to support this kind of power to be os/2...

  • Actually, a government analyst published a paper a little while ago about the optimal slack time, that is, how long you should wait before buying a computer on a limited budget. The conclusion was that you should buy your system when it's powerful enough to finish your computations in 26 months. Any longer than that and you're better off waiting. Any shorter and you should stop waiting and actually get out and run the calculations on today's hardware.
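    A figure like that 26 months can be recovered from a simple model (a sketch with an assumed 18-month doubling period; the paper's actual model may differ): if a job takes C months on today's hardware and speed doubles every D months, buying at time t finishes at f(t) = t + C / 2^(t/D). Minimizing f gives "buy when the remaining job would take D / ln 2 months" - about 26 for D = 18.

```python
import math

DOUBLING = 18.0  # months per speed doubling; an assumption, not from the article

def finish_time(c_months, t_buy):
    """Completion time if you wait t_buy months, then run the job
    on hardware that has sped up by 2**(t_buy / DOUBLING)."""
    return t_buy + c_months / 2 ** (t_buy / DOUBLING)

# The optimal purchase point: remaining runtime equals D / ln 2 months.
threshold = DOUBLING / math.log(2)   # ~25.97 months
```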
  • Embarrassingly parallel implies NO communication is required. A typical application that runs on one of these machines - and yes, not only have I seen them, but I have written them, and am developing one right now - is an explicit PDE integration scheme that requires the communication of pseudo boundary conditions across the decomposed sections of the solution domain. Communication costs are very nontrivial even in such a "best case" scenario.

    As algorithms grow in complexity, for example to add adaptive capability, communication costs rise correspondingly. The scalability of a REAL state-of-the-art application that runs on these machines is not at all linear. Rest assured that you should be impressed with what has been done with the ASCI program, because it's damn impressive.
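    The "pseudo boundary condition" exchange described above can be sketched for a 1D explicit scheme: split the domain, swap one ghost cell per neighbor each step, and the decomposed update matches the single-domain one exactly. This is a sequential toy standing in for what MPI ranks would do; all names here are illustrative:

```python
# 1D explicit diffusion step with domain decomposition and ghost cells.

def step_subdomain(u, left_ghost, right_ghost, r=0.25):
    """One explicit diffusion step on a subdomain, given neighbor ghost values."""
    padded = [left_ghost] + u + [right_ghost]
    return [padded[i] + r * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)]

def step_decomposed(subdomains, r=0.25):
    """Exchange boundary values between neighbors, then step each piece."""
    out = []
    for k, u in enumerate(subdomains):
        left = subdomains[k - 1][-1] if k > 0 else u[0]                  # domain ends
        right = subdomains[k + 1][0] if k < len(subdomains) - 1 else u[-1]
        out.append(step_subdomain(u, left, right, r))
    return out

# The decomposed result matches a single-domain step exactly:
whole = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
pieces = step_decomposed([whole[:3], whole[3:]])
```

Each step requires one value per neighbor per subdomain, which is exactly the communication cost that stops such codes from being embarrassingly parallel.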

  • by istartedi ( 132515 ) on Friday November 03, 2000 @03:18PM (#650419) Journal

    Yes, 8000 screaming fans in the bleachers of the basketball courts. They're waiting for the geeks who maintain it to vote for homecoming king.

  • drat...I was hoping to make a Beowulf cluster joke...something along the lines of it not being necessary...you already have more than enough power...I was also going to couple that into a quake joke...but unfortunately, it is too late....(i was also hoping it would be modded as insightful)

    does this count as a joke? or is it not allowed?

  • an * ti * cli * max (an'-tI-clI-max): A series of statements in some ascending order, ending with a statement clearly lower than each of the previous statements. e. g.: "The ASCI White Computer: 12.3 trillion calculations/second (teraflops), 8,192 copper microprocessors, 6.2 terabytes memory, 512 RS/6000 375 MHz POWER3 SMP High Nodes, IBM AIX operating system."

    fearbush.com [fearbush.com]
  • by ZanshinWedge ( 193324 ) on Friday November 03, 2000 @03:25PM (#650440)
    According to the top500 info, ASCI White has a maximum performance of 4,938 gigaflops per second, or about 5 teraflops.

    In comparison, the distributed.net project utilizes about 13 teraflops of computing power and SETI@Home utilizes about 25 teraflops of computing power.

    That should provide a bit of comparison between these mega-computers and distributed computing projects.

  • An interesting question would be what is the fastest single processor (including ASCI's speed divided by 8192) currently in existence. Is it still a Cray? Are the individual processors that make up the ASCI particularly impressive in themselves?

    Fastest single processor depends heavily on what the problem is. Doing vector ops, a Cray's going to be a lot more impressive than if it's doing scalar floating point. For some things, AMD's 1.2 GHz Athlon is almost certainly the fastest available (i.e., actually for sale as opposed to just in a chipmaker's labs). As for ASCI White, according to IBM, each processor is a 375 MHz POWER3. [ibm.com]

  • The nodes themselves are just regular tower-sized RS/6000s (OK, yes, turned sideways). If they are wide nodes in a standard SP frame, you can fit 8 stacked in a frame. Each frame is, say, 6 or so feet high and 4.5 feet square (approx.)

    Vermifax
  • The system is for "apply[ing] science and technology in the national interest, with a focus on global security, global ecology, and bioscience." [llnl.gov]

    So, what exactly will the ASCI White, SP Power3 375 MHz be doing? BTW, I noticed that LLNL also has the 3rd, 32nd and 36th fastest systems. I assume that Los Alamos would be conducting simulated nuclear explosions and what not.

  • If you can't buy it at Wal-Mart what good is it?

  • The machine has 8,192 CPUs
    Isn't the term CPU (Central Processing Unit) a complete misnomer when applied to a machine like this?

    8,192 processors, sure, but central processing units?
  • by Daemosthenes ( 199490 ) on Friday November 03, 2000 @02:25PM (#650452)
    All this for ASCI!?!

    Aw crap...that means I'll have to go pull out my conversion charts. What the hell was the number for a smiley face again?!



    54% Slashdot Pure
  • by DzugZug ( 52149 ) on Friday November 03, 2000 @02:25PM (#650460) Journal
    I move that the following comments be banned from this board:
    1. imagine a beowulf cluster of these.
    2. what a great quake platform.
    3. anything relating to distributed.net

    All in favor say "aye"

  • Isn't that reporter's analogy way off? IIRC, Doom uses ray-casting instead of "true" 3d rendering, which is why walls have to be upright, etc. (I may be (probably am) completely off here; I've never played it, but still...)
  • Six million, seven hundred fifty-eight thousand, four hundred BogoMips. (clock of 375, PPC fudge factor of 2.2, 8192 processors.)

    What I'd like to know is how much green-bar the sucker eats up spitting out boot-time kernel messages. 512 CPU host adapters to initialize and 8,192 CPUs to calibrate delay loops for could get pretty space-consuming.
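    As a quick back-of-envelope check of that figure, here's a Python sketch. (The 2.2 PowerPC fudge factor is the poster's rough assumption, not an IBM spec; BogoMips is just the kernel's delay-loop calibration number, not a real benchmark.)

```python
# Sanity-check the BogoMips figure quoted above:
# clock in MHz x per-architecture fudge factor x CPU count.
clock_mhz = 375    # POWER3 clock speed
ppc_fudge = 2.2    # assumed rough BogoMips-per-MHz factor for PPC-class CPUs
cpus = 8192

total_bogomips = clock_mhz * ppc_fudge * cpus
print(round(total_bogomips))  # 6758400
```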
  • by HiyaPower ( 131263 ) on Friday November 03, 2000 @03:09PM (#650476)
    A reasonable alternative to the IBM machines is Sun's architecture. Unfortunately their most recent effort in Australia has gone somewhat sour [theregister.co.uk]

    Such machines are all very well and good, but there will be serious competition from the Seti sort of model for those things that can be decomposed correctly.

  • ...that 10 years from now some kid with the latest Nintendo game will be able to say "a computer like this used to take up as much space as two basketball courts".

  • by rangek ( 16645 ) on Friday November 03, 2000 @03:44PM (#650482)

    That should provide a bit of comparison between these mega-computers and distributed computing projects.

    That is nice and all, but can you use distributed computing to run a molecular dynamics simulation, an electronic structure calculation, forecast the weather/stock market, etc.? Distributed computing only works for embarrassingly parallel problems. They call it embarrassing because you should be embarrassed to brag about the FLOPS you can pull for that problem.

    PS, I am not saying distributed computing is bad (I have personally contributed just over 40 P90 CPU years to GIMPS), but comparing ASCI White and dnet is just wrong. They are two totally different things.

  • "weighs 106 tons and takes up two basketball courts' worth of floor space."

    With the way things are going, in the future, Apple will package it in a convenient eight-inch cube, complete with cosmetic cracks and a toaster-style dvd drive that will finally run hot enough to toast things.
  • 10% - "Imagine a Beowulf cluster of these!"
    25% - Making jokes predicting "Beowulf cluster" posts. (Yeah, like this one)
    35% - Random, (-1, Offtopic) crap.
    15% - IBM Sucks, [Company] is better!
    14% - "Can I buy one on eBay?"
    1% - Insightful commentary, such as a discussion of whether big, centralized systems are still relevant today, or whether the rankings in the top 500 list are based on the most appropriate criteria.

    --
  • That is right a CPU is what you plug the network cable into and then when you call the help desk they tell you to turn the computer off and you do and then you turn it back on and everything is the same so they explain that they wanted you to turn the CPU off so you turn off the CPU and of course to install software you put the CD in the CPU. That is how you are supposed to use the term. :)
  • Damn, that is almost enough number crunching ability to compute all of my character faults and tell me why I am sitting home on a Friday posting to slashdot instead of being out on a date.
  • "Where a computer like the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and weigh only 1 1/2 tons." --Popular Mechanics, March 1949

    "Where a computer like the ASCI White is equipped with 8,192 CPUs and weighs 106 tons, computers in the future may have only 1,000 CPUs and weigh only 1 1/2 tons." --Cnet News, March 2001

  • The answer to your "what is the fastest single CPU out there" can probably be found at the Hot Chips [hotchips.org] web site.

    My guess is that the Japanese (NEC or Fujitsu) are the current leaders, as they have continued to build highly vectorized processors - along the lines of what Cray used to do in the past.

    Another thing to keep in mind is that these machines are very rarely run in a mode where a single application is using all of the machine. I work on these machines (currently ASCI blue), and the real payoff is that a dozen or so people can be running moderately parallel jobs all at the same time.

  • I could fit a lot more than 10,000 CPUs in the space of 2 basketball courts. The connecting circuitry must be immense. Or maybe it's the cooling system that takes up so much space. :)
  • But the thing is, I PLAYED Quake on it, and I'll be the first to tell ya', it wasn't that hot. No, seriously, it wasn't. Turns out that although it has 8,000+ processors, it only has one 32meg EDO chip of RAM and a 128k bus -- what were they even thinking?
  • by andyh1978 ( 173377 ) on Friday November 03, 2000 @02:32PM (#650511) Homepage
    What OS does this supercomputer run on? What OS supports that many proc.(s)?
    This link [ibm.com] to Big Blue's ASCI White website gives the answer:

    ASCI White
    • 12.3 trillion calculations/second (teraflops)

    • 8,192 copper microprocessors

    • 6.2 terabytes memory

    • 512 RS/6000 375 MHz POWER3 SMP High Nodes

    • IBM AIX operating system

  • It's all about the coupling.
    20000 ordinary PCs could out-compute it.
    However, taking into account Amdahl coefficients (how efficiently a multi-processor or multi-computer parallelises for a particular problem), and the fact that inter-computer connections would be both slow _and_ very high latency, I reckon:

    This thing has about the computing power of all the Universities in Manchester[*] combined.

    Doh? That ain't that great. It's simply the fact that they've got them under the same roof that's the 'impressive' bit, and I'm not that impressed.

    This is a 'problem' that can be pretty much solved simply by throwing money at it. That ain't rocket science...

    FatPhil
    (In cynic mode, as there are no Axp processors involved)

    [*] Manchester, UMIST, Salford, MMU.
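    The Amdahl point above can be sketched numerically. The 95% parallel fraction below is purely illustrative, not a measurement of ASCI White or any cluster:

```python
def amdahl_speedup(parallel_fraction, processors):
    """Amdahl's law: overall speedup when only part of a job parallelises."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# With 95% of the work parallel, even 8192 CPUs top out near 1/0.05 = 20x,
# which is why loosely coupled PCs can still compete on the right problems.
print(round(amdahl_speedup(0.95, 8192), 1))  # 20.0
```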
