Compaq To Build DEC Beowulf Supercomputer 99

Tower writes: "Compaq Computer (Digital) and the Pittsburgh Supercomputing Center have won a $36 million contract to build a 2,728-processor supercomputer using 1.1 GHz EV68 processors in a 682-node Beowulf setup. Check it out here." This is a different machine than the one covered earlier: that one was supposed to be used to simulate nuclear explosions, while this one will be used by the National Science Foundation to work on biophysics, global climate change, astrophysics and materials science, according to the article.
This discussion has been archived. No new comments can be posted.
  • by Tracy Reed ( 3563 ) <treed@ultra v i o l e t.org> on Thursday August 03, 2000 @11:19PM (#880519) Homepage
    Home computers (Linux systems) CAN share disks like this if you want to invest in fibrechannel (which may not be as expensive as you think). Check out:

    http://www.globalfilesystem.org [globalfilesystem.org]

    Very cool technology. I have been following this for quite a while and it shows tremendous promise for solving all kinds of disk scalability problems.

  • Yeah, I believe the AI department at our university does something similar with their workstations when they have lots of calculations to do.

    They've got a tonne of Ultra 10s and a few Ultra 60 machines, and as I understand it they just start idle-priority threads in the background of everyone's machine.

    However, I'm sure they run down to play with the supercomputers on our campus when they get bored :)

    Mmmm, if you like big computers, look at this [ed.ac.uk], but it looks far better in real life :)
  • Well, the Höchstleistungsrechner in Bayern [lrz-muenchen.de], which is 5th on the top 500 list (and stands about 50m from where I'm typing this :) uses up 600kW. They had to reinforce the floor above it before they could install the cooling systems, before they could install the computer itself.
  • Wish I could use this baby to improve my rc5 stats. Oh well, I'll just have to put up with 1/20 of the processing power of just one of its nodes!!

    :P
  • Hey, could the Guys at Slashdot get some kind of nightshift going?

    There have been three articles posted since I went to bed 11 hours ago: one two hours ago, one four hours before that, and one four hours earlier still.

    Failing that, can we have a European slashdot or something?

    Rich (just about to lose his +1 bonus I think)

  • Security is one good reason for having it on a single computer. I somehow suspect that the people who would like to model nuclear explosions wouldn't like their models and information available to various other parties. Keeping it on a dedicated Beowulf cluster helps contain the information in a way that even doing distributed processing across a classified network cannot do. As other posters also pointed out, these processes require more interprocess communication than is convenient in the distributed processing model, but not so much as vector processing or the interconnects in, say, the Cray T3E would require. (And at a fraction of the cost of a T3, I suspect many people are more forgiving of speed issues.)
  • According to their website... "PSC operates five supercomputing-class machines: a 512 processor Cray T3E, two eight-processor Cray J90s, a four-processor Alphaserver 8400 5/300 system, and an Intel cluster with 10 4-processor compute nodes."

    This page [psc.edu] provides a description of the work researchers plan to do with the new supercomputer.

    The center is a joint venture between Carnegie Mellon [slashdot.org], the University of Pittsburgh [pitt.edu], and the old Westinghouse Electric [westinghouse.com] company.

    It's also interesting to note that the PSC & CMU formed the NCNE Gigapop [ncne.net] that provides the internet to CMU, Pitt, WVU, and Penn State.
  • It would rock to hook up all the supercomputers around the world into a distributed network. So, a distributed network of Beowulf clusters!
  • It might come in handy for DNA analysis.
  • by Shimbo ( 100005 ) on Friday August 04, 2000 @02:24AM (#880528)
    > If you want massively parallel systems, then I would honestly think that something like processtree would be a good solution, since you can rent a phenomenal block of CPU time.

    A lot of these problems, like climate modelling, can be worked on by partitioning the problem into cells; you just need to fix up at the edges on each iteration. Independent systems joined together, particularly with a low-latency interconnect, fit this sort of problem space well.

    Obviously, there are some problems, where the dependencies between the data sets are nil, for which commodity Intel/Athlon/Alpha Linux boxes are ideal. Still more for which they are cost-efficient ;)

    Supercomputing facilities are best equipped with a mixture of these. For some jobs a steamroller is better than a Porsche. When you've got a specific requirement, and lots of money is involved, off-the-shelf components are not always the best bet.

    > they surely lack the memory bandwidth that makes traditional mainframes and supercomputers so powerful.

    Yes, but these aren't Beowulf clusters. Quadrics hardware is not some cheap and cheerful solution like switched Gigabit Ethernet ;)
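The cell-partitioning with edge fix-ups described here is the classic halo-exchange pattern. A toy sketch (hypothetical, with plain Python lists standing in for the per-node arrays and the interconnect):

```python
# Toy halo exchange: a 1D diffusion grid split into "node" chunks.
# Each iteration, nodes only need their neighbours' edge cells.

def step(cells):
    # One diffusion step with fixed boundary values.
    return [cells[0]] + [
        (cells[i - 1] + cells[i + 1]) / 2 for i in range(1, len(cells) - 1)
    ] + [cells[-1]]

def parallel_step(chunks):
    # Exchange one-cell "halos" at the edges, then step each chunk locally.
    new_chunks = []
    for i, chunk in enumerate(chunks):
        left = chunks[i - 1][-1:] if i > 0 else []
        right = chunks[i + 1][:1] if i < len(chunks) - 1 else []
        stepped = step(left + chunk + right)
        # Drop the halo cells before storing the local result.
        new_chunks.append(stepped[len(left):len(stepped) - len(right)])
    return new_chunks

grid = [0.0] * 8 + [100.0] + [0.0] * 7          # a hot spot in the middle
chunks = [grid[0:4], grid[4:8], grid[8:12], grid[12:16]]

for _ in range(10):
    grid = step(grid)
    chunks = parallel_step(chunks)

flat = [c for chunk in chunks for c in chunk]
print(flat == grid)   # True: the partitioned run matches the serial one
```

Only the two edge cells per neighbour cross the (imaginary) network each iteration, which is why a low-latency interconnect matters more than raw bandwidth for small cells.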

  • But the quote says "souped-up version of...". Sounds to me like it's not using Beowulf, but some sort of souped-up version of it. Heh.

    ---------------
  • I doubt it. Compaq's Tru64 v5 supports VMS-style clustering -- VMS having had clustering about 20 years ago. I suspect it's based on the new Tru64 version, perhaps with some extra software.
  • So . . .do you need to buy any more?
  • by Black Parrot ( 19622 ) on Friday August 04, 2000 @05:24AM (#880532)
    > I dunno -- seems to me like the author is saying that it really is a Beowulf cluster.

    I took it to be a "Beowulf clone" or a "Beowulf-style cluster". AFAIK (please correct me!), "Beowulf" refers specifically to a GPL'd Linux kernel hack, and thus any "Beowulf cluster" would be a Linux cluster. But I would assume it would be more or less straightforward to implement on other Unices, at least for parties who have the source code, in which case I would call it a "Beowulf-type cluster", or give it a new name altogether. But perhaps the term has been generalized; I think it has already generalized once, from referring to "the" Beowulf cluster (the original one) to referring to all clusters built with the same kernel patch.

    OTOH, there was a [epithet of your choice for a moron here] on the Beowulf mailing list for a while, who was adamant that his NT cluster was a "Beowulf" system. I never figured out why he even subscribed, since any exchange of information there would be completely irrelevant to his situation. Shows the importance of bragging rights in the IT world, I suppose.

    --
  • I'd love to see a beowulf cluster of these ;-)

    Cyano
  • From the Beowulf FAQ [dnaco.net]:
    It's a kind of high-performance massively parallel computer built primarily out of commodity hardware components,
    running a free-software operating system like Linux or FreeBSD, interconnected by a private high-speed network. It consists of a cluster of PCs or workstations dedicated to running high-performance computing tasks. The nodes in the cluster don't sit on people's desks; they are dedicated to running cluster jobs. It is usually connected to the outside world through only a single node.

    (Emphasis added.) The Compaq machine runs Tru64 UNIX. [tru64.org]

  • > i bet it still won't run quake3 all that nicely

    Depends on the graphics card you order with it.

    --
  • > Hey fella, "Beowulf" has noting with Linux to do at all, Beowulf actly means "a cluster built of cheap commodity parts connected together with switched Fast Ethernet"

    From the Beowulf FAQ [dnaco.net]:
    1. What's a Beowulf? [1999-05-13]


    It's a kind of high-performance massively parallel computer built primarily out of commodity hardware components, running a free-software operating system like Linux or FreeBSD, interconnected by a private high-speed network.
    The more general term is NOW, Network of Workstations, which includes Beowulf, Beowulf-like systems on non-open OSes, and perhaps other types of cluster as well.

    So strictly speaking, this is not a Beowulf. Of course the meaning of the term may be drifting, as with "hacker" and "cracker". (Languages do that.)


    --
  • Just for the moment, assuming that you've got an ideal application and totally ignoring all other factors, what is the cheapest MIPS source?

    Would you get more MIPS/buck out of massive piles of $5 microcontrollers, or out of, say, K6 500 MHz chips with cheap MOBOs?

    Again, just totally ignoring all other factors, no matter how silly you think that is.

    Personally, I'd like to hijack a top-of-the-line fab and put grids of hundreds of little computers, each with a few K of memory, on dies that would normally be used for one microprocessor. I don't know what I'd do with them, but I'm sure I'd find some cool app like massive neural nets.

    Ahhh... to set up a massive pile of millions of parallel processors that could start from "I think therefore I am" and get all the way up to deducing the existence of rice pudding and income tax before I hook up the data banks...

    ---
    Despite rumors to the contrary, I am not a turnip.
  • I agree with the previous poster replying to your message... I've spent some time at work with clusters. I'm a technical intern at a branch of a rather large computer hardware company, and it seems my job this summer is to learn as much about as many types of clusters as I can. I came into it thinking what most others think of a Beowulf cluster: a complex system with big kernel hacks and such... Man, was I wrong. A Beowulf cluster can be as simple as a few computers running Linux, each able to rsh to the others, and each running a compiled version of LAM/MPI (as the first cluster I made was).

    The master node sends a work message to the clients, they work on it and send the result back. The LAM/MPI message-passing layer starts the program on all nodes (i.e. it rshes to each client and runs the program), then the master sends out the work request, the clients do the calculation and send it back to the master. Simple as that (more or less...)

    Other cluster types, such as Mosix, use a kernel "hack" to migrate processes among nodes at the kernel level (not the correct terminology, but I don't know it well enough). Also, failover/high-availability clusters are often used in server farms to take over when a server goes down or to keep up with the load.

    So in all, a Beowulf cluster is just one of many, many types, made for a specific task: number crunching. I could go into a lot more depth... Heck, I've been paid to read up on clusters and try it out myself. I simplified a lot of what I said; email me if you want more depth. But either way, it's still neat to say I built a Beowulf cluster ;)

    -Daniel
  • > At 110 V this thing would eat 1860 Ampere

    And to think the labs at work kept tripping a 75 amp breaker... Sheesh. BTW, the sound of a few hundred computers all shutting off at once is neat... The shrieks of the engineers that follow are even better.

    -Daniel
  • by Idaho ( 12907 ) on Thursday August 03, 2000 @11:32PM (#880540)
    I always wonder how much power goes into these kinds of beasts.

    Let's try to estimate it: 682 systems, each containing 4 processors. I guess each will need a 300 W power supply, so that makes about 205 kW just for the computers (when working at full speed, OK)!

    At 110 V this thing would eat 1860 amperes; not something you'd like to try at home (imagine the electricity bill :-)
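The back-of-the-envelope figures above check out; as a quick sanity check (assuming, as the poster does, roughly 300 W per quad-processor box):

```python
# Back-of-the-envelope power draw for the cluster, assuming ~300 W per node.
nodes = 682
watts_per_node = 300
volts = 110

total_watts = nodes * watts_per_node
amps = total_watts / volts

print(total_watts / 1000)   # 204.6 -> roughly 205 kW at full tilt
print(round(amps))          # 1860 A at 110 V
```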
  • Nah, this was posted at 09:21!
  • What humor-impaired person scored this one? Should've rated funny and insightful.
  • >Real supercomputers solve problems that require massive communications between the nodes.

    Unless they are data & process partitioned/independent processes.

    >So pretty much everything depends on the "switches" they'll use to connect the nodes

    This still does not guarantee good performance. An RS/6000 SP has a fantastic interconnect, but can still run like a dog if there are too many processes dependent upon each other.

    Not that what you're saying is wrong. You obviously know something about the subject (I've been reading your posts) :-) but a good app hopefully doesn't have too much communication between nodes, or serialized data streams.
  • Wow, I wish someone could build a Beowulf clus......oh shit, never mind...
  • Not if you compare it with this machine (which is ranked 5th on the top 500 supercomputers list).

    OK, that one is faster... (>770 MB/s internode using MPI, no mention of latency). But it doesn't qualify as a Beowulf-style machine; it is all specialized Hitachi stuff.

    This Compaq is, in my opinion, 'Beowulf-style': it uses standard 8-way SMP machines with PCI network cards and fast switches for interconnection. For this, the QSW products still look impressive to me.

  • ASCI White with its 8192 (16 * 512 nodes/systems) Power3-III processors uses 1.2 megawatts. And Alphas are pretty power-hungry chips, so compare from there.
  • by Anonymous Coward
    I had occasion to visit the EPCC computer room a couple of years ago; at that time they had 3 Crays, one of which was not being used (powered off) - and the reason it was still there was that they had installed a newer machine and couldn't get the old one past it and out of the door.
  • the wise guy who spawned 2728 SETI clients?

    --- Never hold a dustbuster and a cat at the same time ---

  • I had a quick read through the Global Filesystem HOWTO.
    Their HBA list is not up to date, or they are unaware of the JNI [jni.com] adaptors.
    These guys also have the first 2 Gbit FC HBAs.
  • Not only that, you get to implement a nice power backup system, probably a massive array of batteries backed up by a few 1 kW generators. Then there's the obvious issue of cooling. Granted, my experience with Alphas is (sadly) limited to several-year-old models that are EV5s :\, but they really do churn out a lot of heat, especially your disk arrays... (not specific to Alpha, I know.) Can I admin them? I'll work for free ;)
  • As I recall from some other work I've done, one application that scales well to MPP is heat distribution of a component that generates heat and has a custom heat sink. Once you can model nuclear explosions, it doesn't take much imagination to come up with other molecular modelling problems suited to such a system. I'm sure someone could write a model of a biological system with little enough communication between regions to be feasible in an MPP design. (The uses of being able to model arbitrary and theoretical biological regions should be obvious.) Then there are the classic mathematical problems that could find an MPP supercomputer useful. I'm not sure offhand how I'd parallelize the code, but the twin prime conjecture would be one example (probably by parallelizing over regions of the number line). Research into the factoring of large numbers is also feasible (of course we don't know any practical application for that...)
  • by chamber ( 218962 ) on Friday August 04, 2000 @01:30AM (#880552) Homepage
    As a matter of fact, we've just set up a new Beowulf cluster that people can play with. It's got 9 DS10L's (1U rack-mountable Alpha systems), each with a 466 MHz EV6 Alpha, 256 MB of RAM, and two 10,000 RPM UltraSCSI drives. If you're interested, stop by http://www.testdrive.compaq.com/ [compaq.com], where you can get all the dirt and get a free shell account on it and our other Test Drive machines.

    Yes, I work for Compaq. No, I don't speak for them.

  • This page [psc.edu] on the PSC website gives detailed descriptions of the planned uses of this supercomputer...

    Including Storm Prediction [psc.edu], Protein Folding, Turbulence Studies, Earthquake Preparedness, AIDS Research, Cardiac Fluid Modeling, Oceanic Phenomena, Electromagnetics and Fluid Dynamics.

    They've also got some pretty neat animations of some of the above.

  • by dehuit ( 57744 ) on Friday August 04, 2000 @01:34AM (#880554)
    > I have to wonder what the point is in massive Beowulf clusters like these. Sure, they are fast and give you more MIPS than Flanders next door, but they surely lack the memory bandwidth that makes traditional mainframes and supercomputers so powerful.

    > If you want massively parallel systems, then I would honestly think that something like processtree would be a good solution, since you can rent a phenomenal block of CPU time.

    Well, obviously these machines are something in between the extremes you mention, and there are applications for which this is a sort of sweet spot.

    I have used an application for which this type of machine is excellent: molecular dynamics simulations.

    The usual strategy for this type of software is to partition your system by giving every processor a share of the atoms. Then you start calculating forces and motions etc. for each part for a short time period, and then compare them. Many forces extend into neighbouring parts, and atoms can move to other parts, so quite a lot of communication between the nodes is necessary. After exchanging this info, each node can compute the next timestep. This works quite well if most interactions between atoms are relatively short-ranged.

    This type of app is excellently suited to a large cluster. It is naturally suited to message-passing, so programming it using MPI is easy. If you partition the system well, the memory use of one node is quite small, and fits for a large part in cache. IO between nodes has to happen quite often, so latency is a problem; so processtree is obviously not an option.

    These simulations scale quite well to larger molecular systems. Unfortunately, many researchers don't want more atoms in their systems; they want the simulation of their small system done faster. Unfortunately, that scaling is bad: if you end up with only a few atoms per node, the communication overhead bogs it down.

    FYI, here [chem.rug.nl] are some old benchmarks of the software I used (gromacs). Although this software is considered to scale excellently, a 64-node machine is only 32 times as fast as a single-node machine...

    Sorry if all this is incomprehensible; I guess I want to say too much too fast...
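The scaling behaviour described above (64 nodes giving only ~32x) is what you get from a simple compute-plus-communication model; a toy sketch with made-up constants, just to show the shape of the curve:

```python
# Toy model of MD cluster scaling: per-step time is the compute work split
# over the nodes, plus a fixed per-step communication cost. The constants
# are invented; only the shape of the curve matters.

def speedup(nodes, compute=1.0, comm=0.0156):
    step_time = compute / nodes + (comm if nodes > 1 else 0.0)
    return compute / step_time

for n in (1, 4, 16, 64):
    print(n, round(speedup(n), 1))   # 64 nodes -> only ~32x, as in the gromacs numbers
```

As the per-node share of atoms shrinks, the fixed communication term dominates, which is exactly why "same system, more nodes" scales so much worse than "bigger system, more nodes".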

  • Wow.. imagine a bunch of individual nodes of this one...
    --
  • I think most of the supercomputing centers in the US are connected to the vBNS [vbns.net], so that would be a start.
  • That one was supposed to be used to calculate nuclear explosions, this one will be used by the National Science Foundation to work on biophysics, global climate change, astrophysics and materials science...

    Yes, exactly:

    • biophysics: what are the environmental impacts of a nuclear blast?
    • global climate change: how will the global temperature be affected by the tons of dust thrown into the atmosphere by a nuclear blast?
    • astrophysics: how can we more effectively deliver nuclear payloads?
    • materials science: what's the best material to use in the construction of nuclear weapons?
  • We bought a quad-processor system from Compaq and two of the processors were set to run at a different speed from the other two. That caused weeks of fun tracking down what was making the network keep disappearing. Another machine had the processor card inserted incorrectly, causing random crashes. God knows what they'll get up to with 2,728 to screw up.

    Rich

  • Hehe - if your karma is over the karma limit, your karma will stick there. Gotta love slashdot. Any cool ideas for things to post with the bonus?
  • Seti units coming up sir!
  • Reflective memory? This sounds like it would inhibit performance, unless you are totally unconcerned about cache coherency (the fact that what is in cache accurately represents the current state of main memory; or, that if x=5 in memory then x=5 in cache, and if it isn't, halt whatever it is you are doing, flush that cache line, and get the correct value...).

    People have pointed out that network latency and bandwidth are often the limiters in this kind of setup. I would like to point out that the next bottleneck to scaled speed-up would be memory bandwidth and cache reuse. Fetching from main memory (RAM) costs on the order of 300 clock cycles, while using stuff already in cache takes only 1.

  • by Anonymous Coward
    i bet it still won't run quake3 all that nicely
  • Real supercomputers solve problems that require massive communication between the nodes. So pretty much everything depends on the "switches" they'll use to connect the nodes, and there's no real information about those in the article at all. At least, they seem to be custom-built by a company that specializes in such things.
  • but they surely lack the memory bandwidth that makes traditional mainframes and supercomputers so powerful.

    But I bet it has a great linpack score. Benchmarks don't lie.

    -jlg

    ps. use Debian! www.debian.org

  • by Wookie Athos ( 75570 ) on Thursday August 03, 2000 @10:29PM (#880565)
    If you read the C|Net page carefully you will see it says the machines are to be 4-CPU Compaq boxes running Tru64 Unix.

    The writer did mention Beowulf, but only to say that it was similar.

    __
    Conclusions are easy to jump to. Just be prepared to jump again...
  • Monte Carlo routines use pseudo-random numbers. The more of these you can generate per unit time, the better your results get. Hell, I've used windoze boxes to do this. As long as I give each machine the right seed, they don't need to know what the others are doing. The results can be put together later.

    These are commonly used in particle transport routines, where direction, interaction, and birthplace information can be simulated by generating a random number and comparing it to a known statistical distribution. It's a powerful and surprisingly easy thing to program. It can be slow, but brute-force computing is making all sorts of problems practical. MCNP is a very mature code that uses this.

    They can also be used to solve multidimensional integrals. Here they rule, and the time savings over other methods are very good.
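The seed-per-machine scheme described above is easy to sketch; a hypothetical toy run, estimating a 3D integral (the volume of a unit-sphere octant) by merging independently seeded tallies:

```python
# Embarrassingly parallel Monte Carlo: each "machine" gets its own seed,
# samples independently, and the tallies are merged afterwards.
import random

def partial_estimate(seed, samples):
    # Count random points of the unit cube that land inside the unit
    # sphere octant (x, y, z >= 0 and x^2 + y^2 + z^2 <= 1).
    rng = random.Random(seed)    # per-machine seed: no coordination needed
    hits = 0
    for _ in range(samples):
        x, y, z = rng.random(), rng.random(), rng.random()
        if x * x + y * y + z * z <= 1.0:
            hits += 1
    return hits

machines = range(8)              # 8 independent boxes, 8 distinct seeds
samples_each = 50_000
hits = sum(partial_estimate(seed, samples_each) for seed in machines)
volume = hits / (len(machines) * samples_each)

print(round(volume, 2))          # ~0.52 (true value: pi/6 ~ 0.5236)
```

Each `partial_estimate` call could run on a different box; only the single integer tally crosses the network, which is why this workload needs no fancy interconnect at all.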

  • by Cebert ( 69916 )
    Oh, I get it...redundant. Very funny. >:D
  • by dehuit ( 57744 ) on Thursday August 03, 2000 @10:46PM (#880568)
    At least, they seem to be custom-built by a company that specializes in such things.

    Which has an excellent product page here [quadrics.com]. 2.35 usec latency for a short message. 340 MB/s peak, 210 MB/s sustained throughput. Fault tolerant redundant links. Tru64, Solaris and Linux support. I know nothing about this, but it sounds impressive to me.

  • by grahamsz ( 150076 ) on Thursday August 03, 2000 @10:53PM (#880569) Homepage Journal
    I have to wonder what the point is in massive Beowulf clusters like these. Sure, they are fast and give you more MIPS than Flanders next door, but they surely lack the memory bandwidth that makes traditional mainframes and supercomputers so powerful.

    If you want massively parallel systems, then I would honestly think that something like processtree would be a good solution, since you can rent a phenomenal block of CPU time.

    > Each of these 682 nodes will be running Compaq's Tru64 Unix, which is capable of sharing a single file system

    Wow, if only home computers could share disks like that!!! This actually makes me think that the nodes are operating as independent computers rather than as parts of a whole... but hey, I'm probably wrong :)

  • These machines are basically MPI boxes: they run an optimized MPI implementation (not on top of TCP/IP) that takes advantage of the special features of the underlying switch, such as reflective memory (where memory writes on one node automatically appear on all other nodes), hardware broadcasts to all nodes, etc.

    Alastair McKinstry,
    AlphaServer SC Engineering (who make these machines)
    Compaq.
  • by Anonymous Coward
    Cool, wonder if they will use GSN aka HIPPI-6400 for the node networking.
    GSN can crank out 6.4 Gbit/s! It makes Gigabit Ethernet look like a turtle. :-)

    GSN webpage [hnf.org]

  • by amck ( 34780 ) on Friday August 04, 2000 @12:10AM (#880572) Homepage
    The article was vague with the 'souped-up Beowulf'. These AlphaServer SC machines are not just connected by fast Ethernet; they share a Quadrics switch that provides ~200 MB/s bandwidth with 5 µs latency per node.

    Alastair McKinstry
    AlphaServer SC Engineering, Compaq.
  • Dude, it's kind of redundant because this story actually comes from the imagine-a-beow-aww-never-mind dept. And the joke's a little old too...!
  • What you're talking about is internal bus bandwidth (actually, a crossbar rather than a normal bus). What the poster above is talking about is network bandwidth; if you can show me anything with better bandwidth over a network than the HIPPI stuff he mentioned, I'd sure like to know about it.

  • Compaq builds big computers
    Big computers build even bigger bombs
    Bombs blow up computers

    Yes I know I'm lame. -1 Redundant me or something.
  • Oops. Excuse me while I insert my foot in my mouth. You were obviously speaking about whether it would be likely for the internal bus to have lower bandwidth than the network. Sorry, my bad.

  • Oh, come on!

    It doesn't run Linux. It doesn't use Beowulf. It's its own completely different beast, yet somehow you've managed to connect it and Linux in this discussion and somehow pat Linux on the back for Compaq's engineering feats?

    Get over it... Linux isn't anywhere on the map in this discussion. Tru64 Unix is, though - but how much do they have in common besides being different branches off the Unix family tree?
  • > Unless they are data & process partitioned/independent processes.

    In that case you don't need a supercomputer, and a Beowulf cluster built of off-the-shelf PCs really is a good solution.

    > This still does not guarantee good performance. An RS/6000 SP has a fantastic interconnect, but can still run like a dog if there are too many processes dependent upon each other.

    True, but from what I've gathered (no personal experience, admittedly), the volume and latency of interprocess communication is much more common as a bottleneck than actual unresolvable dependencies.

    > but a good app hopefully does not have too much communication between nodes, or serialized data streams.

    I bet a lot of researchers wish they could choose to only work with "good apps" :)

    But certain classes of important problems (mostly simulations of chemical or physical processes) tend to be "bad"...

  • Notice how the specific purpose was not mentioned; just that it would be available for use for academic purposes. How many applications are there for a cluster like this? There aren't many projects that have the task architecture needed for it to be useful.

    I can see (possibly) weather prediction, but what else? It sounds like they had a chunk of money left over and thought that a supercomputer would be cool to own.
    Of course, I'm all for wasting money in the name of science. :-)


    --
  • Depends on the job. Just because a problem is thrown at a supercomputer doesn't mean it necessarily requires massive interprocessor communication. It was the number of jobs appearing that could be readily broken down into multiple independent pieces that gave rise to the Thinking Machines line of MPP supercomputers. Nuclear explosions are surprisingly independent in terms of the level of interprocessor communication required.

    Obviously you can't just throw together 500 PCs running the Beowulf kernel and call it a supercomputer. You do need dedicated high-speed networking, and clearly not all jobs parallelize readily to that model. For a special-purpose computer, however, Beowulf is quite acceptable when there is relatively minimal communication between the processes compared to the communication needed in, say, a vector-processor-based supercomputer.

    One company I've heard of that builds these Beowulf clusters is http://www.hpti.com/ From what I've heard, they do use some heavy duty connections between the nodes.
  • Yup, kind of.
    The problems themselves have 'Amdahl factors' (or 'coefficients', horrible word), which describe how they behave when partitioned to run multi-processor (multi-computer too).

    An example of the factors that affect this (please don't pick holes, it's simplified :-) ):

    Let us say one machine can efficiently handle a working set of 1000 chunks of data.
    If the data is tabular (2D), then its interface with its four neighbours is 4 * 32 = 128 units, so about an eighth of the data needs to be exchanged with neighbours each iteration.
    If the data is 3-dimensional, then its interface with its neighbours is 6 * 100 = 600 units, so 60% of the data will need to be exchanged. (I'm assuming each machine takes a cube of data.)

    If the machine can handle 1000000 chunks in its working set, then the 2D problem has interfaces requiring 4000 units to be exchanged, and the 3D problem requires 60000.

    So we see indeed that 3D models (fluid flow etc) do have greater inter-processor communication requirement than 2D models do.
    We also see that if you can partition the problem into larger jobs, the communication becomes less of an overhead.
    That is probably why the focus is on very high end processors in these arrays. 3 times as many Xeons for the same price probably wouldn't be worth it...

    FatPhil
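FatPhil's surface-to-volume arithmetic generalizes neatly; a quick sketch reproducing his numbers (assuming square/cubic partitions, as in his example):

```python
# Fraction of a node's working set that sits on the boundary (and so must
# be exchanged each iteration), for square (2D) and cubic (3D) partitions.

def exchange_fraction(chunks, dims):
    side = round(chunks ** (1.0 / dims))       # cells per edge of the block
    faces = 2 * dims                           # 4 neighbours in 2D, 6 in 3D
    boundary = faces * side ** (dims - 1)      # cells on the interface
    return boundary / chunks

for chunks in (1_000, 1_000_000):
    for dims in (2, 3):
        print(chunks, dims, exchange_fraction(chunks, dims))
# 1000 chunks: ~12.8% exchanged in 2D, 60% in 3D
# 1000000 chunks: 0.4% in 2D, 6% in 3D
```

Bigger per-node working sets shrink the exchanged fraction, which is the argument for fewer, beefier processors over three times as many Xeons.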
  • Could you imagine a world where different parts are in different time zones?
  • When the only tool that you've got is Linux, every problem looks like a Beowulf cluster...
  • Fuck, that must have taken you weeks to think up. Now try having an original thought.
  • You should say something about linux credit cards...
  • There's a spectrum of computing problems that are commonly attacked with large-scale systems, including traditional supercomputers and clusters. On one end, you have the high-end supers, with that beautiful I/O and memory bandwidth. They serve a valuable purpose for calculations that require a lot of interaction between routines -- think a big system of equations, where every variable appears in several of them.

    On the other extreme are widely distributed systems, like SETI@home. There, you get decent performance by splitting your data set into completely independent, smaller batches, and farming them out a chunk at a time to smaller systems, which then report back to a central aggregator. Efficiency-wise, this method loses a lot, because every client usually has to completely duplicate the functionality and overhead of the entire processing application; however, the low cost and high availability of processing cycles can make that almost immaterial.

    These big clusters fit somewhere in the middle. Especially in this case, where each node is a highly capable system in its own right, you can give more complex and varied instruction sets to each unit, and offer decent I/O bandwidth to the others. The clustering tools give the system a maintainability and level of transparency better than simply running the same application on many machines, and since each node can be given a completely separate set of instructions, there is less redundancy of code.

    The other key advantage to one big cluster, instead of a processtree-like distributed solution, is the real-time control and reliability of the processing. If you have a massive job that needs to be completed some time in the next month, an online distributed net might work fine; if you need a data set crunched by five p.m., you want to know exactly how much power you'll be getting, and run the job in one fell swoop.
  • Nooo! Say it ain't so. I put one [slashdot.org] in for you.
  • I am smoking a cigarette imagining the pr0n-serving capabilities of this baby...
    ======================================
  • by Anonymous Coward
    The parallelism is programmed explicitly -- C/C++ or Fortran 77/90 using MPI for parallel communication, or MPI combined with OpenMP (shared-memory communication). Since this system will use clusters of SMPs having only 4 CPUs, I suspect most users will program using MPI only.

    Since most users are programming scientific codes, and those codes are often dependent on highly tuned parallelized matrix libraries, the answer to whether commercial parallelization packages will be used is probably "not at all". What's more, large systems like this encourage large-scale parallelism (30-100 processes), something that auto-parallelizers accomplish poorly.

    And yes, MPI and PVM are certainly primitive. But there's so little commonality among scientific codes that, aside from the creation of standard scientific libraries, we're unlikely to see parallel programming meta-tools emerge into the mainstream. God knows, it's hard enough to get access to a decent parallel debugger on most parallel machines.

    The problem is the market's simply too small for any tool builders to make a decent living from selling parallel tools. Perhaps a half-dozen firms are trying to do this world-wide, and none are selling their wares into a large fraction of the supercomputing shops. As a rule, the millions go into hardware, not software. (Looks more impressive when you have something tangible to show off to VIP visitors.)

    Your point on the difficulty of programming machines like these is very well taken. It's a VERY BIG PROBLEM that has effectively been entirely ignored by the NSF. The irritating thing is that the machine can be 2-10 times more effective when the code is tuned properly for the architecture. And development time can be reduced by a factor of perhaps 10X when proper tools (especially shared memory) are available. Alas, good tuning tools (and the knowledge to use them) and large shared-memory architectures are rapidly approaching extinction.

    Whatever. Grad students and postdocs are poorly paid for a reason. Today's economics dictate that SOMEBODY has to waste great chunks of time in dealing with a poor parallel programming interface. It might as well be they.

    As far as architectural inadequacies such as poor latency or awkward topologies -- fagettaboutit. The supercomputing market is too small to influence architectural considerations as it did in the past. Clusters of SMPs are here to stay, probably until molecular computing or some other revolutionary technology supplants them. In the end, it's not performance that drives this market, but the vendors' bottom line. The misfortunes of Cray/SGI/TMC/FPS/Convex/KSR/etc/etc are testament to that.
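For readers who haven't seen the explicit message-passing style the comment above describes: this is not real MPI, just a stdlib Python sketch of the same scatter/compute/reduce pattern, with multiprocessing queues standing in for MPI_Send/MPI_Recv (all the names here are made up for illustration):

```python
# MPI-flavoured sketch: a "rank 0" process scatters slices of the data to
# the other ranks, each rank computes its partial result, and rank 0
# reduces the answers. Queues play the role of MPI_Send/MPI_Recv.
from multiprocessing import Process, Queue

def worker(rank, inbox, outbox):
    piece = inbox.get()              # analogous to MPI_Recv from rank 0
    outbox.put((rank, sum(piece)))   # analogous to MPI_Send back to rank 0

def scatter_reduce(data, n_ranks=4):
    inboxes = [Queue() for _ in range(n_ranks)]
    outbox = Queue()
    procs = [Process(target=worker, args=(r, inboxes[r], outbox))
             for r in range(n_ranks)]
    for p in procs:
        p.start()
    for r in range(n_ranks):         # scatter: one strided slice per rank
        inboxes[r].put(data[r::n_ranks])
    total = sum(outbox.get()[1] for _ in range(n_ranks))  # reduce at rank 0
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    assert scatter_reduce(list(range(100))) == sum(range(100))
```

Even in this toy, the programmer owns every send, receive, and slice boundary by hand -- which is exactly why the parent calls MPI primitive and why tuning it per-architecture eats grad-student time.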

  • The servers are using TruCluster, which is an add-on/associated product for Tru64 UNIX. All the systems share the same disk and config, with only a few things having to be done on each server. They also use a memory channel hub to talk to each other.
  • With Javascript turned off, it comes up blank. With Javascript turned on, it loads about 20 cookies. And it uses frames to do a job tables can do better.

    Don't try to do real web site development with Mozilla and manual hacking. Get Dreamweaver.

  • Not if you compare it with this [lrz-muenchen.de] machine (which is ranked 5th on the top 500 supercomputers list).
  • See http://tru64unix.compaq.com/faqs/publications/cluster_doc/cluster_50A/TCR50A_DOC.HTM [compaq.com]

    If you look under "Highly Available Applications", it talks about programming for it.

  • Latency is much more important than transfer rates in this context.
  • Wow, decent stuff posted in the middle of the night to /., the first post is not a troll, and this computer might still make it on to TOP500 [top500.org] by the time it exists :-)

    -M5B
  • NSF Awards $45 Million to Pittsburgh Supercomputing Center for "Terascale" Computing

    An unprecedented system located in Pittsburgh will be the most powerful in the world available for public research.

    PITTSBURGH - The Pittsburgh Supercomputing Center (PSC) has been awarded $45 million from the National Science Foundation to provide "terascale" computing capability for U.S. researchers in all science and engineering disciplines. Through this award, PSC will collaborate with Compaq Computer Corporation to create a new, extremely powerful system for the use of scientists and engineers nationwide.

    Terascale refers to computational power beyond a "teraflop" - a trillion calculations per second. While several terascale systems have been developed for classified research at national laboratories, the PSC system will be the most powerful to date designed as an open resource for scientists attacking a wide range of problems. In this respect, it fills a gap in U.S. research capability - highlighted in a 1999 report to President Clinton - and will facilitate progress in many areas of significant social impact, such as the structure and dynamics of proteins useful in drug design, storm-scale weather forecasting, earthquake modeling, and modeling of global climate change.

    The three-year award, effective Oct. 1, is based on PSC's proposal to provide a system, installed and available for use in 2001, with peak performance exceeding six teraflops. To achieve this, PSC and Compaq proposed a system architecture, based on existing or soon to be available components, optimized to the computational requirements posed by a wide range of research applications and which, at this level of performance, pushes beyond simple evolution of existing technology.

    The brain of the proposed six teraflop system will be an interconnected network of Compaq AlphaServers, 682 of them, each of which itself contains four Compaq Alpha microprocessors. Existing terascale systems rely on other processors, but extensive testing by PSC and others indicates that the Alpha processor offers superior performance over a range of applications. Development of this system will draw on a history of collaboration between PSC and Compaq, and represents an extension of PSC's history of success at installing untried, new systems - resolving the myriad of unanticipated hardware and software glitches that come up - and turning them over rapidly to the scientific community as productive research tools.

    The PSC terascale system, to be located at the Westinghouse Energy Center, Monroeville, will be a component of NSF's Partnerships for Advanced Computational Infrastructure (PACI) program, supplementing other computational resources available to U.S. scientists and engineers.

    "The PSC has - with its partners at Carnegie Mellon University, the University of Pittsburgh and Westinghouse - an excellent record of installing innovative, high-performance systems and operating them to maximize research productivity," said NSF director Rita Colwell.

    "We're pleased that NSF's terascale initiative gives us this opportunity to use PSC's proven capability in high-performance computing, communications and informatics in support of the national research effort," said PSC scientific directors Michael Levine and Ralph Roskies in a joint statement. "Working in partnership with Compaq, we'll create a system that enables U.S. researchers to attack the most computationally challenging problems in engineering and science."

    "Compaq is looking forward to working with the National Science Foundation and the Pittsburgh Supercomputing Center and we are committed to the success of the terascale initiative," said Michael Capellas, Compaq's president and CEO. "With our AlphaServer systems and Tru64 UNIX, we are providing the technology infrastructure for some of the most advanced computing projects in the world. This is further proof of Compaq's leadership in high-performance computing and our commitment to help open new frontiers in science and technology."

    Development and implementation of the terascale system, including software and networking, will draw on fundamental research in computer science. A significant strength of PSC is its tri-partite affiliation with Westinghouse and with Carnegie Mellon University and the University of Pittsburgh and the pooled computing-related expertise of faculty and staff at both universities.

    "This award, which comes as the culmination of a national competition, recognizes PSC's leadership in high-performance computing and communications," said Jared L. Cohon, president of Carnegie Mellon. "And it provides another key building block for our region's technology future, enhancing our international stature in the development and application of advanced computing technology."

    "A gap exists between the computing resources available to the classified world and the open scientific community," said Mark Nordenberg, chancellor of the University of Pittsburgh. "It is ideal that PSC, a world leader in acquiring and deploying early the most powerful computers for science and engineering, can contribute to filling this gap. This award also demonstrates the unique scientific strengths that exist in Pittsburgh when its major research universities partner with each other and with leaders in industry."

    "Today's terascale award is one more in a long list of PSC's major achievements," said Charlie Pryor, president and CEO of Westinghouse Electric Company. "Westinghouse is proud of PSC's contribution to the nation's scientific community and is pleased to have been associated with PSC since its inception."

    Under the proposal, PSC will by the end of this year install an initial system with a peak performance of 0.4 teraflops. The six teraflop system, which will use faster Compaq Alpha microprocessors not yet available, will evolve from this system. The four-processor AlphaServers use high-bandwidth, low-latency interconnect technology developed by Compaq through a U.S. Department of Energy advanced technology program.

    The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh together with the Westinghouse Electric Company. It was established in 1986 and is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry.

    # # #

    An artist's rendition of PSC's proposed terascale system and examples of potential research applications are available at: http://www.psc.edu/publicinfo/tcs
  • "The new Compaq machine is a sort of souped-up version of a popular and inexpensive supercomputer technique called Beowulf, which links dozens or hundreds of computers connected by a network."

    I dunno -- seems to me like the author is saying that it really is a Beowulf cluster.

    -Waldo
    -------------------
  • I don't dare challenge the need for such a computer (the people who will build and use it are 10 times smarter than me), but I wonder if it would be good to research faster algorithms. Remember the 3-D environment that some 20-year-old made that runs on the power of a Game Boy? If you need a powerful machine for most computing needs, that just tells me that your OS or whatever program you're running was poorly made (anything by Microsoft, right?).

    So without going offtopic, are there any proven conspiracies to make poorly coded (slow) programs just to make you go buy a faster computer? Maybe Compaq is causing cancer to sell the computers that will help cure it! (Maybe I'm an idiot! *smack*)
  • I have been trying to buy an alpha for the last three months but Compaq won't let me. I guess they don't care about people placing $10,000 orders, just those placing $36,000,000 orders.

    It is so hard to get a quote out of Compaq or a reseller that it is as if they don't want to sell anything. I have been promised for Friday (or maybe Monday) a quote I asked for two weeks ago.

    DEC was the same way of course, but I guess this explains why CPQ hasn't moved in 2.5 years, while SUNW has doubled three times in that period.

    In another stroke of marketing brilliance, the alpha configurator only runs under Windows. Of course it is such a useless piece of crap that once I finally got it running it was of no help.

    And as long as I am ranting about Compaq, wasn't the .18micron 1+GHz chip supposed to be here already? The machine I am being quoted on is the same xp1000a I could buy this time last year (well nearly).

    The alpha processor is absolutely the fastest way to get done the computing jobs I need to get done, but as soon as SAS Software for Linux is available (and the beta is in the mail to me now) and the 2.4 kernel with its proper NFS and LVM features is available, I am ditching Compaq and going with dual Athlons (which had better be out by then).

  • Alphas might be power-hungry compared with a G4 or PIII, but the Power3 chip is a real monster.

  • by -brazil- ( 111867 ) on Friday August 04, 2000 @12:39AM (#880601) Homepage
    Well, those machines are most commonly employed to solve numerical problems (as in: huge systems of differential equations). For that kind of work, High Performance Fortran [rice.edu] can be used. HPF basically consists of extensions to Fortran that allow you to explicitly divide data (i.e. parts of matrices) between nodes and still use standard operations on it. The compiler takes care of the inter-node communication, and if you divided the data wisely, there hopefully won't be too much of it.
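To make that data-division idea concrete, here is a hedged stdlib Python sketch (not HPF itself, and all names invented): block-distribute a 1-D array across hypothetical nodes, give each block one halo cell per neighbour, and apply a local 3-point Jacobi step of a heat equation. In HPF, the compiler generates the equivalent halo communication from your distribution directives.

```python
# HPF-style data division in miniature: each "node" owns a contiguous
# block of the array plus one halo cell per neighbour, and applies one
# Jacobi step of a 1-D heat equation locally.

def jacobi_global(u):
    # Reference: one Jacobi step on the whole array, fixed boundaries.
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2
                     for i in range(1, len(u) - 1)] + [u[-1]]

def local_step(block, lhalo, rhalo):
    # lhalo/rhalo are the neighbouring node's edge values; None marks a
    # physical boundary, where the value is held fixed.
    ext = [block[0] if lhalo is None else lhalo] + block \
        + [block[-1] if rhalo is None else rhalo]
    new = [(ext[i - 1] + ext[i + 1]) / 2 for i in range(1, len(block) + 1)]
    if lhalo is None:
        new[0] = block[0]
    if rhalo is None:
        new[-1] = block[-1]
    return new

def jacobi_distributed(u, n_nodes=4):
    n = len(u)
    bounds = [k * n // n_nodes for k in range(n_nodes + 1)]
    out = []
    for k in range(n_nodes):
        lo, hi = bounds[k], bounds[k + 1]
        out += local_step(u[lo:hi],
                          u[lo - 1] if lo > 0 else None,  # halo from left node
                          u[hi] if hi < n else None)      # halo from right node
    return out

u = [float((3 * i) % 11) for i in range(16)]
assert jacobi_distributed(u, n_nodes=4) == jacobi_global(u)
```

Dividing the data "wisely" here means each node only ever needs one cell from each neighbour per step, which is why the inter-node traffic stays small.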
  • by gotih ( 167327 ) on Friday August 04, 2000 @12:42AM (#880602) Homepage
    The PSC has a release here [psc.edu]

    I was involved with the pittsburgh supercomputing center [psc.edu] in high school. We were given a grant for processing time, something like $40,000, to compute the heat loss of my community due to improper insulation. Admittedly, I was on the fringe of the group, but I know they have been using massively parallel systems for a while. They also had an Internet connection, which is where I first used Lynx.

    At that time they had a T3D and a "DEC supercluster" which was IIRC 256 Digital Alpha computers. They had some other supercomputers but I can't remember what they were. The supercluster was later upgraded to 512 processors. It seems that this is the same thing, updated and built by Compaq (who bought Digital).
  • Yeah, well, I was considering the crossbar bus as the amount of communication happening between nodes.

    And pointing out that large traditional computers have a higher bandwidth than networked clusters.

    Whether this affects performance depends very much on the application: in things like thermodynamic modelling it does; in RC5 cracking it doesn't.
  • Well they recently decomissioned the Cray t3d... guess why?

    Was it that they couldn't afford the staff to run such a beast... no

    Was it that they couldn't find applications to run on it .. no

    It was because it took about a megawatt of power to run, and they decided to put it right in the middle of a great big building, so they couldn't get the heat out easily. All the airconditioning round there caused it to rain in the non-airconned corridors :)
  • Compaq is also in bed with the University of Western Ontario, building a 48-processor Beowulf (Alpha+Linux) [beowulf.uwo.ca]. Compaq seems to be all hot and bothered about supercomputing as of late.

    Now the *exciting news* is that they are teaming up with up to three other universities to build a "Beowulf of Beowulfs" (think four of these babies [beowulf.uwo.ca] connected together through *very* fast network connections). You submit your job and "it" decides: if there's too much going on at Western, it can queue part of your job up at another university.

    Baldric [baldric.uwo.ca], the student-run Beowulf, is also (read: hopefully) going to be a part of this, with our donation of 50-some nodes (just off the truck) from Sprint Canada. (OK, that was a blatant plug ;)
  • Here's a copy of the Carnegie Mellon press release on the topic. This is from the CMU 8 1/2 x 11 News, which is also posted on the CMU bboards. It includes some info not in the other articles, so I figure I'll post it here:

    (The "8 1/2 x 11 News" is published each week by the Department of Public Relations. The newsletter is available on the official.cmu-news and cmu.misc.news bulletin boards.)

    NSF Awards $45 Million to Supercomputing Center for "Terascale" Computing

    The Pittsburgh Supercomputing Center (PSC) has been awarded
    $45 million from the National Science Foundation to provide "terascale"
    computing capability for U.S. researchers in all science and engineering
    disciplines. Through this award, PSC will collaborate with Compaq Computer
    Corporation to create a new, extremely powerful system for the use of
    scientists and engineers nationwide.

    Terascale refers to computational power beyond a "teraflop" -- a trillion
    calculations per second. While several terascale systems have been
    developed for classified research at national laboratories, the PSC system
    will be the most powerful to date designed as an open resource for
    scientists attacking a wide range of problems. In this respect, it fills a
    gap in U.S. research capability -- highlighted in a 1999 report to
    President Clinton -- and will facilitate progress in many areas of
    significant social impact, such as the structure and dynamics of proteins
    useful in drug design, storm-scale weather forecasting, earthquake
    modeling, and modeling of global climate change.

    The three-year award, effective Oct. 1, is based on PSC's proposal to
    provide a system, installed and available for use in 2001, with peak
    performance exceeding six teraflops. To achieve this, PSC and Compaq
    proposed a system architecture, based on existing or soon to be available
    components, optimized to the computational requirements posed by a wide
    range of research applications and which, at this level of performance,
    pushes beyond simple evolution of existing technology.

    The brain of the proposed six teraflop system will be an interconnected
    network of Compaq AlphaServers, 682 of them, each of which itself contains
    four Compaq Alpha microprocessors. Existing terascale systems rely on other
    processors, but extensive testing by PSC and others indicates that the
    Alpha processor offers superior performance over a range of applications.

    Development of this system will draw on a history of collaboration between
    PSC and Compaq, and represents an extension of PSC's history of success at
    installing untried, new systems -- resolving the myriad of unanticipated
    hardware and software glitches that come up -- and turning them over
    rapidly to the scientific community as productive research tools.

    The PSC terascale system, to be located at the Westinghouse Energy Center,
    Monroeville, will be a component of NSF's Partnerships for Advanced
    Computational Infrastructure (PACI) program, supplementing other
    computational resources available to U. S. scientists and engineers.

    "The PSC has -- with its partners at Carnegie Mellon University, the
    University of Pittsburgh and Westinghouse -- an excellent record of
    installing innovative, high-performance systems and operating them to
    maximize research productivity," said NSF director Rita Colwell.

    "We're pleased that NSF's terascale initiative gives us this opportunity to
    use PSC's proven capability in high-performance computing, communications
    and informatics in support of the national research effort," said PSC
    scientific directors Michael Levine and Ralph Roskies in a joint statement.
    "Working in partnership with Compaq, we'll create a system that enables
    U.S. researchers to attack the most computationally challenging problems in
    engineering and science."

    "Compaq is looking forward to working with the National Science Foundation
    and the Pittsburgh Supercomputing Center and we are committed to the
    success of the terascale initiative," said Michael Capellas, Compaq's
    president and CEO. "With our AlphaServer systems and Tru64 UNIX, we are
    providing the technology infrastructure for some of the most advanced
    computing projects in the world. This is further proof of Compaq's
    leadership in high-performance computing and our commitment to help open
    new frontiers in science and technology."

    Development and implementation of the terascale system, including software
    and networking, will draw on fundamental research in computer science. A
    significant strength of PSC is its tri-partite affiliation with
    Westinghouse and with Carnegie Mellon University and the University of
    Pittsburgh and the pooled computing-related expertise of faculty and staff
    at both universities.

    "This award, which comes as the culmination of a national competition,
    recognizes PSC's leadership in high-performance computing and
    communications," said Jared L. Cohon, president of Carnegie Mellon. "And it
    provides another key building block for our region's technology future,
    enhancing our international stature in the development and application of
    advanced computing technology."

    "A gap exists between the computing resources available to the classified
    world and the open scientific community," said Mark Nordenberg, chancellor
    of the University of Pittsburgh. "It is ideal that PSC, a world leader in
    acquiring and deploying early the most powerful computers for science and
    engineering, can contribute to filling this gap. This award also
    demonstrates the unique scientific strengths that exist in Pittsburgh when
    its major research universities partner with each other and with leaders in
    industry."

    "Today's terascale award is one more in a long list of PSC's major
    achievements," said Charlie Pryor, president and CEO of Westinghouse
    Electric Company. "Westinghouse is proud of PSC's contribution to the
    nation's scientific community and is pleased to have been associated with
    PSC since its inception."

    Under the proposal, PSC will by the end of this year install an initial
    system with a peak performance of 0.4 teraflops. The six teraflop system,
    which will use faster Compaq Alpha microprocessors not yet available, will
    evolve from this system. The four-processor AlphaServers use
    high-bandwidth, low-latency interconnect technology developed by Compaq
    through a U.S. Department of Energy advanced technology program.

    The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon
    University and the University of Pittsburgh together with the Westinghouse
    Electric Company. It was established in 1986 and is supported by several
    federal agencies, the Commonwealth of Pennsylvania and private industry.

    # # #
    An artist's rendition of PSC's terascale system and examples of potential
    research applications are available at:
    http://www.psc.edu/publicinfo/tcs
  • Actually, I don't think Alphas run RC5 very well, because the client is coded to take advantage of some x86 hardware instructions that the Alpha lacks.

    MP3 encoding and SETI, well, that's a horse of a different colour. I suspect this cluster would rip the Library of Congress's CD collection in no time flat! :)

    FWIW, I work at API, the other source for Alphas.
  • Because Compaq also has memory-sharing technology, not just filesystems. Search Compaq's site for the Memory Channel hub.
  • That's OK, Compaq will sell you an Athlon too, since we helped develop that chip too... but it will still stand in the dark shadow of the Alpha. Nothing outruns an Alpha!!! Oh yeah, and while I am at it: Sun may have you all fooled with their "Rock Solid OS" (right), but that is why Compaq Tru64 UNIX just got voted #1 UNIX and cluster UNIX. Meanwhile, Sun has been throwing CPUs at you ~10%-20% faster than the average PC on the market for the last 10 years and selling them to you at 10x the price... hat's off to Sun, shame on the fools who paid for them. I feel the need for SPEED!!!!!
  • Can you imagine... a beowulf cluster of these?

    hrmmmmm...
    I don't know about you, but I wouldn't trust Compaq building /my/ supercomputer.

    Ever get the impression that your life would make a good sitcom?
    Ever follow this to its logical conclusion: that your life is a sitcom?
  • So that makes about 204 kW just for the computers (when working at full speed only, OK)!

    Compaq: "two 350W ATX power supplies for the whole tree should do it."

    Ever get the impression that your life would make a good sitcom?
    Ever follow this to its logical conclusion: that your life is a sitcom?
  • I wonder what Beowulf would think of all this.
    Prob'ly just get upset and go kill a few more monsters to unwind.
    Perhaps a Grendel of Grendels?

    Ever get the impression that your life would make a good sitcom?
    Ever follow this to its logical conclusion: that your life is a sitcom?
  • If you look at the Seti stats by platform, you'll find this line...

    alpha-compaq-T64Uv4.0d/EV67 87171 9.95 years 1 hr 00 min 00.4 sec

    Looks like the fastest SETI processor out there. I think I'm right in saying that this is at Compaq's research centres.
  • You can buy them at any reseller like Carotek, Great Lakes, Dexon, etc. So I don't know what the hell you are talking about, but if you can't get it at one, go to another. It's like whining that Microsoft won't sell you windows...so go to CompUSA or Best Buy or Office Depot and buy it.

    When I can go to CompUSA and buy a dual processor alpha with an academic discount, I'll short INTC to the limits of my margin.

  • Take a look at this [sun.com] - it's the tech specs for Sun's E10000 Starfire server. Not quite in the supercomputer league, and yet it has a memory bandwidth of 102.4 Gbit/s and a latency of less than 500 ns.

  • It isn't the linux kernel hack that defines a Beowulf, so much as the programming library used to launch, manage, and link the various computing nodes (as well as the nodes being a bunch of independent, relatively "cheap" commodity PCs). The "kernel hack" (probably referring to bonded ethernet drivers) just helps facilitate faster communication, but could be done (in principle) on just about any unix (or even NT).

    In other words, Beowulf is not a Linux-only term; it could also be done with NT stations (and has been). If they were using the same programming library for node communication, it might even allow for a mixed NT/Linux system (in principle).
  • by acacia ( 101223 ) on Thursday August 03, 2000 @11:14PM (#880617)
    I keep hearing about these projects, and the means by which the nodes of these machines are connected, but what I really want to know is how these clusters are programmed. More to the point, how are data and process parallelism implemented (or not) when you are talking about a high-complexity environment and a fairly low level of abstraction?

    I write software for MPP & large-scale SMP machines, but I use tools like Ab Initio or Torrent Orchestrate to abstract away much of the complexity for traffic control, checkpointing, hash-partitioning data, etc. In my cursory examination of PVM and the MPI implementation, they seem pretty primitive, and the code must be a nightmare to implement properly, much less maintain.

    Is anyone working on a GNU componentized approach similar to the commercial packages I mentioned earlier to take care of this? Is anyone interested in doing this? This could be a pretty cool project.

    The other reservation I have when I look at the whole Beowulf architecture is the node latency issue. Unless you have highly partitioned code, with independent processes, these machines are gigantic toasters, spending most of their lives waiting for I/O. A well-designed, partitioned app should be CPU-bound. Most of the business apps I develop don't exhibit these (well-partitioned) characteristics all the way through the process. It makes me wonder how effective these machines really are.
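One back-of-the-envelope way to put a number on that worry is Amdahl's law: if only a fraction p of a job parallelizes and the rest serializes (or waits on I/O), no number of nodes can speed it up past 1/(1-p). A quick Python sketch, with made-up fractions purely for illustration:

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# parallelizes perfectly and the remainder (serial code, I/O waits) doesn't.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A "gigantic toaster": 95% parallel sounds good, but even thousands of
# CPUs cap out below 20x, because the serial 5% dominates.
assert amdahl_speedup(0.95, 2728) < 20
# A well-partitioned, CPU-bound code (99.9% parallel) still scales nearly
# linearly at modest node counts.
assert amdahl_speedup(0.999, 64) > 60
```

Which is the poster's point in two lines: unless the app is partitioned almost all the way through, most of those 2,728 processors are just keeping the room warm.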
