Compaq To Build DEC Beowulf Supercomputer
Tower writes: "Compaq Computer (Digital) and the Pittsburgh Supercomputing Center have won a $36 million contract to build a 2,728-processor supercomputer using 1.1 GHz EV68 processors in a 682-node Beowulf setup. Check it out here." This is a different machine from this one: that one was supposed to be used to calculate nuclear explosions; this one will be used by the National Science Foundation to work on biophysics, global climate change, astrophysics and materials science, according to the article.
Re:But why? (Score:3)
http://www.globalfilesystem.org [globalfilesystem.org]
Very cool technology. I have been following this for quite a while and it shows tremendous promise for solving all kinds of disk scalability problems.
Re:the power of linux (Score:2)
They've got a tonne of Ultra 10 and a few Ultra 60 machines, and as I understand it they just start idle-priority threads in the background of everyone's machine.
However, I'm sure they run down to play with the supercomputers on our campus when they get bored.
Mmmm, if you like big computers, look at this [ed.ac.uk], but it looks far better in real life.
Re:How much power does such a thing use? (Score:2)
dribble.... (Score:1)
:P
Asleep at the wheel (Score:1)
There have been three articles posted since I went to bed 11 hours ago: one two hours ago, another four hours before that, and the one before that was four hours earlier still.
Failing that, can we have a European slashdot or something?
Rich (just about to lose his +1 bonus I think)
Re:But why? (Score:1)
Re:More on PSC (Score:2)
This page [psc.edu] provides a description of the work researchers plan to do with the new supercomputer.
The center is a joint venture between Carnegie Mellon [slashdot.org], the University of Pittsburgh [pitt.edu], and the old Westinghouse Electric [westinghouse.com] company.
It's also interesting to note that PSC & CMU formed the NCNE Gigapop [ncne.net] that provides Internet connectivity to CMU, Pitt, WVU, and Penn State.
distributed computing (Score:1)
Off the top of my head (Score:1)
Re:But why? (Score:3)
A lot of these problems, like climate modelling, can be worked on by partitioning the problem into cells; you just need to fix up at the edges on each iteration (see the sketch below). Independent systems joined together, particularly with a low-latency interconnect, fit this sort of problem space well.
Obviously, there are some problems, where the dependencies between the data sets are nil, for which commodity Intel/Athlon/Alpha Linux boxes are ideal. Still more where they are cost-efficient ;)
Supercomputing facilities are best equipped with a mixture of these. For some jobs a steamroller is better than a Porsche. When you've got a specific requirement, and lots of money is involved, off-the-shelf components are not always the best bet.
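For the curious, here is roughly what that cell partitioning looks like in code. This is only a toy sketch in Python with mpi4py (my choice of library; the article says nothing about what software PSC will run): each rank owns a strip of the grid and "fixes up at the edges" by swapping one-row halos with its neighbours every iteration.

```python
# Toy domain decomposition with halo exchange -- a sketch, not PSC's code.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns 100 interior rows of the global grid, plus one
# "ghost" row above and below to hold the neighbours' edge data.
local = np.zeros((102, 100))
up   = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # Fix up at the edges: trade boundary rows with both neighbours.
    comm.Sendrecv(local[1],  dest=up,   recvbuf=local[-1], source=down)
    comm.Sendrecv(local[-2], dest=down, recvbuf=local[0],  source=up)
    # Then update the interior independently (diffusion-style averaging).
    local[1:-1] = 0.25 * (local[:-2] + local[2:]
                          + np.roll(local[1:-1], 1, axis=1)
                          + np.roll(local[1:-1], -1, axis=1))
```

Only the two edge rows cross the wire each step, which is why a low-latency interconnect matters far more here than raw bandwidth.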
they surely lack the memory bandwidth that makes traditional mainframes and supercomputers so powerful.
Yes, but these aren't Beowulf clusters. Quadrics hardware is not some cheap and cheerful solution like switched Gigabit Ethernet ;)
Re:Not Beowulf/Linux (Score:1)
---------------
Re:Not Beowulf/Linux (Score:1)
Re:Compaq and Western (Score:1)
Re:Not Beowulf/Linux (Score:3)
I took it to be a "Beowulf clone" or a "Beowulf-style cluster". AFAIK (please correct me!), "Beowulf" refers specifically to a GPL'd Linux kernel hack, and thus any "Beowulf cluster" would be a Linux cluster. But I would assume it would be more or less straightforward to implement on other Unices, at least for parties who have the source code, in which case I would call it a "Beowulf-type cluster", or give it a new name altogether. But perhaps the term has been generalized; I think it has already generalized once, from referring to "the" Beowulf cluster (the original one) to referring to all clusters built with the same kernel patch.
OTOH, there was a [epithet of your choice for a moron here] on the Beowulf mailing list for a while, who was adamant that his NT cluster was a "Beowulf" system. I never figured out why he even subscribed, since any exchange of information there would be completely irrelevant to his situation. Shows the importance of bragging rights in the IT world, I suppose.
--
Obligatory.... (Score:1)
Cyano
Definition of "Beowulf" (Score:1)
(Emphasis added.) The Compaq machine runs Tru64 UNIX. [tru64.org]
Re:quick but not that quick (Score:1)
Depends on the graphics card you order with it.
--
Re:It is not a Beowulf cluster (Score:2)
From the Beowulf FAQ [dnaco.net]: The more general term is NOW, Network of Workstations, which includes Beowulf, Beowulf-like systems on non-open OSes, and perhaps other types of cluster as well.
So strictly speaking, this is not a Beowulf. Of course the meaning of the term may be drifting, as with "hacker" and "cracker". (Languages do that.)
--
Does anyone know what the cheapest MIPS source is? (Score:2)
Would you get more MIPS/buck out of massive piles of $5 microcontrollers, or out of, say, K6 500 MHz chips with cheap MOBOs?
Again, just totally ignoring all other factors, no matter how silly you think that is.
Personally, I'd like to hijack a top-of-the-line fab and put grids of hundreds of little computers, each with a few K of memory, on dies that would normally be used for one microprocessor. I don't know what I'd do with them, but I'm sure I'd find some cool app like massive neural nets.
Ahhh... to set up a massive pile of millions of parallel processors that could start from "I think therefore I am" and get all the way up to deducing the existence of rice pudding and income tax before I hook up the data banks...
---
Despite rumors to the contrary, I am not a turnip.
Re:Not Beowulf/Linux (Score:1)
The master node sends a work message to the clients; they work on it and send the result back. Using LAM/MPI message passing, the program is started on all nodes (i.e., it rshes to each client and runs the program).
Other cluster types such as Mosix use a kernel "hack" to migrate processes among nodes at the kernel level (not the correct terminology, but I don't know it well enough). Also, failover/high-availability clusters are often used in server farms to take over when a server goes down or to keep up with the load.
So in all, a Beowulf cluster is just one of many, many types, made for a specific task: number crunching. I could go into a lot more depth.. Heck, I've been paid to read up on clusters and try it out myself.. I simplified a lot of what I said.. Email me if you want more depth.. But either way, it's still neat to say I built a Beowulf cluster..
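To make the master/worker flow from the first paragraph concrete, here it is in miniature. A sketch only (mpi4py rather than LAM/MPI's C bindings; run with something like mpirun -np 5):

```python
# Master/worker message passing in miniature -- illustrative only.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:                                  # the master node
    chunks = [list(range(i, i + 10)) for i in range(0, 40, 10)]
    for dest, work in enumerate(chunks, start=1):
        comm.send(work, dest=dest, tag=1)      # send a work message
    results = [comm.recv(source=MPI.ANY_SOURCE, tag=2)
               for _ in chunks]                # collect the answers
    print("total:", sum(results))
else:                                          # the client nodes
    work = comm.recv(source=0, tag=1)          # receive work
    comm.send(sum(work), dest=0, tag=2)        # send the result back
```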
-Daniel
Re:How much power does such a thing use? (Score:1)
And to think the labs at work kept tripping a 75 amp breaker.... Sheesh. BTW, the sound of a few hundred computers all shutting off at once is neat... The shrieks of the engineers that follow are even better..
-Daniel
How much power does such a thing use? (Score:3)
Let's try to estimate it: 682 systems, each containing 4 processors. I guess each will need a 300 W power supply. So that makes about 204 kW just for the computers (when working at full speed only, OK)!
At 110 V this thing would draw about 1860 amps, not something you'd like to try at home (imagine the electricity bill!)
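Spelling out that arithmetic (same guesses as above: 300 W per box, 110 V):

```python
nodes, watts_each, volts = 682, 300, 110   # assumptions from the post above
total_watts = nodes * watts_each           # 204,600 W, i.e. about 204 kW
amps = total_watts / volts                 # about 1,860 A at 110 V
print(f"{total_watts / 1000:.1f} kW, {amps:.0f} A")
```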
Re:Goodly, /. at 0500... (Score:1)
Re:It would be even better... (Score:1)
Re:Will this really be supercomputer? (Score:1)
Unless they are data & process partitioned/independent processes.
>So pretty much everything depends on the "switches" they'll use to connect the nodes
This still does not guarantee good performance. An RS/6000 SP has a fantastic interconnect, but can still run like a dog if there are too many processes dependent upon each other.
Not that what you are saying is wrong. You obviously know something about the subject (I've been reading your posts) :-) but a good app hopefully does not have too much communication between nodes, or serialized data streams.
Beowulf....... (Score:1)
Re:Will this really be supercomputer? (Score:2)
Ok, that one is faster... (>770 MB/s internode using MPI, no mention of latency). But it doesn't qualify as a Beowulf-style machine; it is all specialized Hitachi stuff.
This Compaq machine is, in my opinion, 'Beowulf-style': it uses standard 4-way SMP machines using PCI network cards and fast switches for interconnection. For this, the QSW products still look impressive to me.
Re:How much power does such a thing use? (Score:2)
Re:the power of linux (Score:1)
Ok, who was... (Score:1)
--- Never hold a dustbuster and a cat at the same time ---
Re:But why? (Score:1)
Their HBA list is not up to date, or they are unaware of the JNI [jni.com] adaptors.
These guys also have the first 2 Gigabit FC HBA.
Re:How much power does such a thing use? (Score:1)
Re:Purpose? (Score:1)
Test Drive a Beowulf (Score:5)
Yes, I work for Compaq. No, I don't speak for them.
Re:Purpose? (Score:1)
Including Storm Prediction [psc.edu], Protein Folding, Turbulence Studies, Earthquake Preparedness, AIDS Research, Cardiac Fluid Modeling, Oceanic Phenomena, Electromagnetics and Fluid Dynamics.
They've also got some pretty neat animations of some of the above.
Re:But why? (Score:3)
If you want massively parallel systems, then I would honestly think that something like processtree would be a good solution, since you can rent a phenomenal block of CPU time.
Well, obviously these machines are something in between the extremes you mention, and there are applications for which this is sort of a sweet spot.
I have used an application for which this type of machine is excellent: molecular dynamics simulations.
The usual strategy for this type of software is to partition your system by giving every processor a share of the atoms. Then you start calculating forces and motions etc. for each part for a short time period, and then compare them. Many forces extend to neighbouring parts, and atoms can move to other parts, so quite a lot of communication between the nodes is necessary. After exchanging this info, each node can compute the next timestep. This works quite well if most interactions between atoms are relatively short-ranged.
This type of app is excellently suited to a large cluster. It is naturally suited to message passing, so programming it using MPI is easy. If you partition the system well, the memory use of one node is quite small and fits for a large part in cache. IO between nodes has to happen quite often, so latency is a problem. So processtree is obviously no option.
These simulations scale quite well to larger molecular systems. Unfortunately, many researchers don't want more atoms in their systems; they want the simulation of their small system done faster. That kind of scaling is bad: if you end up with only a few atoms per node, the communication overhead bogs it down.
FYI, here [chem.rug.nl] are some old benchmarks of the software I used (GROMACS). Although this software is considered to scale excellently, a 64-node machine is only 32 times as fast as a single-node machine... (a rough estimate of why is sketched below)
Sorry if all this is incomprehensible; I guess I want to say too much too fast...
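For what it's worth, 32x on 64 nodes is about what Amdahl's law predicts if a bit under 2% of the work is serial or spent communicating. A toy estimate (my numbers, not a GROMACS benchmark):

```python
def amdahl_speedup(serial_fraction, nodes):
    """Speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

# Roughly 1.6% serial/communication overhead already caps 64 nodes at ~32x.
for f in (0.0, 0.016, 0.05):
    print(f"serial fraction {f:.3f}: 64 nodes -> {amdahl_speedup(f, 64):.1f}x")
```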
Beowulf Cluster (Score:1)
--
Re:distributed computing (Score:1)
What it's really used for (Score:2)
Yes, exactly:
Compaq quality (Score:1)
Rich
Re:Can you imagine... (Score:1)
oh yeah (Score:2)
Re:Programming this Beast (Score:1)
People have pointed out that the network latency and bandwidth are often the limiters in this kind of setup. I would like to point out that the next bottleneck to scaled speed-up would be memory bandwidth and cache reuse. Fetching from main memory (RAM) costs on the order of 300 clock cycles, while using stuff in cache only takes 1.
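A toy demonstration of the effect in Python/NumPy (illustrative only; it is far more dramatic in tuned C or Fortran). Summing the rows of a C-ordered matrix walks contiguous memory and reuses cache lines; summing the columns strides across memory and misses constantly:

```python
import time
import numpy as np

n = 4000
a = np.random.rand(n, n)        # C order: each row is contiguous in memory

t0 = time.perf_counter()
rows = [a[i, :].sum() for i in range(n)]   # contiguous, cache-friendly
t1 = time.perf_counter()
cols = [a[:, j].sum() for j in range(n)]   # strided by n*8 bytes, cache-hostile
t2 = time.perf_counter()

print(f"row-wise: {t1 - t0:.3f}s   column-wise: {t2 - t1:.3f}s")
```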
quick but not that quick (Score:1)
Will this really be supercomputer? (Score:2)
Re:But why? (Score:1)
But I bet it has a great linpack score. Benchmarks don't lie.
-jlg
ps. use Debian! www.debian.org
Not Beowulf/Linux (Score:4)
The writer did mention Beowulf, but only to say that it was similar.
__
Conclusions are easy to jump to. Just be prepared to jump again...
Monte Carlo techniques can use this (Score:1)
These are commonly used in particle transport routines, where directions, interactions, and birthplace information can be simulated by generating a random number and then comparing that number to a known statistical behavior. It's a powerful and surprisingly easy thing to program. It can be slow, but brute-force computing is making all sorts of problems practical. MCNP is a very mature code that uses this.
They can also be used to solve multidimensional integrals. Here they rule, and time savings over other methods are very good.
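A minimal illustration of the idea (my own toy example, nothing to do with MCNP): estimate the volume of the unit sphere by generating random points and comparing each against the known condition x^2 + y^2 + z^2 <= 1.

```python
import random

n = 1_000_000
# Sample points in the positive octant of the enclosing cube.
hits = sum(random.random()**2 + random.random()**2 + random.random()**2 <= 1.0
           for _ in range(n))
print(8 * hits / n)   # converges to 4/3 * pi ~ 4.18879
```

Every sample is independent of every other, which is exactly why this class of method parallelizes so painlessly.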
!! (Score:1)
Re:Will this really be supercomputer? (Score:3)
Which has an excellent product page here [quadrics.com]. 2.35 usec latency for a short message. 340 MB/s peak, 210 MB/s sustained throughput. Fault tolerant redundant links. Tru64, Solaris and Linux support. I know nothing about this, but it sounds impressive to me.
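Plugging those figures into the usual latency-plus-bandwidth cost model (my arithmetic, using the sustained number):

```python
latency = 2.35e-6     # seconds per message, from the product page
bandwidth = 210e6     # bytes per second, sustained

def transfer_time(nbytes):
    return latency + nbytes / bandwidth

for size in (1_024, 1_048_576):        # a 1 KB vs a 1 MB message
    print(f"{size:>9} bytes: {transfer_time(size) * 1e6:8.1f} usec")
```

Small messages are almost pure latency (about 7 usec for 1 KB), big ones almost pure bandwidth (about 5 ms for 1 MB), which is why that 2.35 usec figure matters so much for chatty codes.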
But why? (Score:3)
If you want massively parallel systems, then I would honestly think that something like processtree would be a good solution, since you can rent a phenomenal block of CPU time.
Each of these 682 nodes will be running Compaq's Tru64 Unix, which is capable of sharing a single file system
Wow, if only home computers could share disks like that!!! This actually makes me think that the nodes are operating as independent computers rather than part of a whole... but hey, I'm probably wrong.
Re:Programming this Beast (Score:2)
Alastair McKinstry,
AlphaServer SC Engineering (who make these machines)
Compaq.
What node interconnect will they use? (Score:1)
GSN can crank out 6.4 Gbit/s! It makes Gigabit Ethernet look like a turtle.
GSN webpage [hnf.org]
This is not a beowulf cluster (Score:3)
Alastair McKinstry
AlphaServer SC Engineering, Compaq.
Re:It would be even better... (Score:1)
Re:Memory bandwidth. (Score:1)
I just can't resist (Score:1)
Big computers build even bigger bombs
Bombs blow up computers
Yes I know I'm lame. -1 Redundant me or something.
Re:Memory bandwidth. (Score:1)
Re:the power of linux (Score:1)
It doesn't run Linux. It doesn't use Beowulf. It's its own completely different beast, yet somehow you've managed a way to connect it and Linux in this discussion and somehow pat Linux on the back for Compaq's engineering feats?
Get over it... Linux isn't anywhere on the map of this discussion. Tru64 Unix is, though, but how much do they have in common besides being different branches off the Unix family tree?
Re:Will this really be supercomputer? (Score:1)
In that case you don't need a supercomputer and a Beowulf cluster built of off-the-shelf PCs really is a good solution.
This still does not guarantee good performance. An RS/6000 SP has a fantastic interconnect, but can still run like a dog if there are too many processes dependent upon each other.
True, but from what I've gathered (no personal experience, admittedly), volume and latency of interprocess communication is much more common as a bottleneck than actual unresolvable dependencies.
but a good app hopefully does not have too much communication between nodes, or serialized data streams.
I bet that a lot of researchers wish they could choose to only work with "good apps" :)
But certain classes of important problems (mostly simulations of chemical or physical processes) tend to be "bad"...
Purpose? (Score:1)
I can see (possibly) weather prediction, but what else? It sounds like they had a chunk of money left over and thought that a supercomputer would be cool to own.
Of course, I'm all for wasting money in the name of science.
--
Re:Will this really be supercomputer? (Score:2)
Obviously you can't just throw together 500 PCs running the Beowulf kernel and call it a supercomputer. You do need dedicated high-speed networking, and clearly not all jobs parallelize readily to that model. For a special-purpose computer, however, Beowulf is quite acceptable when communication between the processes is relatively minimal compared to what's needed in, say, a vector-processor-based supercomputer.
One company I've heard of that builds these Beowulf clusters is http://www.hpti.com/. From what I've heard, they do use some heavy-duty connections between the nodes.
Re:Will this really be supercomputer? (Score:1)
The problems themselves have 'Amdahl factors' (or 'coefficients', horrible word), which describe how they behave when partitioned to be run multi-processor (multi-computer too).
An example of the factors which affect this is this (please don't pick holes, it's simplified):
Let us say one machine can efficiently handle a working set of 1000 chunks of data.
If the data is tabular (2D), each machine holds roughly a 32x32 square, so its interface with its four neighbours is 4*32 = 128 units. So about an eighth of the data needs to be exchanged with neighbours each iteration.
If the data is 3-dimensional, each machine takes a 10x10x10 cube, so its interface with its six neighbours is 6 * 100 = 600 units. So 60% of the data will need to be exchanged.
If the machine can handle 1000000 chunks in its working set, then the 2D problem has interfaces requiring only 4000 units to be exchanged, and the 3D problem requires 60000.
So we see indeed that 3D models (fluid flow etc.) do have a greater inter-processor communication requirement than 2D models do.
We also see that if you can partition the problem into larger jobs, the communication becomes less of an overhead (the short calculation below generalises this).
That is probably why the focus is on very high-end processors in these arrays. Three times as many Xeons for the same price probably wouldn't be worth it...
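The same back-of-the-envelope as a function (assuming, as above, that each machine takes a square or cube of data and exchanges one-unit-thick faces):

```python
def exchange_fraction(chunks, dims):
    """Fraction of a node's working set exchanged with neighbours per iteration."""
    side = round(chunks ** (1.0 / dims))   # edge length of the square/cube
    faces = 2 * dims                       # 4 neighbours in 2D, 6 in 3D
    return faces * side ** (dims - 1) / chunks

for chunks in (1_000, 1_000_000):
    print(chunks, "chunks -> 2D:", exchange_fraction(chunks, 2),
          " 3D:", exchange_fraction(chunks, 3))
```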
FatPhil
Could you Imagine the concept of Time Zones? (Score:1)
Right tool for the job (Score:1)
Originality (Score:1)
Re:aw, dammit... (Score:1)
Re:But why? (Score:2)
On the other extreme are widely distributed systems, like SETI@home. There, you get decent performance by splitting your data set into completely independent, smaller batches and farming them out a chunk at a time to smaller systems, which then report back to a central aggregator. Efficiency-wise, this method loses a lot, because every client usually has to completely duplicate the functionality and overhead of the entire processing application; however, the low cost and high availability of processing cycles can make that almost immaterial.
These big clusters fit somewhere in the middle. Especially in this case, where each node is a highly capable system in its own right, you can give more complex and varied instruction sets to each unit, and offer decent I/O bandwidth to the others. The clustering tools give the system a maintainability and level of transparency better than simply running the same application on many machines, and since each node can be given a completely separate set of instructions, the redundancy of code is less.
The other key advantage of one big cluster, instead of a processtree-like distributed solution, is the real-time control and reliability of the processing. If you have a massive job that needs to be completed some time in the next month, an online distributed net might work fine; if you need a data set crunched by five p.m., you want to know exactly how much power you'll be getting, and run the job in one fell swoop.
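In miniature, that farm-out-and-aggregate pattern looks like this (a local stand-in built on Python's standard library; a real SETI@home-style system adds the networking, redundancy and scheduling on top):

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(batch):
    # Stand-in for the heavy, fully independent per-batch computation.
    return sum(x * x for x in batch)

if __name__ == "__main__":
    # Split the data set into completely independent, smaller batches.
    batches = [range(i, i + 100_000) for i in range(0, 1_000_000, 100_000)]
    with ProcessPoolExecutor() as pool:          # the "central aggregator"
        total = sum(pool.map(crunch, batches))   # farm out, then combine
    print(total)
```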
Re:Can you imagine... (Score:2)
1-900-2,728-processor (Score:1)
======================================
Re:Programming this Beast (Score:1)
Since most users are programming scientific codes, and those codes are often dependent on highly tuned parallelized matrix libraries, the answer to whether commercial parallelization packages will be used is probably "not at all". What's more, large systems like this encourage large-scale parallelism (30-100 processes), something that auto-parallelizers accomplish poorly.
And yes, MPI and PVM are certainly primitive. But there's so little commonality among scientific codes that, aside from the creation of standard scientific libraries, we're unlikely to see parallel programming meta-tools emerge into the mainstream. God knows, it's hard enough to get access to a decent parallel debugger on most parallel machines.
The problem is the market's simply too small for any tool builders to make a decent living from selling parallel tools. Perhaps a half-dozen firms are trying to do this world-wide, and none are selling their wares into a large fraction of the supercomputing shops. As a rule, the millions go into hardware, not software. (Looks more impressive when you have something tangible to show off to VIP visitors.)
Your point on the difficulty of programming machines like these is very well taken. It's a VERY BIG PROBLEM that has effectively been entirely ignored by the NSF. The irritating thing is that the machine can be 2-10 times more effective when the code is tuned properly for the architecture. And development time can be reduced by a factor of perhaps 10X when proper tools (especially shared memory) are available. Alas, good tuning tools (and the knowledge to use them) and large shared-memory architectures are rapidly approaching extinction.
Whatever. Grad students and postdocs are poorly paid for a reason. Today's economics dictate that SOMEBODY has to waste great chunks of time in dealing with a poor parallel programming interface. It might as well be they.
As far as architectural inadequacies such as poor latency or awkward topologies -- fagettaboutit. The supercomputing market is too small to influence architectural considerations as it did in the past. Clusters of SMPs are here to stay, probably until molecular computing or some other revolutionary technology supplants them. In the end, it's not performance that drives this market, but the vendors' bottom line. The misfortunes of Cray/SGI/TMC/FPS/Convex/KSR/etc/etc are testament to that.
Re:This is not a beowulf cluster (Score:1)
Awful website (Score:2)
Don't try to do real web site development with Mozilla and manual hacking. Get Dreamweaver.
Re:Will this really be supercomputer? (Score:1)
Re:Programming this Beast (Score:1)
If you look under "Highly Available Applications", it talks about programming for it.
Re:What node interconnect will they use? (Score:1)
Goodly, /. at 0500... (Score:1)
-M5B
THIS IS JUST THE BEGINNING!!! HOLD ON TO YOUR SEATS (Score:1)
Re:Not Beowulf/Linux (Score:2)
I dunno -- seems to me like the author is saying that it really is a Beowulf cluster.
-Waldo
-------------------
Brute force computing vs elegant solutions (Score:1)
So without going offtopic, are there any proven conspiracies to make poorly coded (slow) programs just to make you go buy a faster computer? Maybe Compaq is causing cancer to sell the computers that will help cure it! (Maybe I'm an idiot! *smack*)
Compaq won't sell me an alpha (Score:1)
It is so hard to get a quote out of Compaq or a reseller that it is as if they don't want to sell anything. I have been promised for Friday (or maybe Monday) a quote I asked for two weeks ago.
DEC was the same way of course, but I guess this explains why CPQ hasn't moved in 2.5 years, while SUNW has doubled three times in that period.
In another stroke of marketing brilliance, the alpha configurator only runs under Windows. Of course it is such a useless piece of crap that once I finally got it running it was of no help.
And as long as I am ranting about Compaq, wasn't the 0.18-micron 1+ GHz chip supposed to be here already? The machine I am being quoted on is the same XP1000a I could buy this time last year (well, nearly).
The alpha processor is absolutely the fastest way to get done the computing jobs I need to get done, but as soon as SAS Software for Linux is available (and the beta is in the mail to me now) and the 2.4 kernel with its proper NFS and LVM features is available, I am ditching Compaq and going with dual Athlons (which had better be out by then).
Re:How much power does such a thing use? (Score:1)
Re:Programming this Beast (Score:3)
More on PSC (Score:3)
I was involved with the Pittsburgh Supercomputing Center [psc.edu] in high school. We were given a grant for processing time, something like $40,000, to compute the heat loss of my community due to improper insulation. Admittedly, I was on the fringe of the group, but I know they have been using massively parallel systems for a while. They also had an Internet connection, which is where I first used Lynx.
At that time they had a T3D and a "DEC supercluster" which was IIRC 256 Digital Alpha computers. They had some other supercomputers but I can't remember what they were. The supercluster was later upgraded to 512 processors. It seems that this is the same thing, updated and built by Compaq (who bought Digital).
Re:Memory bandwidth. (Score:1)
And pointing out that large traditional computers have a higher bandwidth than networked clusters.
Whether this affects performance depends very much on the application: in things like thermodynamic modelling it does; in RC5 cracking it doesn't.
Re:the power of linux (Score:1)
Was it that they couldn't afford the staff to run such a beast... no
Was it that they couldn't find applications to run on it
It was because it took about a megawatt of power to run, and they decided to put it right in the middle of a great big building so they couldn't get the heat out easily. All the air conditioning round there caused it to rain in the non-air-conditioned corridors.
Compaq and Western (Score:2)
Now the *exciting news* is that they are teaming up with up to three other universities to build a "Beowulf of Beowulfs" (think four of these babies [beowulf.uwo.ca]) connected together through *very* fast network connections, so you can submit your job and "it" will decide: if there's too much going on at Western, it can queue part of your job up at another university. Thus creating a Beowulf of Beowulfs.
Baldric [baldric.uwo.ca], the student-run Beowulf, is also (read: hopefully) going to be a part of this, with our donation of 50-some nodes (just off the truck) from Sprint Canada. (OK, that was a blatant plug.)
Carnegie Mellon's press release... (Score:2)
(The "8 1/2 x 11 News" is published each week by the Department of Public Relations. The newsletter is available on the official.cmu-news and cmu.misc.news bulletin boards.)
NSF Awards $45 Million to Supercomputing Center for "Terascale" Computing
The Pittsburgh Supercomputing Center (PSC) has been awarded
$45 million from the National Science Foundation to provide "terascale"
computing capability for U.S. researchers in all science and engineering
disciplines. Through this award, PSC will collaborate with Compaq Computer
Corporation to create a new, extremely powerful system for the use of
scientists and engineers nationwide.
Terascale refers to computational power beyond a "teraflop" -- a trillion
calculations per second. While several terascale systems have been
developed for classified research at national laboratories, the PSC system
will be the most powerful to date designed as an open resource for
scientists attacking a wide range of problems. In this respect, it fills a
gap in U.S. research capability -- highlighted in a 1999 report to
President Clinton -- and will facilitate progress in many areas of
significant social impact, such as the structure and dynamics of proteins
useful in drug design, storm-scale weather forecasting, earthquake
modeling, and modeling of global climate change.
The three-year award, effective Oct. 1, is based on PSC's proposal to
provide a system, installed and available for use in 2001, with peak
performance exceeding six teraflops. To achieve this, PSC and Compaq
proposed a system architecture, based on existing or soon to be available
components, optimized to the computational requirements posed by a wide
range of research applications and which, at this level of performance,
pushes beyond simple evolution of existing technology.
The brain of the proposed six teraflop system will be an interconnected
network of Compaq AlphaServers, 682 of them, each of which itself contains
four Compaq Alpha microprocessors. Existing terascale systems rely on other
processors, but extensive testing by PSC and others indicates that the
Alpha processor offers superior performance over a range of applications.
Development of this system will draw on a history of collaboration between
PSC and Compaq, and represents an extension of PSC's history of success at
installing untried, new systems -- resolving the myriad of unanticipated
hardware and software glitches that come up -- and turning them over
rapidly to the scientific community as productive research tools.
The PSC terascale system, to be located at the Westinghouse Energy Center,
Monroeville, will be a component of NSF's Partnerships for Advanced
Computational Infrastructure (PACI) program, supplementing other
computational resources available to U.S. scientists and engineers.
"The PSC has -- with its partners at Carnegie Mellon University, the
University of Pittsburgh and Westinghouse -- an excellent record of
installing innovative, high-performance systems and operating them to
maximize research productivity," said NSF director Rita Colwell.
"We're pleased that NSF's terascale initiative gives us this opportunity to
use PSC's proven capability in high-performance computing, communications
and informatics in support of the national research effort," said PSC
scientific directors Michael Levine and Ralph Roskies in a joint statement.
"Working in partnership with Compaq, we'll create a system that enables
U.S. researchers to attack the most computationally challenging problems in
engineering and science."
"Compaq is looking forward to working with the National Science Foundation
and the Pittsburgh Supercomputing Center and we are committed to the
success of the terascale initiative," said Michael Capellas, Compaq's
president and CEO. "With our AlphaServer systems and Tru64 UNIX, we are
providing the technology infrastructure for some of the most advanced
computing projects in the world. This is further proof of Compaq's
leadership in high-performance computing and our commitment to help open
new frontiers in science and technology."
Development and implementation of the terascale system, including software
and networking, will draw on fundamental research in computer science. A
significant strength of PSC is its tri-partite affiliation with
Westinghouse and with Carnegie Mellon University and the University of
Pittsburgh and the pooled computing-related expertise of faculty and staff
at both universities.
"This award, which comes as the culmination of a national competition,
recognizes PSC's leadership in high-performance computing and
communications," said Jared L. Cohon, president of Carnegie Mellon. "And it
provides another key building block for our region's technology future,
enhancing our international stature in the development and application of
advanced computing technology."
"A gap exists between the computing resources available to the classified
world and the open scientific community," said Mark Nordenberg, chancellor
of the University of Pittsburgh. "It is ideal that PSC, a world leader in
acquiring and deploying early the most powerful computers for science and
engineering, can contribute to filling this gap. This award also
demonstrates the unique scientific strengths that exist in Pittsburgh when
its major research universities partner with each other and with leaders in
industry."
"Today's terascale award is one more in a long list of PSC's major
achievements," said Charlie Pryor, president and CEO of Westinghouse
Electric Company. "Westinghouse is proud of PSC's contribution to the
nation's scientific community and is pleased to have been associated with
PSC since its inception."
Under the proposal, PSC will by the end of this year install an initial
system with a peak performance of 0.4 teraflops. The six teraflop system,
which will use faster Compaq Alpha microprocessors not yet available, will
evolve from this system. The four-processor AlphaServers use
high-bandwidth, low-latency interconnect technology developed by Compaq
through a U.S. Department of Energy advanced technology program.
The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon
University and the University of Pittsburgh together with the Westinghouse
Electric Company. It was established in 1986 and is supported by several
federal agencies, the Commonwealth of Pennsylvania and private industry.
# # #
An artist's rendition of PSC's terascale system and examples of potential
research applications are available at:
http://www.psc.edu/publicinfo/tcs
Re:dribble.... (Score:1)
MP3 encoding and SETI, well, that's a horse of a different colour. I suspect this cluster would rip the Library of Congress's CD collection in no time flat!
FWIW, I work at API, the other source for Alphas.
Re:But why? (Score:2)
Re:Compaq won't sell me an alpha (Score:1)
standard troll (Score:2)
hrmmmmm...
I don't know about you, but I wouldn't trust Compaq building
Ever get the impression that your life would make a good sitcom?
Ever follow this to its logical conclusion: that your life is a sitcom?
Re:How much power does such a thing use? (Score:1)
Compaq: "two 350W ATX power supplies for the whole tree should do it."
Ever get the impression that your life would make a good sitcom?
Ever follow this to its logical conclusion: that your life is a sitcom?
Beowulf of beowulfs? (Score:1)
Prob'ly just get upset and go kill a few more monsters to unwind.
Perhaps a Grendel of Grendels?
Ever get the impression that your life would make a good sitcom?
Ever follow this to its logical conclusion: that your life is a sitcom?
Re:oh yeah (Score:2)
alpha-compaq-T64Uv4.0d/EV67: 87,171 results; 9.95 years total CPU time; 1 hr 00 min 00.4 sec average per work unit
Looks like the fastest SETI processor out there. I think I'm right in saying that this is at Compaq's research centres.
Re:Compaq won't sell me an alpha (Score:1)
When I can go to CompUSA and buy a dual processor alpha with an academic discount, I'll short INTC to the limits of my margin.
Re:Memory bandwidth. (Score:2)
Re:Not Beowulf/Linux (Score:2)
In other words, Beowulf is not a Linux-only term, and could also be done with NT stations (and has been). If they were using the same programming library for node communication, then it might even allow for a mixed NT/Linux system (in principle).
Programming this Beast (Score:3)
I write software for MPP & large-scale SMP machines, but I use tools like Ab Initio or Torrent Orchestrate to abstract away much of the complexity of traffic control, checkpointing, hash-partitioning data, etc. In my cursory examination of PVM and the MPI implementation, they seem pretty primitive, and the code must be a nightmare to implement properly, much less maintain.
Is anyone working on a GNU componentized approach similar to the commercial packages I mentioned earlier to take care of this? Is anyone interested in doing this? This could be a pretty cool project.
The other reservation I have when I look at the whole Beowulf architecture is the node latency issue. Unless you have highly partitioned code with independent processes, these machines are gigantic toasters, spending most of their lives waiting for I/O. A well-designed, partitioned app should be CPU-bound. Most of the business apps I develop don't exhibit these (well-partitioned) characteristics all the way through the process. It makes me wonder how effective these machines really are.