You've built a large cluster of machines on a relatively pea-sized budget.
Are other government agencies going to duplicate your work? Have they already? If so, for what purposes?
There are a lot of government agencies building large clusters, such as the Department of Energy's Sandia National Lab, which has the 800+ processor CPlant cluster today, with another 1,400 processors on the way. Like FSL, they use their cluster for scientific computing. The well-known Beowulf clusters started within NASA, another U.S. government agency.
However, the Forecast Systems Lab (FSL) system is a bit different from these other clusters: it's intended to be a production-quality "turn-key" supercomputer, and it contains all the things supercomputer users are used to, such as a huge robotic tape storage unit (70 terabytes of tapes) and a fast disk subsystem (200 megabytes/second of bandwidth). The FSL system is also much more reliable than your average cluster -- in its first three months of operation, it was up 99.9% of the time. During that time we had quite a few hardware failures (due to power supply problems), but no work was lost, because of our fault-tolerant software.
Beowulf in General
How do you think the new wave of Beowulf clusters will affect all of supercomputing, not just forecasting?
The kinds of problems that scientists solve have different computational needs. In the mid-1970s, the most cost-effective machine to use for just about any problem was a Cray supercomputer. These days, desktop PCs are far cheaper per operation than the "big iron", which is why all this interest in clusters has sprung up. The availability of production-quality commodity clusters like the FSL machine is a new development in the field.
IBM already sells its own idea of a commodity cluster, built from RS/6000 business servers. I think commodity clusters can deliver far more bang for the buck than an IBM SP supercomputer, but then again I am a cluster evangelist.
In the beginning...
How did you come to be the project's chief designer? I'm curious to know the background of anyone who gets to work on such an interesting project.
Well, let's see: I'm a dropout from an Astronomy PhD, and for fun I dress up in funny clothes (I'm the one in yellow) and play the hurdy gurdy. I've only taken one computer science class since I started college. I assure you that you're never going to meet anyone much like me in this field.
Seriously, I've worked in scientific computing for quite a while, and I've had a chance to work with a lot of people and learn from them. I also learned quite a bit about distributed systems while working on IRC and, later, the Legion distributed operating system. The art of designing a system like this is understanding the customer's needs, understanding what solutions are possible, and understanding what can actually be delivered, made reliable, and kept within budget.
In addition, it's worth pointing out what this sort of project involves. Most of the interesting development parts are done by other people. Compaq designed the Alpha processor, and they and legions of Linux hackers provided Linux on the Alpha. Compaq supplied their extremely good compilers (FSL mostly uses Fortran). Myricom supplied the interconnect and an MPI message-passing library optimized for it. HPTi provided the software glue that turned all this into a complete, fault-tolerant system. Without all these great building blocks, we would never have been able to produce this system.
The Future of the Control Software
I built a Beowulf-style cluster this past semester in college for independent study. One of the biggest hurdles we had was picking out a message-passing interface such as MPI or PVM. Configuring across multiple platforms was then even worse (we had a mixture of old Intels, Sun SPARCs, and IBM RS/6000s). What do you see in the future for these interfaces in terms of setup and usage, and will cross-platform clusters become easier to install and configure in the future?
We provided an easy-to-use set of administrator tools so that the Forecast Systems Lab (FSL) cluster can be administered as if it were a single computer. This is fairly difficult to do if you have a big mix of equipment, but the FSL system will never become that complex. There's already been a lot of development of programs for administering large clusters of machines; they just tend not to get used by other people. I'll admit that I'm part of that problem; I took some nice ideas from other people's tools, added some of my own, and re-invented the wheel slightly differently from everyone else.
Before deciding on a Beowulf cluster, what different options did you explore (Cray? IBM?), and what motivated you to choose the Beowulf system?
Additionally, to what would you compare the system that you are planning to build, as far as computing power is concerned?
The company I work for, HPTi, is actually a systems integrator, so we didn't decide to go out and build our own solution until we had checked out the competition and concluded they didn't have the right answer. For the computational core of the system, Alpha and Myrinet were much more cost effective than the Cray SV-1, the IBM SP, and the SGI O2000. A more cost-effective machine gives the customer more bang for their buck.
I'd compare the system that we built to the IBM SP or the Cray T3E, as far as computing power is concerned. Both are mostly programmed using the same MPI programming model that FSL uses, which is the main programming model that we support on our clusters.
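For readers who haven't run into it, the MPI programming model looks roughly like this. Here's a minimal sketch in C -- an illustration of the model, not FSL's actual code: every process runs the same program, learns its own rank, and exchanges explicit messages with the others.

```c
/* Minimal sketch of the MPI message-passing model; illustrative only,
   not FSL's code. Build with an MPI compiler wrapper such as mpicc,
   then run with something like: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes total? */

    if (rank == 0 && size > 1) {
        double value = 3.14;
        /* Send one double to rank 1, with message tag 0. */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double value;
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 got %g from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

The same binary plays every role; the rank decides what each copy does. That's the whole model, whether the program runs on two nodes or two hundred.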
Biggest whack in the head?
Having built a few small ones, I got to know quite a bit about Linux clusters, and about programming for them. Therefore, this question has nothing to do with clusters.
What was the biggest 'WTF was I thinking' on this project? I'd imagine there was a fair amount of lateral space allowed to the designers, and freedom to design also means freedom to screw up.
We actually didn't make that many mistakes in the design. We had some wrong guesses about when certain technology was going to be delivered -- the CentraVision filesystem (more about that below) for Linux arrived late, and we had to work with Myricom to shake out some bugs in their new interconnect hardware and software. Our biggest problem on our end was actually getting the Ethernet/ATM switches from Fore Systems to talk to each other!
... a beowulf of these babies - oh wait! :-)
Seriously, what was the most challenging of the maintenance tasks you had to undertake? Do you anticipate a trade-off point where the number of machines makes maintenance impossible? Do you have any pearls of wisdom for those of us just involved in the initial design of such clusters, so that maintaining them in the future is less painful?
Hardware maintenance of the FSL machine actually isn't hard at all. If a computational node fails, we have a fault-tolerance daemon that removes the failed node from the system and restarts the parallel job that was using that node. The physical maintenance of a few hundred machines actually isn't so bad; these Alphas came with three-year on-site service from Compaq. (Hi, Steve!)
More interesting than hardware maintenance is software maintenance. You can imagine how awful it would be to install and upgrade 276 machines one by one. Instead, we have an automated system that allows the system admin to simultaneously administer all the machines. We suspect that these tools could scale to thousands of nodes; after all, they're just parallel programs, like the weather applications that the machine runs.
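To give a flavor of how those tools work, here's a toy sketch in C -- not the actual FSL admin software, and the node names and the use of ssh are illustrative assumptions. The core trick is simply to launch the same command on every node in parallel and collect the exit statuses:

```c
/* Toy sketch of "run one command on every node at once", in the spirit
   of (but far simpler than) real cluster admin tools. The node names
   and the use of ssh are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *nodes[] = { "node001", "node002", "node003" }; /* hypothetical */
    int n = sizeof(nodes) / sizeof(nodes[0]);
    const char *cmd = (argc > 1) ? argv[1] : "uptime";

    /* Fork one ssh per node so the command runs everywhere in parallel. */
    for (int i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            execlp("ssh", "ssh", nodes[i], cmd, (char *)NULL);
            perror("execlp");   /* only reached if ssh failed to start */
            _exit(127);
        }
    }

    /* Reap the children and report any node where the command failed. */
    int status;
    pid_t pid;
    while ((pid = wait(&status)) > 0) {
        if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
            fprintf(stderr, "child %d: command failed\n", (int)pid);
    }
    return 0;
}
```

A real tool adds a node list read from a file, timeouts, and collated output, but the parallel structure is the same -- which is why such tools scale about as well as any other parallel program.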
Question about maintenance.
A major problem with using a Beowulf cluster instead of a single supercomputer is that you now have to administer many computers instead of just one. Additionally, if something is failing, misbehaving, etc., you have to determine which part of the cluster is doing it. I'm interested in a] how much of a problem this is compared to a traditional single-machine supercomputer, b] why you chose the Beowulf over a single machine considering this factor, and c] how you'll keep this problem to a minimum.
Besides that, best of luck, and I can't wait to see the final product. ;^)
You haven't described a problem, you've described a feature.
We've provided software that allows administration of the cluster as if it were one machine, not many. This software also allows FSL to test new software on a portion of the machine, instead of taking the whole thing down. The software on the machine can also be upgraded while the machine is running, instead of requiring downtime.
Since the hardware is fairly simple, it's actually quite easy to find a misbehaving piece of hardware. And in this kind of system, a hardware failure only takes out a small portion of the machine.
For example, on an SGI O2000 or similar large shared-memory computer, a single CPU or RAM chip failure takes out the entire machine. The interconnect on an O2000 is not self-healing like the interconnect we used, Myrinet. These features make a cluster more reliable than a "single machine".
Why did you choose Alpha processors for the individual nodes? Why not something cheaper with more nodes, or something more expensive with fewer nodes? What other configurations did you consider, and why weren't they as good?
We did a lot of benchmarking before settling on Alphas for this particular system -- in general we're processor agnostic, happily using whatever gives the highest performance for each customer. We could have bought more nodes if we had gone with Intel or AMD, but the total performance would have been much lower for this customer.
The Future of Scientific Programming?
by Matt Gleeson
The raw performance of the hardware being used for scientific and parallel programming has improved by leaps and bounds in the past 10-20 years. However, most folks still program these supercomputers much the same way they did in the '80s: Unix, Fortran, explicit message passing, etc.
You have worked in research with Legion and in industry at HPTi. Do you think there is hope for some radical new programming technology that makes clusters easier for scientists to use?
If so, what do you think the cluster programming environment of tomorrow might look like?
Actually, at the end of the 1980s, Unix was new on the supercomputing scene, and most sites still used vector machines. It's only in the 1990s that microprocessors and MPI message-passing became big winners. And that's because of price-performance, not because they're easier to use than automatic vectorizing compilers. Ease of use for supercomputers reached its peak around 1989.
I do think there's hope for new approaches, however. One great example is the SMS software system developed at FSL. This software system is devoted to making it easy to write weather-forecasting-style codes: parallelizing a previously serial program involves adding just a few extra lines of source code. The result can sometimes scale efficiently to hundreds of processors yet still run on only one processor, and FSL has enough experience with users who aren't parallel programmers to know that they can change a working program and end up with a working program. (If you've ever heard of HPF, then this is somewhat like HPF, except it actually works.)
Today, the best programming environments are ones that hide message-passing, either in specialized library routines or using a preprocessor approach like SMS. By the way, Legion allows you to program distributed objects with minimal source code changes. I expect more of the same thing in the future.
My crystal ball isn't good enough to tell me what the next revolutionary change will be. I'm actually pretty happy with the evolutionary changes I've seen recently.
One of the weaknesses of Beowulfs seems to me to be a lack of decent job-management software. How do you split the cluster's resources? Do you run one large simulation on all the CPUs, or do you run 2 or 3 jobs on 1/2 or 1/3 of the available CPUs?
Is there provision for shifting jobs onto different nodes if one of them dies during a run?
We use the PBS batch system to manage jobs; it handles splitting the cluster resources among the jobs. At FSL, there are typically 10+ jobs running at the same time; the average job uses around 16 out of the 264 compute nodes.
If a compute node dies during a run, an HPTi-written reliability daemon marks the dead node as "off-line" and restarts the job. The user never knows there was a failure.
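The watchdog logic itself is simple. Here's a hedged sketch in C of what such a loop can look like -- node_alive(), mark_offline(), and resubmit_jobs_on() are hypothetical stand-ins I've made up; the real health check, batch-system state change, and job restart belong to HPTi's actual daemon:

```c
/* Sketch of a node-watchdog loop. The three helpers are hypothetical
   stand-ins for the real health check, batch-system node state change,
   and job restart; only the overall control flow is the point. */
#include <stdbool.h>
#include <unistd.h>

#define NUM_NODES 264   /* FSL's compute node count */

/* Trivial stand-ins so the sketch compiles; a real daemon would ping
   the node and talk to the batch system here. */
static bool node_alive(int node)       { (void)node; return true; }
static void mark_offline(int node)     { (void)node; }
static void resubmit_jobs_on(int node) { (void)node; }

int main(void)
{
    bool offline[NUM_NODES] = { false };

    for (;;) {
        for (int node = 0; node < NUM_NODES; node++) {
            if (!offline[node] && !node_alive(node)) {
                mark_offline(node);      /* pull the node out of the pool */
                resubmit_jobs_on(node);  /* restart the affected job      */
                offline[node] = true;
            }
        }
        sleep(30);  /* poll interval; an arbitrary choice for this sketch */
    }
}
```

Because the restart happens automatically, a dead node costs the system one job's restart time rather than a middle-of-the-night call to an operator.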
Weather forecasting in general.
Ok, a two parter:
As I understood it, weather models are a fairly hard thing to paralleliz (how the hell do you spell that?) because of the interdependence of pieces of the model. This would seem to me to make a Beowulf cluster a tough choice, as its inter-CPU bandwidth is pretty low, right? And that's why I thought most weather-prediction places chose high-end supercomputers, because of their custom and expensive inter-CPU I/O?
Weather models are moderately hard to parallelize; in order to process the weather in a given location, you need to know about the weather to the north, south, east, and west. For large numbers of processors, this does require more bandwidth than Fast Ethernet provides, and that's why we used the Myrinet interconnect, which provides gigabit bandwidth and scales to thousands of nodes with high bisection bandwidth, unlike gigabit Ethernet.
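To make that north/south/east/west dependence concrete, here's an illustrative C sketch (not FSL's forecast code) of the standard "halo exchange" pattern, using MPI's Cartesian topology routines; the grid sizes are arbitrary:

```c
/* Illustrative halo exchange for a 2-D weather-style grid; not FSL's
   code. Each process owns an NX-by-NY patch plus a one-cell halo that
   mirrors its neighbors' edges. */
#include <mpi.h>

#define NX 100  /* local rows per process (arbitrary)    */
#define NY 100  /* local columns per process (arbitrary) */

static double u[NX + 2][NY + 2];  /* field plus halo cells */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Arrange all processes in a 2-D grid and find the neighbors. */
    int dims[2] = { 0, 0 }, periods[2] = { 0, 0 };
    MPI_Dims_create(size, 2, dims);
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);

    int north, south, east, west;
    MPI_Cart_shift(cart, 0, 1, &north, &south);
    MPI_Cart_shift(cart, 1, 1, &west, &east);
    (void)east; (void)west;  /* east/west exchange omitted for brevity */

    /* Once per timestep: swap edge rows with the north/south neighbors.
       (East/west columns would use an MPI_Type_vector; omitted here.) */
    MPI_Sendrecv(&u[1][1],      NY, MPI_DOUBLE, north, 0,
                 &u[NX + 1][1], NY, MPI_DOUBLE, south, 0,
                 cart, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[NX][1],     NY, MPI_DOUBLE, south, 1,
                 &u[0][1],      NY, MPI_DOUBLE, north, 1,
                 cart, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
```

Processes on the boundary get MPI_PROC_NULL as a neighbor, which turns those transfers into no-ops. Note that the data exchanged grows with the edge of each patch, not its area -- which is exactly why the pattern scales well when the interconnect has enough bisection bandwidth.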
As far as disk I/O goes, yes, most clusters are fairly weak at disk I/O compared to traditional supercomputers from Cray. We are using the CentraVision filesystem from ADIC along with fibre channel RAID controllers and disks. This is more expensive than normal SCSI or IDE disks, but provides much, much greater bandwidth for our shared filesystem.
Second part: Is weather prediction getting any better? Everything I've read about dynamic systems says that prediction past a certain level of detail or timeframe is impossible. Is that true?
The quality of a weather prediction depends on a lot of things: the quality of the input data, which has gotten a lot better with the new satellites and other data collection systems recently deployed; the speed of the computer used to run the prediction; the quality of the physics algorithms used in the program, which have to get better and better as the resolution gets finer and finer; and the expertise of the human forecaster who interprets what comes out of the machine. All of these areas have limits, and that's why forecasts have limits.
What about a dnet type client?
I am curious as to whether (no pun intended... :)) or not you have ever done any testing to see if a distributed.net-type environment would be useful for your type of work?
It seems to me that there are more than a few people who are willing to donate spare CPU cycles to various projects. At a minimum, you could concentrate on the client-side binaries and not worry as much about hardware issues.
Most supercomputers, like the FSL system, are in use 100% of the time doing real work. The biggest providers of cycles to distributed.net are desktop machines, which sit idle most of the time. Running distributed.net-type problems on the FSL cluster would be a bit of a waste, since the FSL cluster has a lot more bandwidth than distributed.net needs.
In closing, I'd like to thank Slashdot for interviewing me, and I'd like to point out that I got first post on my own interview -- perhaps the only time that this will ever happen in the history of the Universe?