IBM Constructs New Fastest Computer

scoobysnack writes "MSNBC is reporting that IBM has once again created the world's fastest computer -- it will be used for simulating real-world nuclear tests. With 12 teraflops, it would still take three months to simulate the first 1/100th of a second of a nuclear bomb explosion." There's coverage at CNET as well.
  • Simulation, to be even remotely believable, needs a hell of a lot more than pure processing power.

    So, the answer is no.
  • IBM makes some of the world's fastest computers -- everyone knows that, and this latest article seems to suggest that they haven't lost their touch. I'm a bit concerned that they may have bitten off more than they can chew by attempting to simulate nuclear reactions.

    I don't understand why people bother simulating nuclear reactions. Now, before you think I'm being facetious, let me explain. Nuclear physics is hard (as if you needed me to tell you that). Most of the theories as to how the fundamental interactions work are flawed. For example, the liquid drop [yale.edu] type models (on which most current simulations are based) are incredibly simplistic. They don't even take into account the Pauli exclusion principle, instead relying on a fudge factor to ensure that their particles follow the Fermi-Dirac [wolfram.com] distribution.

    Thus, my argument is: you're better off doing the experiment.

  • What scares me isn't that technology is advancing at this dizzying pace, but rather that we are still utilizing that technology to study the use and effects of a weapon that has been in use for over 50 years.
  • If we can properly simulate the beginnings of FUSION, that could be an important step towards commercially viable fusion power plants! Cheap, clean, unlimited energy... worthy goal, I would think.

    Very worthy, seeing as the Pentium 6's and the K9's are gonna need a fusion power supply just to boot, and we won't even talk about those in SMP... :)



    bash: ispell: command not found
  • If, by "the old computer," you're talking about Blue Pacific, I believe that White is an entirely new system.

    The "old" systems, though they aren't the most powerful, are still used every day (and every night, too) by a wide variety of projects. There is a heck of a lot of research going on around the clock at LLNL.

    When they mention the nuclear simulations, that may be the biggest problem that they tackle, but there is no shortage of smaller (relatively speaking) projects which need CPU cycles on a regular basis.

    Check out http://www.llnl.gov/llnl/06news/feature-techno.html [llnl.gov] for all the cool non-nuclear stuff they do at Livermore.

    Nate
    ...and hi to Brooke if you're out there at the lab again. How's the rice?

  • There are quite a few supercomputers out there doing things other than nuclear bomb research, but one must understand that such research requires intensive computing. Anything that deals with simulating the motion of large numbers of particles requires intensive computing, not just bomb research.

    Q: Why are such huge computers needed for bomb simulation, fusion power research, etc.?

    A: For each particle that you want to simulate, you need nine dimensions to specify where it is and where it is going: three for position, three for linear momentum, and three for angular momentum. Each dimension takes roughly one floating-point operation to update (oversimplified, but it gets the point across), so each particle costs about nine operations. A 12-teraflop machine can therefore keep track of ~1.33E12 particles. This may seem like a lot of particles, but consider things like Avogadro's number... (A rough script for this arithmetic follows at the end of this comment.)

    A rough example of how intense this is: this new machine could do a decent job simulating a mid-sized tokamak fusion facility, if you use the quick and somewhat inaccurate way of simulating.

    If you were to take a look at the top 500 most powerful machines, you would find that most of the ones at the very top are doing either nuclear weapons testing or fusion power research.

    Oh, in case you are wondering, the government converted one of its supercomputers from weapons simulation to traffic simulation last year. Traffic simulations also require several dimensions per car -- again, three for position.... On top of those, they need to worry about where roads go, where immovable objects go, and, of course, people.
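
    For the curious, here is that back-of-the-envelope arithmetic as a small Python sketch. The 12-teraflop figure and the nine state components per particle come from the post above; the one-operation-per-component assumption is the same oversimplification, and Avogadro's number is thrown in only for scale.

    # Back-of-the-envelope particle budget: assume ~1 floating-point
    # operation per state component (3 position + 3 linear momentum +
    # 3 angular momentum) per particle per update.
    PEAK_FLOPS = 12e12          # ~12 teraflops
    OPS_PER_PARTICLE = 9        # 3 + 3 + 3 state components
    AVOGADRO = 6.022e23         # particles per mole, for comparison

    particles_per_second = PEAK_FLOPS / OPS_PER_PARTICLE
    print(f"Particles updated once per second: {particles_per_second:.2e}")   # ~1.33e12
    print(f"...which is only {particles_per_second / AVOGADRO:.1e} moles of material")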
  • As someone who will be working daily on this machine at LLNL - maybe I can shed some light. Firstly, nuclear explosions are not the only thing which will be simulated. However, the funding for the machine is primarily for that. (The program is "ASCI" - Accelerated Strategic Computing Initiative). Because they are doing these calculations, the machine will be "behind the fence" (classified). As a result, the only people who will be able to use it will be those with a Q clearance (DOE Secret). By default, this makes it difficult (read: impossible) for your average uncleared researcher to gain access.

    It is possible for unclassified simulations to be run on the machine with a Q-cleared "proxy" user running the code on behalf of someone else - but in this day and age of ultra-tight security in the wake of Wen Ho Lee and missing disk drives - that is highly unlikely to happen.

    However, there is a mighty impressive machine available on the "open side" which will be used for things like weather sim, drug design, etc... In this case it will be the machine ("ASCI Blue") which is currently behind the fence, which will be moved outside. 5000+ PowerPC 604e's, and not a bad parallel environment to work in, I must say...

    The ASCI program is also funding 5 university ASCI centers. These centers are targeted with solving unclassified "grand challenge" type problems which involve complexities similar to nuclear simulations, i.e. solid rocket motor simulations (Illinois), astrophysics (U Chicago), accidental fire scenarios (Utah), turbulence (Stanford), and material modeling/response (Caltech). These centers get time (or "fight for time" if you asked them) on the unclassified ASCI machines.

    The unclassified machines are usually just one step behind, or slightly smaller, so they don't make splashy headlines. They are, however, still very, very impressive machines, and there is lots of groundbreaking research being done on them which would probably not have been possible (yet) without ASCI.

    I (and others) don't particularly like the fact that these machines are used mostly for nuclear simulations - but it's better than the alternative (craters in Nevada), and it's definitely helping push the envelope of parallel computing - which is all I really care about. :-)

    Los Alamos has a similar contract with SGI to supply large machines for them. Sandia has a large machine from Intel, and have subsequently been concentrating on massive linux clusters (ala CPLANT) as their future.

    More information here [llnl.gov]

  • I'm pretty sure your post is a troll, but here goes. While it is theoretically possible to double the number of states that a quantum computer can evaluate by adding a single extra qubit, no one has yet developed a quantum computer with the capacity to handle something of this magnitude. Furthermore, the simulation of the weapons is done through an iterative differential equation solver that is deterministic! That's right: no search, no funny stuff of any kind, just solve a set of DEs to a specified tolerance, and repeat. So, until some other kind of computer comes along that can do that faster than ASCI White, that's what we're going to use.

    Walt
  • Such as being "sick of" something you hate.

    Personally what bugs me most is people who don't understand the meaning of a figure of speech, and get it the wrong way round. Like "I could care less"

    Oh, and the use of literally to mean exactly the opposite - like when people say "He literally put his foot in his mouth"

  • There was an article a while back about a really, really, really fast computer which only did gravitational computation; wouldn't that be useful here? They seem to want to calculate the interactions of zillions of particles, and this could be the way to do it. Okay, prolly not -just- gravity, but I'm just noticing the parallels.
  • Well, the C|Net article said something like 3 months to simulate the first .01 sec. Now, I'm going to make the almost certainly false assumption that an equal amount of computing power is required to simulate all parts of the explosion. I know this is false, but I don't know the physics involved, so I'll just talk about simulating the first .01 sec in .01 sec.

    So, simple math being your friend, let's assume that we're talking June, July, and August. That's 92 days. Each day has 86,400 seconds. So that gives a total of 7,948,800 seconds in June, July, and August. Now, this amount of time only simulates the first .01 second. So we need to multiply by 100, yielding 794,880,000. This means that the computer needs to be 794,880,000 times more powerful to simulate the first .01 second of a nuclear explosion in .01 second.

    Multiplying that figure by 12 (they said it has a peak of 12.3, but it probably can't do 12.3 all the time) means a computer would need to be capable of 9,538,560,000 teraflops to model the first .01 second of a nuclear explosion in .01 second. And the complexity increases from there. (That arithmetic is scripted out below.)

    Damn.
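
    Here's the same arithmetic as a quick Python script, using only the rough assumptions above: 92 days of wall-clock time for the first .01 simulated second, and a flat 12 teraflops.

    # Reproduce the back-of-the-envelope speedup estimate above.
    DAYS = 92                       # June + July + August
    WALL_SECONDS = DAYS * 86_400    # 7,948,800 s of wall-clock time
    SIM_SECONDS = 0.01              # simulated time covered in that period
    CURRENT_TFLOPS = 12             # using 12 rather than the 12.3 peak

    speedup = WALL_SECONDS / SIM_SECONDS           # 794,880,000x
    realtime_tflops = speedup * CURRENT_TFLOPS     # machine needed for real time

    print(f"Speedup needed for real time: {speedup:,.0f}x")
    print(f"Machine required: {realtime_tflops:,.0f} teraflops")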

  • So why are they using Power CPUs rather than say the latest MIPS? I haven't seen the latest Power stuff but MIPS has been screaming lately.

    What's with this +1 bonus stuff anyway? Who asked for it and why do I have to explicitly turn it off?
  • How long until a distributed nuclear simulation project? I guess that wouldn't happen because of "security concerns," though. Originally posted in response to a thread in the first, and now deleted, item. The collective community here has been over this scores of times already. To reiterate: Seti@Home, all of the various key-breaking projects, and any other distributed processing project all have one thing in common: the data chunk being processed by one computer is largely independent of the other data chunks being processed by the others. Hence, it is feasible to send a chunk out from the central distributor, have it processed by the processing computer, and then return the result directly to the distributor.
    In this application, however, that is just not a possibility. Consider the simulations this computer will be used for: nanosecond-scale simulations of the initial criticality of a chunk of uranium or plutonium. This is just not something that can be broken up into chunks and handed out for processing. For the simulation to be even vaguely useful, each atom or particle of fissionable material simulated must interact with all of the other atoms in the simulation. Thus, if someone were given a chunk of, say, a million atoms/particles/lattice points to process, their sub-simulation would need to communicate continuously with all of the other processors' sub-simulations. Even if we look at the system conservatively, and say that each particle will only be affected by the particles close to it, this still requires the exchange of vast quantities of information at each step (a toy illustration of the cost follows below). And that's not even taking into account the effects of simulating the neutrons and other sub-atomic particles zipping around. The processors would have to have huge amounts of data transfer capacity to make this feasible.
    Of course, if I misread your intent, and you're actually suggesting that every citizen should be given these huge amounts of bandwidth to enable the distributed processing, I'm behind you all the way.
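
    To put a toy number on that, here is a rough Python sketch of what per-step boundary exchange would cost a volunteer node holding a million-particle chunk. Every figure in it (the halo fraction, the step count, the link speeds) is invented purely for illustration; only the million-particle chunk size and the nine-values-per-particle state come from the discussion above.

    # Toy cost model: a chunk must swap its boundary ("halo") particles
    # with neighbouring chunks on every time step. All numbers invented.
    PARTICLES_PER_CHUNK = 1_000_000
    BYTES_PER_PARTICLE = 9 * 8            # nine doubles of state per particle
    HALO_FRACTION = 0.1                   # guess: 10% of particles sit on a boundary
    STEPS = 1_000_000                     # tiny time steps, so very many of them

    halo_bytes = PARTICLES_PER_CHUNK * HALO_FRACTION * BYTES_PER_PARTICLE
    modem_s = halo_bytes / 7_000          # ~56k modem, roughly 7 kB/s
    switch_s = halo_bytes / 500_000_000   # hypothetical ~500 MB/s machine-room switch

    print(f"Boundary data per step: {halo_bytes / 1e6:.1f} MB")
    print(f"Over a modem: ~{modem_s * STEPS / 86_400 / 365:.0f} years spent just shipping halos")
    print(f"Over a fast internal switch: ~{switch_s * STEPS / 3_600:.1f} hours")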
  • Did you really say "Although nukes COULD be used for the noble purpose of deflecting incoming Comets/Asteroids."?????

    Are you nuts or what?

    The reason we have an international ban on nuclear reactions in space is so that we don't destroy not just ourselves, but the rest of our known universe.

    If a nuke was set off in near-vacuum space, the chain reaction would still occur, but at huge distances -- I'm not sure of the mechanics of explosions in space versus bodies of large mass (which will have an awful lot to do with the type and size of atmosphere, if any), but there is absolutely no reason to assume that chain reactions among the far more dispersed matter of near-vacuum will not be absolutely catastrophic.

    Feel free to correct me on this, but isn't there good reason to suggest that, based on reactions that happen on Earth, the chain reaction in space could basically rip through all known near-vacuum, impacting every large-matter body in space as well?

    In other words, wouldn't it be like, instead of one point nuke within Earth atmosphere, simultaneous nuking of the whole outer atmosphere of every planet or star or comet everywhere?

    Since I am relying on memory posting this, I will research further over the weekend - in the meantime, feel free to reply candidly.

  • Perhaps because only ASCI can afford those way too expensive computers?
    I think when ASCI Red was unveiled, they allowed for up to 50% of the CPU time to be used for non-ASCI research projects.

    The top-of-the-line supercomputers are listed in the TOP500 Supercomputing Sites latest list [top500.org]. Clearly there are more than just the ASCI nuclear research computers. I don't remember Slashdot reporting the installation of the two new non-ASCI teraflop computers, though. Perhaps only the best is of interest to the Slashdot crowd.
  • If you think Windows is bad, try the SP cluster on for size. I've used one. The hardware is great. AIX takes some getting used to, but it's okay. But an SP cluster used for what it is marketed for (a cluster of redundant machines, offering uptimes beating, say, a Sun Enterprise 10000) is a very fragile environment. For one thing, if the RS/6000 that controls the beast crashes (for example, you fat-finger a command and need to reboot it), for all intents and purposes you'll have to reboot the cluster.

    Maybe the SP software came a long way since I last saw it, but what I saw tells me that using the box for anything other than one application that will benefit from massively parallel computing is a waste of money. IBM hypes the massive parallel potential for a good reason, but the sales droids will sell it for applications that just don't work out.

    Which is a pity. The hardware is nice.

  • Would you rather they did open-air explosions to do their tests?

  • From the article:
    "They're trying to solve a very important problem there, "Josephs said, adding he thought the $110 million price tag for ASCI White was actually pretty reasonable. "The only other way to do this would be to take old nuclear weapons and blow them up. This really is a bargain."

    Hold on, because there's no other way to test it, it becomes an important problem? I mean, unless someone plans on shooting off a nuclear missile, it's not that important. Which brings up the interesting question of who in the US is planning on shooting off a nuke? And if you argue that the US is testing to see what happens if someone nukes the US, then I don't see how these calculations are going to make a difference when a nuke is coming our way.

  • You mean a packet with the 'OOB' (out of band) flag set... right?
  • Something that has been bothering me as of late is the speed of computers and how the companies are showing this off.

    Take, for example, PC Expo in NYC the other day, where I was looking at the 1GHz Athlon chips from AMD. Now, it's certainly quite impressive that these chips run that fast, but how are they showing it off? A TV tuner card showing a video, a Windows machine doing nothing special, and someone playing a 3D car racing game. What does this show? a) how well your video card works with the TV tuner card, b) nothing, and c) how fast your 3D rendering card is.

    What I WOULD like to see in future tests is how fast the Linux kernel compiles. That is, at least, how I test how fast a machine is. Of course, it also then takes into account how fast your disk is (interface and drive), but there is still the raw processing power. Granted, you could argue Gaussian algorithms, but that is FPU speed. I want to see the processor as a whole.

    So, my proposition is: The new benchmark for Slashdot readers should be how fast it compiles the kernel with the default options :)

    My two cents; no refunds.

    --

  • How come almost every time there is a post about supercomputers, they are being used for nuclear bomb explosion simulations?

    As usual, there's a good article over at The Bulletin of the Atomic Scientists [bullatomsci.org] on why there's so much government desire for bomb simulation.

    We have treaties and treaties on why we can't test these devices "for real". Given the desire to upgrade them without "upgrading" them in a way that affects counts or treaties, there's currently a lot of interest in how to re-use existing designs and components in ways that give functionally new weapons, without being listed as such. Converting air-burst devices to near-surface burst devices turns town-killers into bunker-killers, but it doesn't have to appear as building new weapons or changing the type of existing ones.

  • The 'euphemism' "Stockpile Stewardship and Managment program" actually aims for a way of not having to make any more nuclear weapons by maintaining and conserving the current ones.

    If it turns out that the expiration dates of the weapons is far enough in the future, no new weapons will have to be made for a while, BUT to verify the durability, some experiments will have to be made, and since real-world nuclear tests politically sensitive (how's this for a euphemism? :-)); doing the experiments virtually, would be a boon.

    Personally i wouldn't mind seeing all of those weapons passing their expiration dates, but spontaneous detonation would be rather nasty.... (though highly improbable i guess)

    Sander
  • by Anonymous Coward
    I couldn't agree more. Testing of nuclear weapons should only be done on humans or animals. There is no need to bring computers into this.
  • Why? Bogomips means "bogus MIPS" and is only a relevant rating between computers of the same architecture. A bogomips comparison between a 500MHz Alpha and a 500MHz PIII isn't as reliable as a comparison between two PIIIs with different clock speeds; the "speed" difference isn't reliably represented. It'd still be cool to see the rating, though!
  • The physics of a nuclear blast lends itself to needing the most powerful computer you can get. Besides, with the limitations on nuclear testing, you are forced to depend on computer simulations which naturally directs your budget into obtaining larger and larger computers.

    By the way, the weather service did get themselves their own parallel computing cluster [noaa.gov] (running Linux, by the way). Incidentally, the progress made in simulating nuclear blasts carries directly over to astronomers who simulate supernovae.

  • Er, a piddly nuke would do this, whereas the 22-megaton-per-second fusion ball that is our sun hasn't managed to do it over the past ~5 billion years?

    I suggest that the main reason for this treaty is that nobody likes the idea of nuclear weapons in orbit pointed down. Hell, a kinetic harpoon is destructive enough at 11 km/s.

    or did you miss the </sarcasm>?
  • I just went to a meeting about that last week... For a while, both machines (ASCI Blue, the 3 Tflop machine, and ASCI White, the 12 Tflop machine) will be available. This is because it usually takes many months for a system that size to become stable enough that users can do "production" work on it. So although they're assembling it at LLNL now, it probably won't be used by your everyday user until December or so.

    When the machine is ready for "general" use ("general" as long as you have a Q clearance!), then the plan is to move the current machine to the unclassified side, and open it up for use by the ASCI alliances [llnl.gov] and other unclassified users.

    They should be able to simply add it on cluster style as you suggest, since the current machine on the unclassified side is basically the same architecture. I can't tell you for sure that it's what they'll do - but if they do, they should have about a 4-5 tflop machine for unclassified use by the end of the year.

    As far as what will happen to ASCI White when they're done with it -- it's only being rented from IBM, so it'll go back to Kingston or wherever...

    --Rob

  • maybe it needs to run an application that tracks DOE hard drives
  • onk the caveman wants 12 teraflop computer so onk can look at super high-res pictures of nude cavewomen. maybe even 3-d vr simulation of them. unf.
  • a Beowulf cluster of these?
    Seriously though. This is using the Power III-3. Isn't the INSANELY fast Power IV just around the corner? When will that sucker arrive?
  • Actually, they could hand them out to the public.. these are not tactical simulations, but actual particle simulations.

    I believe the problem arises in the amount of shared data required between nodes... it's not like cracking a key, where you can just chop the keyspace up into as many pieces as you like and work on them all separately. You have to have the entire dataset in order to work on it properly.
  • They aren't working to improve the explosion. It works well enough already.

    They are ensuring that the bombs will go off reliably. The thinking is that if we have a working nuclear stockpile, enemies will think twice before attacking us. If our weapons get old and fail to work, the deterrence will be lost. How do we know if our old weapons will work? Blow one up for real, or simulate it.

    I'd rather they simulate it.
  • Yes it does, 1600x1200 @ 5 billion frames per second ;)
    ---
  • by toofast ( 20646 )
    How long before the Beowulf cluster posts?

  • ... a Beowulf cluster of these puppies
  • Should this article be given (-1: Redundant)?
  • They already used 'Blue'. Twice.
    ASCI Blue Pacific (Livermore) 1999
    ASCI Blue Mountain (Los Alamos) 1998

    And Red.
    ASCI Red (Sandia) 1999

    So, being American... it's time for White, yes?

  • I went into the machine room of a major animation company during an interview, and among the racks of Origin 2Ks and disk arrays they had a couple of midrange Crays.

    I assumed they were for graphics processing (that's what the Origins were for), but it turns out they were simply the fastest fileservers on the market at the time of purchase (an important thing if you're pushing around multi-GB files).
    --
  • efficiency.

    More kill for the buck.
  • by Anonymous Coward
    Screw that where's my damn flying cars???!!!
  • There aren't nuclear weapons simulations per se, because such things require tuned parameters from actual tests and are not directly useful for anything other than making nuclear explosions, but there are open source (mostly public domain, actually, since they were government sponsored) particle simulations which can be used for things like simulating the propagation of radiation in the human body.
    --
  • I think they should put this supercomputer to the real test, not just trust IBM on it. They should ask it to provide the answer to life, the universe, and everything... Of course, we already know the answer is 42, but the computer doesn't.
  • by FascDot Killed My Pr ( 24021 ) on Thursday June 29, 2000 @03:00AM (#969088)
    IBM Constructs New Fastest Computer
    IBM's ASCII White Super Computer Unleashed

    It's CmdrTaco coming down the stretch on New Fastest Computer, but here comes timothy on ASCII White, it's Taco, it's timothy, Taco, timothy....timothy by a nose!
    --
  • Apparently, you need to release a similar amount of energy to simulate in real time...
  • Maybe they should have just listed the floor square footage or cubic feet (or centiliters) of volume. There ya go, a tough-to-grasp representation. I'll admit the elephant thing is a little dumb (as is the calculator analogy), but the basketball-court analogy seems to be right on target for giving a semi-useful description...

    Of course, they could have just said, "It's way bigger than a regular PC" but that wouldn't have helped ;-)
  • by Fross ( 83754 ) on Thursday June 29, 2000 @03:44AM (#969092)
    To get an idea of the scale of this, the whole SETI@Home project is generating about 8 TeraFLOPS [berkeley.edu]. This thing tops that by about 50%. So it could process about 500,000 SETI units per day, or just under six *per second*.
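
    (Spelling that out in Python, using only the figures quoted above:)

    ASCI_WHITE_TFLOPS = 12.3
    SETI_TFLOPS = 8.0
    UNITS_PER_DAY = 500_000    # rough SETI work-unit throughput quoted above

    print(f"ASCI White vs. SETI@Home: {ASCI_WHITE_TFLOPS / SETI_TFLOPS:.1f}x")  # ~1.5x
    print(f"{UNITS_PER_DAY:,} units/day = {UNITS_PER_DAY / 86_400:.1f} units/sec")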

    Keywords: Quake 3, Kernel compilation, Beowulf, Toy Story 3 in realtime? :)

    Fross
  • The supercomputer industry is fading. Only the government is willing to shell out big bucks for top-end machines, and they only have a handful of applications that qualify: weather prediction, bomb simulation, airplane design.

  • Large-scale distributed computing over heterogeneous networks is only suitable for computations that can be broken into discrete computational work packets.

    This is exactly what Distributed.net and SETI are doing, as far as sending out a discrete block of data, having the client crunch it, and then returning the result.

    Some computations require a network of interdependencies in the data set, such as large-scale simulations, which is what this computer will be doing. In such scenarios, there is no way to 'break up' the computational tasks into neat little discrete packets, since they are interdependent.

    This requires lots of very fast networking (ccNUMA, very large SMP, etc.) on hardware designed specifically for this type of task.
  • When you get down to it, a "new" (just off the assembly line) nuclear device is relatively controlled and predictable. It won't blow up until you tell it to, and you know how it'll react when you push the button.

    As they age, though, you don't really know what'll happen. And that's why we have simulators. As others have pointed out, there are only two ways to know whether a device that's been stockpiled for 15 years will work: simulate it, or take it out to the desert. Now, since we can't exactly take one out back and set it off, we buy a bigass computer and simulate it.

    Or would you prefer that one just randomly go off while sitting at the dock inside an Ohio-class submarine at Groton, CT? Or maybe, if we return to the '50s mindset of 24-hour alert for bomber crews fully loaded for WWIII, with the occasional scramble to test readiness, that a B-2 flying over Topeka hits some turbulence and levels half of the already-flat state of Kansas?
  • Actually, in a sense, this is a "Beowulf cluster" -- it's a big collection of separate machines with high-speed interconnections. It probably doesn't use "Beowulf" technology, however. Read the press release.


    ...phil
  • by SvnLyrBrto ( 62138 ) on Thursday June 29, 2000 @07:08AM (#969104)
    Pretty much anything that adds to the knowledge base is ultimately a GOOD THING(tm). For starters, when you fund a project like this, techies learn HOW to build faster, better, more powerful supercomputers. And it's a research project that'll add to our understanding of atomic physics...

    You know, more goes on in a nuclear weapon than fission. If we can properly simulate the beginnings of FUSION, that could be an important step towards commercially viable fusion power plants! Cheap, clean, unlimited energy... worthy goal, I would think.

    Additionally, even if the data from this box *IS* indeed ONLY ever applied towards nuclear weapons, that's still MUCH better than the alternative: which is to withdraw from the Test-Ban Treaty, and start setting the things off for real again.

    SIMULATING something is NOT morally equivalent to DOING that very thing. Otherwise, quite a lot of Quake, Carmageddon, and GTA players would be sitting in jail right now.

    And, hell, even nuclear bombs, as they exist now, and as they could be refined, have potentially non-military uses. No, I'm NOT talking about Teller's harbor in Alaska, or the ridiculous scenarios in Deep Impact or Armageddon... although nukes COULD be used for the noble purpose of deflecting incoming comets/asteroids. The implementation, as presented, just sucked.

    Actually, what I'm talking about *WAS* mentioned in Deep Impact. I'm talking about the Orion drive. If we are ever smart enough to withdraw from the ridiculous treaties which prevent its deployment, Orion could be the answer to all of our short-term space exploration problems! Until we perfect fusion, it IS the most powerful drive system proposed for deployment. Imagine how FAST we could get to Mars, and how much equipment we could take along, if we used Orion rather than ridiculously inefficient chemical rockets!

    Or, for the peaceniks out there... Wouldn't that be the ULTIMATE "swords into plowshares" situation? Imagine... the nuclear stockpiles of the world, ultimately directed not towards mutual annihilation, but towards the exploration of the final frontier!

    We HAVE the way, all we need is the will.

    john
    Resistance is NOT futile!!!

    Haiku:
    I am not a drone.
    Remove the collective if

  • First, money is going into fusion, and not into "simulating it." For the last decade work has been committed towards NIF, the National Ignition Facility. NIF will be the largest laser, replacing the recently removed Nova laser. The Nova laser could only get to the beginnings of fusion, at best, and did so at an extremely large energy loss. The Nova laser design has been copied around the world as part of reducing nuclear tests (the mathematics is similar enough that we can test "in lab"). Actually, France had LLNL build their copy (easy to tell, as the paint is the Livermore colors), due to cutbacks in their own departments (I forget the reason, to be honest). NIF is an ICF laser, and is currently being built at Livermore; it was originally projected to be completed in 2001, but will likely be fully operational in 2003. ASCI and NIF should be able to work hand in hand to supply nuclear research without nuclear weapon testing.

    On early projections I saw, oil was thought to be nearly depleted by 2002, and the first fusion power plant was expected to be operational by 2010. NIF creates low-level (and low amounts of) nuclear radiation, unlike fission. Both fission and fusion plants could be designed to be safe, though the current water method for fission is cheap, so it's heavily used. To build either in the U.S. would be impossible, due to lobbying and lack of licensing.

    Now, for the nuclear-based space issues: that's ridiculous. Current fusion has only just begun to obtain more energy than it took. More importantly, NASA almost didn't launch a probe 1.5 years ago because it was feared that the nuclear material it was using (for power) could be caught in the atmosphere. If any space-bound vessel blew up and got caught in the atmosphere, the radiation would spread and cause severe global problems. Sending up radioactive materials is highly risky, so it is rarely ever done.

    If you wish to learn more about NIF and nuclear energy, I recommend you look at LLNL.gov, which for years has had an excellent set of pages explaining ICF and other details for visitors.
  • Isn't the idea of these aggregate computers (clusters, whatever) that you can just keep on growing them?

    Add a couple hundred nodes, buy another switch or two, increase your flops by a couple of hundred g?

    Johan
  • by ch-chuck ( 9622 ) on Thursday June 29, 2000 @03:57AM (#969116) Homepage
    These things must be the latest fashion in international peeing contests -- it used to be that the US was upset that the USSR had enough missiles to blow up the US 20 times over, and we USians could only blow up the USSR 15 times, so we (USians) had to make and deploy more missiles to achieve 'parity' and get the USSRians back to the negotiating table.

    Now-a-days, I guess the US is afraid that China will have better nuke simulators than the US so we gotta beat 'em at it, it's "Keeping up with the Chin's" all over again.

    I'd rather see the funds go toward a modern supercollider, but, pfft, I only pay 1/3 of my income in taxes; I don't have any real say in how it's going to be spent.
  • The nuclear simulations do help in the stockpile stewardship program, by preventing those large craters. Also, from what two lab officials working heavily on the NIF project (who also worked on past projects) told me, much of the work allows scientists to keep the stockpile updated. Old bombs become dangerous, and the government routinely signs treaties requiring that new techniques be created. The advancement in the research alone helps numerous industries, and building the machines of course fuels that technological research. So it's not all that horrible, and this research (under the treaties) helps stop other countries from conducting nuclear tests. Simulating them in the lab is far better than on Bikini Atoll.
  • Ohh! I forgot to tell you about real uses for nuclear weapons beyond war. A long time ago there were some models designed for construction purposes, in which the blasts cleared the land and left no residue. The area is safe and normal after 48 hours, and no radiation will be detected. Lab employees must wear badges so the lab can check for any nuclear exposure, and those that have entered the pit were checked out as fine. There is an old picture of the pit, with a small shack at the bottom. I'm sure this was in an old issue of Science, or some magazine, when it was first done.
  • Nuclear simulation, like fluid dynamics, is basically a cellular simulation -- make several bazillion cells, time-step each one, communicating only with the neighbors on each step.

    (I think. I'm actually making this up as I go along, so add salt to taste.)

    Now the problem is that since the entire simulation goes in lock-step, the number of steps is limited not only by computation speed, but also by communication speed between the nodes.

    I presume that there are smart approximate approaches that can be used to assuage this, but it remains the case that distribution and cellular simulation just don't mix. (A toy sketch of the lock-step pattern follows.)
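
    Something like this, in toy form (Python; a made-up 1-D diffusion-style update, nothing to do with any real lab code):

    # Toy lock-step cellular simulation: every cell is updated from its
    # neighbours' values at the previous step, so all cells (and hence
    # all nodes holding them) must advance together.
    N_CELLS, N_STEPS, ALPHA = 1000, 500, 0.1   # ALPHA is a toy coupling constant

    cells = [0.0] * N_CELLS
    cells[N_CELLS // 2] = 1.0                  # one "hot" cell in the middle

    for step in range(N_STEPS):
        prev = cells[:]                        # everyone reads the previous state...
        for i in range(1, N_CELLS - 1):
            # ...and talks only to its immediate neighbours.
            cells[i] = prev[i] + ALPHA * (prev[i - 1] - 2 * prev[i] + prev[i + 1])
        # On a parallel machine, this is where each node would have to exchange
        # its boundary cells with neighbouring nodes before the next step --
        # which is why node-to-node communication speed limits the whole run.

    print(f"Peak value after {N_STEPS} steps: {max(cells):.4f}")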

    Johan
  • lol, alrighty. I'll give you that. The early uncontrolled fusion reactions were triggered by fission, and I don't know about current methods.
  • Isn't the idea of these aggregate computers (clusters, whatever) that you can just keep on growing them?

    Add a couple hundred nodes, buy another switch or two, increase your flops by a couple of hundred g?



    Any idea if that's what they did? None of the articles has said whether they just plugged 9 more teraflops' worth of power into the existing 3, or put in a whole new 12 TF system. That would be interesting to know.

    Kintanon
  • My Geforce 2 GTS can render that nuclear blast in real-time! ;-)

  • Most people who ramble on about NP have no clue what the N actually stands for. Your question has no real meaning, in the sense that it asks about an attribute that is not associated with the problem (rather like asking "How many doors are on the dog?"). The simulation is polynomial w.r.t. the number of particles being simulated, but exponential w.r.t. the mesh granularity; it's entirely practical to approximate systems of non-linear differential equations to as much precision as one wants to wait for. In this case, a more powerful computer means that one only has to wait months instead of years to get results starting from physical first principles (instead of experimentally derived heuristics).
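
    For anyone wondering what "solve a set of DEs to a specified tolerance, and repeat" looks like in the small, here is a generic sketch (Python, a step-doubling Euler integrator on a toy equation dy/dt = -y; just the pattern, obviously nothing resembling the real codes):

    def f(t, y):
        # Toy right-hand side: dy/dt = -y, whose exact solution is exp(-t).
        return -y

    def integrate(y, t_end, tol=1e-6):
        t, h = 0.0, 0.01
        while t < t_end:
            h = min(h, t_end - t)                           # don't step past the end
            full = y + h * f(t, y)                          # one Euler step of size h
            half = y + (h / 2) * f(t, y)
            two_half = half + (h / 2) * f(t + h / 2, half)  # two half-size steps
            err = abs(two_half - full)                      # local error estimate
            if err > tol:
                h /= 2                  # too inaccurate: shrink the step and retry
                continue
            y, t = two_half, t + h      # accept the more accurate result
            if err < tol / 10:
                h *= 2                  # comfortably accurate: try a bigger step
        return y

    print(integrate(1.0, 1.0))          # roughly exp(-1) = 0.368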
  • NO!!! The operating principles are as removed from a Beowulf cluster as a bicycle is from a car ("Yes, they both have wheels, gears, and are made of metal"). The RS/6000s are not commercial off-the-shelf parts; they are substantially modified to allow greater internode communication bandwidth and lower backplane latency. The network switch (known as the interconnect) is orders of magnitude faster than even gigabit Ethernet; it is entirely custom built and accounts for a majority of the development cost of the machine.
  • The OP's comment is utterly moronic. Solving ODEs and PDEs is a task perfectly well suited for a conventional computer. Quantum computers do *not* double their performance each time a single atom is added; not even when a single qubit is added to the device. Obviously, you are a troll or otherwise attempting to exploit the moderation system.
  • by redgren ( 183312 ) on Thursday June 29, 2000 @04:05AM (#969137) Journal
    Actually, if you ask anyone who is working on these (and I am one of those people -- standing in the middle of this monster is just plain cool), it has gone beyond a national pissing contest. It is now a corporate pissing contest, and we all know that corporations are bigger than most governments. The US government (taxpayers) funds the competition, but this is all about IBM beating Sun in the latest round.
  • Yeah, the Japanese machines are probably TCM-based sysplex units a la older IBM ES/9000-type mainframes. I believe the largest off-the-shelf TCM machine is a 12-way. These processors are enormously fast, consume tremendous amounts of electricity, and throw off vast quantities of heat, which is why they're water cooled. If we compare the performance of TCM units vs. the latest CMOS mainframe-class CPUs, it still takes about 3 CMOS to match the raw performance of 1 TCM.

    Now, moving down the scale, the IBM-like mainframe-class CMOS CPUs themselves are built specifically for mainframe machines and have very, very high performance baselines. How high? Hard to tell, since IBM will not publish performance benchmarks for mainframe machines that can be compared to other types or brands. Instead they use an internally derived benchmark based on a 'commonly' known performance figure for some well-known IBM-class mainframe, like a 9021-831 or something like that. Any other machine is evaluated as a factor of that.

    At any rate, the latest mainframe-class CMOS machines have complexes, the rough analog of SMP cages, that contain at least one CPU (up to... I don't remember; you can check). Each complex or base machine model is then sysplexed to other same-type machines, up to 12 or 14 machines or even higher. This is what the Hitachi/Fujitsu machines do: they build an x-way complex and then sysplex all the complexes together. As a rough comparison, a VERY large commercial sysplex is typically a 12-way with each complex containing 12-24 individual processors, for a total of 144 to 288 discrete CPU chips. This honestly is the high end of the high end for standard (non custom-built, special-purpose) mainframe-class machines.

    Compare this to an IBM RS/6000 SP2 frame with, say, 8 nodes of 12 processors each, and ganging 20 or 30 or more frames together across a second-level backplane switch, for a total assembly of at least a few thousand discrete CPUs doing more or less the same work. At least in the commercial world. In the nuclear simulation world, obviously, you want the highest possible FP performance, so a TCM-based mainframe design is enhanced with additional or different vector processors, compared to simply exploiting the general-purpose FP performance of whatever RISC CPU you're using. Another reason why the numbers of CPUs in the two classes of machines are so different.
  • by Blue23 ( 197186 ) on Thursday June 29, 2000 @04:08AM (#969139) Homepage
    IBM is really working to keep the top spot. They've got another one on the drawing board, with a different architecture than this, called SMASH (love that name!). It stands for "Simple, Many, And Self-Healing".

    Here's a link to an article, but it's a bit dated:
    http://www.ibm.com/news/1999/12/06.phtml

    =Blue(23)

  • by tringstad ( 168599 ) on Thursday June 29, 2000 @04:13AM (#969142)
    Dear Citizen,

    We have built this giant computer to simulate Nuclear Explosions. Previously, we couldn't predict the outcome of a Nuclear Explosion. We did not know if it would kill a few million people, or a few billion. Until we had the ability to simulate it we couldn't be sure, and if we aren't sure, then we can't protect you. So please continue to send us more tax dollars to support the electric bill for our new Nuclear Explosion Simulator(TM) and we can continue to protect you. Also, it's good for children.

    On an unrelated note, please feel free to update your PGP keys to the longest possible key length you can use, we believe you have every right to your privacy.

    Yours Truly,
    Big Brother

    -----

    On a more serious note, how much ass would we kick if we could get this badboy to join Team Slashdot over at distributed.net?

    -Tommy
  • If they had posted an article (or two) stating that the U.S. was going to resume above-ground testing, how would that make you feel?

    This is the "alternative" that they are always talking about in those debates. So, quit whining or we're gonna have to make Nevada glow.


    #VRML V2.0 utf8
  • by Durinia ( 72612 ) on Thursday June 29, 2000 @04:48AM (#969146)
    If you're talking about the $100-200 million dollar behemoths like this one, you're right - the government is the only one that can shell out that much for it, and actually use it for something. As a whole though, the supercomputing market is not disappearing. It may not be a quickly growing market, but the buyers are there nonetheless.

    There are a lot of industries that use these technologies -- major car manufacturers (Ford, GM, etc.) use them for their designs, and to do crash-test simulations. It actually saves them tons of money in the end -- not having to build a bunch of prototypes and run them into walls :). Boeing's 777 was actually built straight out of the computer -- no mock-up models or anything. Believe it or not, even Disney is a big supercomputer customer.

    Have you been watching the news about the Human Genome lately? Those companies (and the gov't too) said that high-powered computers accelerated their research by a huge margin. You can bet that our future biotech industry will try to stay ahead by pushing the speeds of simulations.

    You're right in that the divisions of the gov't are shelling out for the biggest computers; just don't ignore the business sector so easily. I think the word "fading" is inaccurate -- "constant" or possibly "stagnant" might be a better description.

  • Since the other story is gone from the home screen, jump to it and steal all the +5 stories and repost them here for free karma!
  • How come almost every time there is a post about supercomputers, they are being used for nuclear bomb explosion simulations? While I realize that this is a better thing to simulate than to actually do, aren't these computers being used for anything else? Is it that the people who these computers are being built for only want them for those purposes? I just think it would be great to see an announcement mention that a supercomputer would be used for analyzing weather patterns, help with the human genome mapping effort, or something else, well, different. :-)

    Ok, what I want to know is: where did the old computer go? They had a 3.??? teraflop computer before. Now they have a 12.??? teraflop computer. What did they do with the 3? Scrap it? Give it to another branch of science? Sell it? Stick it in a warehouse? Why can't we take it and set it up in a big room and let every research facility around that wants time on it buy some? Or even just allocate X amount of time per month for each scientific institution and let them use it to further research. It would very much suck if they just threw the thing away....

    Kintanon
  • The mainframe business is picking up again, though, and today's mainframes have more power than the supercomputers of years past. Heck, the IBM S80 (not a mainframe, just a mid-level machine) kicks the pants off older mainframes and some supercomputers from several years ago. The power is there now in a cheaper, more accessible form -- not every evil overlord needs a supercomputer when even a regular PC or low-end Intel-based server is far more powerful than they used to be.

    Darn exponential curves 8^)
  • Well, it's an IBM computer and I don't think IBM would really want to be using chips designed by SGI/MIPS. SGI has constructed ASCI Blue Mountain (a 3 teraflop machine) and has submitted a bid to build a 30 teraflop machine for the next phase of the ASCI program.
  • No. The interactions between particles during the initial phases of a nuclear explosion are highly nonlinear and often not in local equilibrium. The interactions are orders of magnitude more complex than the gravitational force; there is no known way to recursively block and calculate aggregate forces and effects. Besides, the objective of simulating nuclear explosions from first principles means that the usual simplifying heuristics cannot be used; one cannot substitute simpler equations (if an explicit form even exists, which it usually doesn't) for the systems of differential equations that need to be solved.
  • You need internode bandwidth and low communications latency; this type of simulation can only be done with a large, monolithic memory space machine.
  • "How the power of this computer compares to that of Distributed.net or similar projects?":
    A comparison is not really possible. d.net can do more raw operations per second if one just adds up the total power of all the machines involved. However, ASCI White has far better internode communication. For the purposes of doing highly parallelizable calculations like distributed FFTs on blocked data and brute-force sieving or key searches, d.net is probably faster. For the purposes of doing tasks like numerical linear algebra, nuclear/non-linear dynamics, weather forecasting, etc., ASCI White would be faster.

    "How feasible is the distribution of such a computation? Are all the calculations similar, or would a lot of different computational code have to be written?":
    It would essentially be impossible to distribute the computational task. Even if the initial value data could be distributed (and it can't; even simpler FEA simulations routinely have datasets exceeding 30 GB), the process would require so much internode communication that any d.net-type system operating over a heterogeneous, low-bandwidth, high-latency, public network like the Internet would never work.

    "Are there any such systems already in place? Currently, I'm only aware of one "useful" system, and that's ProcessTree (damn, I lost my referral number). SETI@Home is arguably useful, depending on whether you believe there is extraterrestrial life that uses the same radio waves we're scanning and is sending signals we could interpret.":
    Not for using distributed computing for these kind of tasks.
  • Interesting. Despite the fact that the timothy article made it first, it got deleted. Hmmmm....

  • by Tower ( 37395 ) on Thursday June 29, 2000 @04:26AM (#969162)
    IBM's ASCI
    Draws 1 Point 2 Megawatts
    The West Coast Goes Dim
  • by Matt2000 ( 29624 ) on Thursday June 29, 2000 @04:27AM (#969166) Homepage
    And my vote for worst processor name in current production: IBM's Power3-III!

    Jeez, get some imagination ya nerds.

    Hotnutz.com [hotnutz.com] - Funny
  • And how come they only give us the flops count?

    how about more information like frame rate?



    __________________________
  • I agree in principle with you on this, however I don't think it is the function of government to liberate people from their stresses and difficulties.

    I believe it is time for us to reconsider the role of the military. After WWI (or was it WWII?) we renamed the War Department the Defense Department. This reflected a more peaceful mindset. Since we haven't had to defend land and life in the US for over 50 years, it seems appropriate to rename it the Foreign Economic Interests Department, since the military is used as a pawn to secure American economic interests.

    Of course, this is a separate matter again, but certainly there is something better to simulate than an explosion. What could they possibly seek to understand about it other than how to improve it? That being the last thing society needs.

  • OK, you've already been moderated down (which is good), but honestly -- what else would you have them do? Play Quake at 50k frames/sec? Crack crypto keys?

    'Nuclear simulations' doesn't just mean figuring out how many people they can kill per megaton. It ties in plasma physics and a dozen other related fields, which tie in closely to astrophysics, i.e. how stars happen. Then there's the residual interesting bits, which can lead to advances in almost any random field. (Maybe not random, but indeterminate.)

    Fundamentally, they're using tons of computing power to investigate stuff we _don't_know_ yet. They're not just cranking out bigger numbers faster, but looking in different directions.

    To say "I don't think we need more (fill in the blank) research" is to utterly fail to understand how good science works, and ties together.
  • The difference between the 16-way nodes in ASCI White and a standard RS/6000 node is largely a matter of packaging. They are half-width 4U rackmount boxes instead of standalone or full-width ones. You can buy the same (electrically) 16-way SMP node as a standalone webserver or workstation.

    The switch is what puts the super in this supercomputer. But when you break it down, it is just a fast network -- yes, orders of magnitude better in all ways than 10Base-T, but still just a fast network.

    And as far as operating principles go, it is just IBM's flavor of MPI. Nothing special there. All the money went into the switch.

    Maybe the horsepower equation deserves the bicycle/car comparison, but the operating principle is the same, i.e., a bunch of standalone Unix nodes connected by a high-speed network, clustering software, and MPI (or PVM).

  • With these computers, you can test nukes without having to actually blow them up.
  • This is a nice piece of kit, to say the least -- who wouldn't want one for themselves? -- but look at the use it's being put to: running simulations of nuclear bombs being used. Yes, it's again part of the $1 trillion USian military machine, even if it is given a more publicly acceptable face through the Department of Energy's "Stockpile Stewardship and Management Program", a euphemism if I've ever heard one.

    Why is it that the Pentagon still gets to spend so much money on fancy new toys for a war that will now never come? The USSR has collapsed, and since the US is now sucking up to China it looks like there isn't going to be the proposed World War III that US military leaders have been hoping and planning for for decades. And whilst this $1 trillion goes into the military black box, poor people starve on the streets and can't afford even basic health care, thanks to the Randite social policies of the US, despite what their Constitution supposedly guarantees.

    No, on purely technical merits this computer is interesting, but I don't think that we, as responsible people, should be praising something which is part of a group that contributes in a large way to the suffering of the poor and needy.


    ---
    Jon E. Erikson
  • ...and they're being used for solving problems. Sure, they're not in the news (because they're not the fastest anymore), but that doesn't render them any less valuable.

    Blue Mountain (at LANL), for instance, was on the order of 6,000 R10K processors; if you hunt around enough, you can still find the web pages about it at the Lab. (Again, I'm lazy. Sorry. :-)

  • It's really not that hard to upgrade these suckers. Intel has already upgraded the Intel/Sandia machine to use PIII Xeons.
  • Not only that, but the POWER chips are WAY WAY faster than the latest MIPS.
  • If you want to argue that nuclear weapons should be abolished, fine, I applaud you. But our government is nowhere near such an abolition. Given that, it makes absolutely no sense to simply "trust" that weapons built twenty years ago will function perfectly if we "need" them. That is, given that nuclear weapons are still very much a part of our strategic arsenal, it would be utterly foolish for us not to guarantee (at least to the extent possible) that they still work. That's the point of the "Stockpile Stewardship" program. In the context of a society with nuclear weapons, there are two real alternatives -- spending oodles of money just to keep them around, plus having everybody hate us for having such power, but NOT KNOWING if they're actually going to do us any good or not; or actually periodically doing above-ground tests to see how the weapons are holding up. I find the simulations a more palatable option than either of these.

    Sorry to rant a little. My point is just this: argue all you want that we shouldn't have nukes; write your congressmen, campaign on Capitol Hill, etc. I wish you luck. But until that day comes, it makes sense for us to do this sort of simulation.

  • This machine can run 100 hours in a row? It takes 2 hours to boot? It takes an army of white-suited programmers to support? Damn! I was hoping I could use it to check my email, but now...
  • Of course, depending on the number of procs, you get different results with the different -j options
    make -j2 zImage
    make -j4 zImage
    make -j zImage
    etc...
  • Now-a-days, I guess the US is afraid that China will have better nuke simulators than the US so we gotta beat 'em at it, it's "Keeping up with the Chin's" all over again.

    Well, China being so close to Japan, wouldn't you be worried about them buying mass quantities of PSX2 units? I mean, come on, this ASCI thing may be pretty fast, but nothing beats a PSX2 for weapons design, you know, with that easy-to-use controller and those supercomputer-class processors.

    -- iCEBaLM
  • Do you really need 12 teraflops to tell you the result of a nuclear explosion? Hell, my old 386 could figure it out for ya.....
    1. KABOOM!!!
    2. Things die
    3. Earth contaminated
    4. ... shouldn't have done that
  • by MSG ( 12810 )
    The day you realized that atoms, too, had subparticles, that was an epiphany.
    The day you realized that splitting an atom would release megatonnage of energy, that was an epiphany.
    The day you realize that it would take over 25 years to simulate one second of the blast in a computer, that was an epiphany.
    It's a new kind of physics, you need a new kind of software.
  • "boot up quake 3"? You scare me, my friend. I'm sure some people would like to run quake as their OS, but...
  • Gotta agree with that. I also want to know what they are hoping to discover about the first 1/100th of a second anyway. We know what happens: there's a bright flash, shitloads of heat, and lots of people die, either immediately or later. What else is there to know about a nuclear explosion? Surely if they're going to spend this amount of money on a supercomputer, they could put it to better use. Bill Gates could always use it for his bubble sort, eh? ;-)

    Now weary traveller, rest your head. For just like me, you're utterly dead.
  • by FattMattP ( 86246 ) on Thursday June 29, 2000 @03:19AM (#969212) Homepage
    How come almost every time there is a post about supercomputers, they are being used for nuclear bomb explosion simulations? While I realize that this is a better thing to simulate than to actually do, aren't these computers being used for anything else? Is it that the people who these computers are being built for only want them for those purposes? I just think it would be great to see an announcement mention that a supercomputer would be used for analyzing weather patterns, help with the human genome mapping effort, or something else, well, different. :-)

    At least we can run our own weather simulations at home with the Casino-21 project [rl.ac.uk]. How long until a distributed nuclear simulation project? I guess that wouldn't happen because of "security concerns," though.

  • No, a 3D game is just about the most stressful thing you can do to the computer.
    A) A GeForce2 GTS can render a hell of a lot more triangles than the proc (even a 1GHz one) can feed it. Sure, the geometry acceleration helps out a bit, but most games don't use it yet. Thus this racing game (no racing games that I know of use the geometry engine in D3D or OpenGL) is definitely a good indicator of the performance.
    B) The kernel compile is a crappy benchmark. Given that the source tree is some 75 megs, that it does nothing with the FPU, and that it is much more dependent on bus bandwidth due to the nature of the operation, it doesn't make for a very good benchmark.
    But in the end, all that matters in a benchmark is how well it does what YOU'RE doing. If you want to test raw proc speed, you'll use a synthetic benchmark that just does math ops. If you compile all day, then the kernel compile is a perfectly valid benchmark. If you run 3D games, then the 3D game is a great benchmark.
  • Of course I thought of all that. I simply wanted to point out that a lot of people were looking at this from a "the glass is half empty" point of view.


    #VRML V2.0 utf8
  • Very fast computers are certainly important for nuclear detonation simulations, but one must keep in mind that the simulation is only useful if you can compare it to what you are trying to simulate. Though banning all testing has its political merits, eventually if you want to know whether your model is any good, you're going to have to compare it to experiment.

    Some aerospace critics lay part of the blame for recent rocket failures on just this point: that too much emphasis is being put on rocket simulation at the expense of actually building prototypes and testing them. Certainly it is cheaper to simulate them, but you can't skip too many prototype iterations in the design phase.
