Comment Re:wrong (Score 1) 211

Either Mt. Pinatubo or Mt. St. Helens was far larger than that in terms of energy, and vastly more effective at coupling debris into the upper atmosphere. Add to that the large amounts of sulfur compounds they emitted. So where was the massive weather disruption or global cooling (or warming, for that matter)? It didn't happen. It hasn't happened, not then and not even with Krakatoa or other massive eruptions below Yellowstone or Mt. Toba scale.

Both Pinatubo and Krakatoa had noticeable climatic consequences, but those surface effects lasted only a few years. (Krakatoa probably affected ocean heat content for decades afterward.) Tambora helped cause "the year without a summer" in 1816.

16 nukes wouldn't do much, but a large number of nuclear detonations could cause a nuclear winter. For the climatic consequences of that, see this paper.

Comment Re:Why (Score 1) 96

Exascale computers would be very helpful for climate modeling. Right now climate models run at coarser resolution than weather models, because they have to be integrated over much longer periods of time. That means they can't resolve clouds directly and instead fall back on statistical parameterizations of average cloud behavior, which is a major bottleneck in improving the accuracy of climate models. They're just now moving from 100 km to 10 km resolution for short simulations. With exascale machines they could move to 1 km resolution and build a true cloud-resolving model that can be run on century timescales.
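To see why that last jump needs exascale, here's a back-of-envelope sketch. The cubic scaling rule is a common rule of thumb I'm assuming here, not something stated in the comment: halving the horizontal grid spacing quadruples the number of cells (two horizontal dimensions) and, via the CFL stability condition, also requires roughly twice as many timesteps.

```python
def relative_cost(res_km, base_km=100.0):
    """Compute cost relative to a base_km-resolution model.

    Assumes cost ~ (base/res)^3: a factor of (base/res)^2 from the
    two horizontal dimensions, and one more factor of (base/res)
    from the shorter timestep forced by the CFL condition.
    """
    refinement = base_km / res_km
    return refinement ** 3

for res in (100, 10, 1):
    print(f"{res:>3} km grid: ~{relative_cost(res):,.0f}x the compute")
# 100 km -> 1x, 10 km -> 1,000x, 1 km -> 1,000,000x
```

So each 10x refinement costs roughly 1000x the compute, which is about the gap between today's petascale machines and an exascale system.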

Comment Re:Supercomputers are pretty useless (Score 4, Informative) 125

True, there are some things supercomputers can do well, but the same effect can be reached with distributed computing, which, in addition, makes the individual CPUs useful for a range of other things. Basically, building supercomputers is pretty stupid and a waste of money, time and effort.

People don't build supercomputers for no reason, especially when HPC eats up a large part of their budget.

The main application of supercomputers is numerically solving partial differential equations on large meshes. If you try that with a distributed setup, the latency will kill you: the processors have to talk constantly to exchange information across the domain.

As someone pointed out, modern supercomputers are built much like distributed systems, often from commodity processors; they look like (and are) giant racks of processors. The difference is that they have very fast, low-latency interconnects between those processors.
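A rough sketch of why that latency difference matters for a tightly coupled PDE solver. The latency figures and compute rate below are illustrative assumptions of mine, not measurements: roughly a microsecond for an HPC fabric versus tens of milliseconds between volunteer nodes over the internet.

```python
# Illustrative assumptions (not from the original comment):
INTERCONNECT_LATENCY_S = 1e-6   # ~1 us, typical HPC fabric
INTERNET_LATENCY_S = 50e-3      # ~50 ms between distributed volunteer nodes

def halo_exchange_overhead(steps_per_sec, latency_s, exchanges_per_step=1):
    """Fraction of wall time spent waiting on latency, assuming each
    timestep ends with a synchronous boundary (halo) exchange with
    neighboring ranks before the next step can start."""
    compute_time = 1.0 / steps_per_sec
    wait_time = exchanges_per_step * latency_s
    return wait_time / (compute_time + wait_time)

# Suppose each node could compute 1000 timesteps/second in isolation.
for name, lat in [("HPC fabric", INTERCONNECT_LATENCY_S),
                  ("internet", INTERNET_LATENCY_S)]:
    print(f"{name}: {halo_exchange_overhead(1000, lat):.1%} waiting")
```

Under these assumptions the HPC nodes waste a negligible fraction of their time, while the internet-connected nodes spend almost all of it stalled on communication, regardless of how fast each individual CPU is.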
