Comment Re:Yes, you've increased the precision (Score 1) 89

You may turn CP off, but you're sure as hell still parameterizing (microphysics!! And just how many choices do you have for those fun knobs!). And 4 km is still pretty damned coarse for thunderstorms. But yeah, there is clearly more to surface wind intensity than eyewall replacement; that's just an example of something that is thought to play a role and that is definitely handled better with finer meshes (tropical is not my expertise).

We're doing much better, but we've still got a lot of work to do to get hurricanes right - some of it is with the models, some of it (more, in my opinion) has to do with what we feed the models. Hoping GOES-R will be a big help. Pray for a launch that doesn't blow up on the launchpad.

Good riddance to cumulus parameterization; when that day comes in everything but climate models (let's not get too carried away), I will dance with glee on reams of printouts of CP code. Microphysics is a big enough kludge, but I don't think we're going to be following raindrops and snowflakes around in our models for a long, long time.

Comment Re:Yes, you've increased the precision (Score 1) 89

The path of a hurricane is somewhat unpredictable (been known to turn 90 degrees for no apparent reason).

We've gotten much better at predicting the paths of hurricanes, which are, to a first approximation, steered by larger-scale winds. Those winds have become easier to predict over time because of improved observational data feeding the models, and because they are easier to resolve and are governed by processes that don't require lots of parameterization (unlike convective clouds and precipitation). What we continue to struggle with is the strength of the surface winds over time, which of course is what you most want to get right at landfall. There are small-scale processes going on within hurricanes (involving what meteorologists call eyewall replacement cycles) that modulate surface winds and that are less well understood and much more difficult to forecast correctly. Basically, the smaller the feature, the harder it is to forecast over long time periods.

If I had to choose between getting one or the other right, though, I'd choose path over surface wind strength. Getting out of the way of a Category 2 vs. getting out of the way of a Category 5 is pretty much the same process!

Comment Re:Likely a new gift for the NSA (Score 2) 223

You are basically describing ensemble forecasting, which is very powerful for providing statistically meaningful forecasts where you can intelligently talk about the uncertainty of the forecast, something a single deterministic forecast cannot do.
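
To make that concrete, here is a minimal sketch (plain C, with made-up member values for a single grid point) of the kind of statistic an ensemble gives you that one deterministic run cannot: a mean forecast plus a spread you can interpret as uncertainty.

```c
/* Minimal sketch: ensemble mean and spread for one variable at one point.
   The ten "member" values are invented for illustration. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double members[] = {42.0, 55.0, 47.5, 61.0, 50.2,
                        44.8, 58.3, 49.9, 53.1, 46.4}; /* e.g. 10-m wind, m/s */
    int n = sizeof(members) / sizeof(members[0]);
    double mean = 0.0, var = 0.0;

    for (int i = 0; i < n; i++)
        mean += members[i];
    mean /= n;

    for (int i = 0; i < n; i++)
        var += (members[i] - mean) * (members[i] - mean);
    double spread = sqrt(var / (n - 1)); /* sample standard deviation */

    printf("ensemble mean = %.1f, spread = %.1f\n", mean, spread);
    return 0;
}
```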

In my research, I'm doing single deterministic forecasts to study what happens with tornadoes in supercell thunderstorms, where I am cranking up the resolution to capture flow that is otherwise unresolved. I get one version of a particular storm, which is good for studying certain aspects of storms, but not good for generalizing (that takes lots of simulations).

Both big deterministic simulations and ensembles have their place. Of course, today's big simulation can be the resolution of tomorrow's ensembles! Right now, you can do lots of good science with ensembles. Operationally (weather forecasting) this is basically the new paradigm, although forecasters are slow to change from just looking at the single deterministic GFS and NAM forecasts. The ensemble approach, once we start running hundreds of forecasts at higher resolution than we do today, will transform our forecasting accuracy (and precision). However, it will be limited by the amount of good observational data we can feed the models (otherwise GIGO). This is where remote sensing comes in. GOES-R will be a big help.

It will indeed take people from atmospheric science, computer engineering, software engineering, etc. working together to best exploit exascale machines. NCSA understands this, and that's what makes it (and other similar organizations) great.

Comment Re:Likely a new gift for the NSA (Score 1) 223

One of the biggest problems with the current large-scale HPC machines is that users (like you, but maybe not you specifically) are typically scientists/analysts who write software that does not scale well. Either there need to be better frameworks for you to work within that handle all the grunt work of efficient parallelization and message passing, or every atmospheric physicist needs to be teamed with a computer scientist and a software engineer.

Absolutely agree 100%!

Comment Re:Likely a new gift for the NSA (Score 1) 223

You can say "one of" but you can't say "the fastest" petascale machines, my friend

http://www.hpcwire.com/2012/11...

I should have added "on a college campus".

My main point is that just throwing more cores at "mostly MPI" weather models is not sustainable. We are going to need to be much smarter about how we parallelize.

Comment Re:Likely a new gift for the NSA (Score 5, Informative) 223

Weather guys want this after NSA's done.

I'm a weather guy - running cloud model code on Blue Waters, the fastest petascale machine for research in the U.S. I don't think we've managed to get any weather code to run at much more than 1 PF sustained - if even that. So it's not like you can compile WRF, run it with 10 million MPI ranks, and call it a day. Ensembles? Well, that's another story.

Exascale machines are going to have to be a lot different from petascale machines (which aren't all that different topologically from terascale machines) in order to be useful to scientists, and in order to not require their own nuclear power plant to run. And I don't think we know what that topology will look like yet. A thousand cores per node? That should be fun; sounds like a GPU. Regardless, legacy weather code will need to be rewritten, or more likely new models will need to be written from scratch, in order to do more intelligent multithreading as opposed to the mostly-MPI approach we have today.

When asked at the Blue Waters Symposium this May to prognosticate on the future coding paradigm for exascale machines, Steven Scott (Senior VP and CTO of Cray) said we'll probably still be using MPI + OpenMP. If that's the case, we're gonna have to be a hell of a lot more creative with OpenMP.
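
For anyone who hasn't seen the hybrid style: here's a minimal, hypothetical sketch of the MPI + OpenMP pattern - one MPI rank per node (or NUMA domain) owning a slab of the grid, with OpenMP threads splitting the loops inside it, so you pass far fewer messages than with one rank per core. The domain sizes and the "work" are placeholders, not anything from a real model.

```c
/* Minimal hybrid MPI + OpenMP sketch: one rank per slab, threads within the slab.
   Build with something like: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Ask for thread support so OpenMP threads can live inside each MPI rank */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each rank owns a placeholder slab of the domain */
    const int nz = 64, ny = 128, nx = 128;
    double local_sum = 0.0, global_sum = 0.0;

    /* Threads split the slab; stand-in arithmetic where the physics would go */
    #pragma omp parallel for collapse(2) reduction(+:local_sum)
    for (int k = 0; k < nz; k++)
        for (int j = 0; j < ny; j++)
            for (int i = 0; i < nx; i++)
                local_sum += 1.0;

    /* One message per rank instead of one per core */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d ranks, %.0f grid points touched\n", nranks, global_sum);

    MPI_Finalize();
    return 0;
}
```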

Comment Re:Vinyl's growth (Score 1) 278

My only point here isn't really debatable. If, for instance, I want to listen to Wilderness Road's eponymous album, I have to listen to the vinyl, or to somebody's copy of it. And it's a fucking great album, and I don't need someone telling me that I'm an idiot for wanting to be able to listen to it.

Some real nutty music fans still spin 78s, can you believe it? A lot of music on those shellac platters never made it to any other format.

You cut yourself out of a surprisingly large amount of music (I'm talking about old stuff mostly) if you don't have a turntable.

Comment Vinyl's growth (Score 5, Interesting) 278

I'm a self-proclaimed "audiophile," but not in the annoying, trust-my-ears-only way that plagues the hobby (I'm a scientist, dammit). I have a nice tube amp, great speakers, a subwoofer, etc.... and I have a turntable as well (and a network-enabled player + nice DAC). Anyhoo.... I can speak to the non-hipster side of things. Yes, some of the growth of vinyl has a faddish aspect to it. But keep in mind, many musicphiles and audiophiles never stopped collecting and buying vinyl, even through the meteoric rise of the CD.

If you are a major music fan (and do not have an unlimited supply of pirated needledrops from the internet), a turntable is essential. A lot of obscure stuff was never released on CD. A lot of older material released on CD sounded (and continues to sound) dreadful due to the mad scramble to ride the CD wave; nth-generation tapes, some equalized for vinyl, were used as the source material. Thankfully, a lot of what sells these days is remastered versions of old material from the original master tapes (not copies). You can be cynical about this (say the major labels are just milking old warhorses) while also acknowledging that digital audio technology has improved astoundingly since the late '80s and '90s. What does this have to do with vinyl? Well, vinyl can sound really good if done well. I won't argue that it is a better medium than digital; it simply isn't. But it has its own charms.

I have bought vinyl reissues that were mastered very well, where the vinyl was quiet and free of surface noise - but about a third of the time I get burned, either with lousy mastering (sibilance and related issues - and I have a very good microline cart) or, more commonly, with ticks and pops in shrinkwrapped new vinyl (even after a run through a wet clean). This is the way it has always been and will always be with vinyl.

A primary motivation I have for buying new music on vinyl is to get recordings that haven't been as dynamically squashed in the digital mastering process. While vinyl releases can be heavily compressed as well, as a rule they tend to be mastered with more dynamic range than the digital version (you could argue that this is partly, or mostly, due to physical limitations of the vinyl medium). And yes, I acknowledge that most vinyl is either digitally sourced or goes through an ADA conversion.
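
If you want to put a rough number on "squashed," one crude proxy is the crest factor (peak-to-RMS ratio in dB) of the program material; heavily limited masters have low crest factors. Here's a tiny sketch with invented sample values - not a real DR-meter algorithm, just the arithmetic.

```c
/* Crude crest factor (peak / RMS, in dB) over a handful of invented samples. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x[] = {0.10, -0.35, 0.80, -0.20, 0.05, 0.55, -0.70, 0.15};
    int n = sizeof(x) / sizeof(x[0]);
    double peak = 0.0, sumsq = 0.0;

    for (int i = 0; i < n; i++) {
        double a = fabs(x[i]);
        if (a > peak)
            peak = a;
        sumsq += x[i] * x[i];
    }
    double rms = sqrt(sumsq / n);
    printf("crest factor = %.1f dB\n", 20.0 * log10(peak / rms));
    return 0;
}
```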

But mostly I continue to buy vinyl because it's fun - it's part of a hobby I enjoy very much. Spending hours just sitting "in the sweet spot" and listening to music (from any source - digital, tape, vinyl, or whatever) is something I enjoy. So while people scoff at the vinyl "revival," I'm just glad to see there are more choices out there for getting good-sounding music.

Comment Re:Nice and all, but where's the beef? (Score 1) 127

Take a look, there is some neat stuff going on with Blue Waters: https://bluewaters.ncsa.illino...

Most science is not breakthroughs; it's usually slow progress, with many failed experiments.

These computers are facilitating a lot of good science, and an increase like this in our computational infrastructure for research is great news. I do wonder how they are going to power this beast and what kind of hardware it will be made of. 300 PFlops is pretty unreal with today's technology.

Comment Re:Some technical info for slashdotters (Score 1) 61

As a visualization guy, it always makes me happy to see such a good use of visualization. Thanks for providing some extra technical details here! A couple of questions, though:

1) What grid type does your simulation code use? If it's a regular grid, have you considered switching to something more adaptive like AMR or unstructured grids?

2) Since I/O is your main bottleneck, have you considered further decimating your output and visualizing in situ to fill in the gap? I suspect your visual analysis is too complicated for current in situ techniques to cover everything you want to do, but I'd like to hear your thoughts on it.

The grid is isotropic (delta x = delta y = delta z) for most of the storm, and then an analytical stretch function determines the mesh outside of that region. I used the stretch technique of Wilhelmson and Chen (1982; Journal of the Atmospheric Sciences).
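
For readers unfamiliar with stretched meshes, here's a toy sketch of the general idea along one coordinate: constant spacing inside the region of interest, then spacing that grows by a fixed ratio outside it. This is just a geometric stretch with made-up numbers, not the actual Wilhelmson and Chen (1982) formula.

```c
/* Toy stretched-mesh generator: constant spacing inside, geometric growth outside.
   All numbers are illustrative; this is not the Wilhelmson-Chen stretch function. */
#include <stdio.h>

int main(void)
{
    const double dz_inner = 30.0;  /* m, isotropic inner region */
    const int n_inner = 334;       /* ~10 km of constant spacing */
    const int n_outer = 50;        /* stretched levels beyond that */
    const double ratio = 1.06;     /* hypothetical growth factor per level */

    double z = 0.0, dz = dz_inner;
    for (int k = 0; k < n_inner + n_outer; k++) {
        if (k >= n_inner)
            dz *= ratio;           /* begin stretching outside the inner region */
        z += dz;
    }
    printf("top of mesh near %.0f m with %d levels\n", z, n_inner + n_outer);
    return 0;
}
```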

AMR has its benefits but adds a lot of complexity, and I tend to want to go toward less complexity, not more. I am more interested in these new hexagonal grids (see: NCAR's MPAS), which have very nice properties. I predict MPAS will be the Next Big Thing and will eventually apply to both large-scale (say, climate) and mesoscale (like CM1) models.

In-situ visualization is something we have played with. But I don't see a huge benefit to me, other than showing that it can be done. The simulation I reported on was not the first try, and visualizing it with a volume renderer during the simulation would have been cool, but consider that I've had the data on disk for over six months and have barely begun to mine it. So really, it's what you do with the data once it's on disk that matters... plus you can visualize it in all sorts of different ways.

I have other ways to see what's going on during the simulation that are near-real-time. All I need to know during the simulation is that the storm is not drifting out of the box and that it's doing something that looks reasonable. I do this by parsing the text output of the model (which spits out global statistics periodically), and I have a way to view slices of the most recently written data that works just fine.
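
As an aside, that kind of lightweight monitoring is easy to roll yourself. Here's a hypothetical sketch: the log format, field names, and values below are invented, not CM1's actual output, but the idea - scrape a few global statistics per output step and eyeball them - is the same.

```c
/* Sketch: pull a couple of global statistics out of (invented) model log lines. */
#include <stdio.h>

int main(void)
{
    const char *lines[] = {
        "step=1200 time=3600.0 wmax=45.2 qvmax=0.0148",
        "step=1210 time=3630.0 wmax=47.9 qvmax=0.0151",
    };
    int nlines = sizeof(lines) / sizeof(lines[0]);

    for (int i = 0; i < nlines; i++) {
        int step;
        double t, wmax, qvmax;
        if (sscanf(lines[i], "step=%d time=%lf wmax=%lf qvmax=%lf",
                   &step, &t, &wmax, &qvmax) == 4)
            printf("t = %6.0f s   wmax = %5.1f m/s   qvmax = %.4f\n",
                   t, wmax, qvmax);
    }
    return 0;
}
```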

Comment Re:Some technical info for slashdotters (Score 1) 61

I got about halfway through the video before the kids interrupted me (and it). So let me just ask:

Did your model take into account the energy gathering and discharge that would show a multi-amp, million-volt DC discharge? Because the energy implications of that are going to be enormous to the model.

Did it also have a mechanism that generated the lightning discharges of the storm? Because again, the lightning discharges are going to affect the electrical energy available to help / hinder the tornado.

No lightning in the model. I know of some research on modeling lightning in supercells, but I'm pretty sure those are one-way models: the lightning occurs based upon what we know about inductive and non-inductive charging mechanisms, and flashes can happen - but they do not feed back into the storm. I doubt they feed back appreciably into real storms, either. Even though a lot of energy is released with lightning, so much more is released through latent heating (phase changes between solid, liquid, and gas), and that is what really drives thunderstorms.

Comment Re:Some technical info for slashdotters (Score 1) 61

Trying to make a model that only focuses on the critical parts of a storm would be like trying to make a model of a car that only focused on one piston.

Good insight in all your comments. Actually, tornado modeling has a long history. Think of those vortex chambers you see at some museums; pioneers like Neil Ward studied tornadoes in this manner. Now, think of a numerical model of one of those chambers. This type of work has been done by folks like David Lewellen at West Virginia University.

What makes this new simulation special is that we've simulated the entire storm and allowed "things to evolve naturally" (for a very judicious interpretation of what "natural" is). One could argue that the tornado is actually not the most interesting part of the simulation, but that the processes leading to its formation, and what is going on in the supercell thunderstorm to support the tornado's long life, are the most interesting parts.

Comment Re:Some technical info for slashdotters (Score 1) 61

At the resolution I'm running at: decades, and this would require improvements in algorithms / hardware utilization, as well as finding a way to build and utilize machines on the exascale. Going from petascale to exascale is going to require methods/topologies that don't exist yet, I think (not just GPUs for example). And exascale power requirements are a real problem with existing hardware.

At "convection resolving" simulations (on the order of 1 km resolution - not 30 meters!) where a tornado-like vortex might form and serve as a proxy for a real tornado: That's much closer, and there are different efforts going on right now aiming towards that goal. The biggest hurdle I see is getting enough observational data to feed to the models; otherwise you end up with garbage in / garbage out. Remote sensing is the way to go with some quantites but with others there is really no substitue for in-situ measurements, and instrumentation at the scale we need is expensive.
