
Comment Re:Slashdot Officially Sucks (Score 1) 86

Hehe - please don't label me a conspiracy theorist! ;-)

This is why I wanted to see the discussion - because my own intuition (which I totally agree is not based on any real world experience of such an event) led me to think that the ice hole wasn't right. Unfortunately, everyone was too damn busy making Soviet Russia meme jokes to actually talk about the physics involved...

But - we've now been able to have a bit of good discourse here in this thread and my understanding has definitely increased from the posts of others...

Comment Re:Slashdot Officially Sucks (Score 1) 86

Cool - I'm ok with that - that's why I came here to see some discussion ;-)

Mind providing some insight on why it wouldn't have? The car "analogy" above does give a good "feel" for why that hole wouldn't have been larger (although the terminal velocity of a rock would be somewhat higher than a car).

A bit of math / physics here would be insightful....
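Since you asked for some math: here's a minimal sketch of the terminal-velocity comparison, assuming made-up but plausible numbers (a ~1 m stony sphere at ~3000 kg/m^3 with Cd ~0.47, versus a tumbling ~1500 kg car with ~8 m^2 of presented area and Cd ~1.0). The point is just that the rock falls faster than the car, but nowhere near cosmic velocity.

```python
import math

def terminal_velocity(mass_kg, drag_coeff, area_m2, rho_air=1.225, g=9.81):
    """v_t = sqrt(2*m*g / (rho * Cd * A)): speed where drag balances gravity."""
    return math.sqrt(2.0 * mass_kg * g / (rho_air * drag_coeff * area_m2))

# Hypothetical 1 m diameter stony sphere (density ~3000 kg/m^3, Cd ~0.47)
radius = 0.5
rock_mass = 3000.0 * (4.0 / 3.0) * math.pi * radius**3   # ~1570 kg
v_rock = terminal_velocity(rock_mass, 0.47, math.pi * radius**2)

# Hypothetical tumbling car (~1500 kg, Cd ~1.0, ~8 m^2 presented area)
v_car = terminal_velocity(1500.0, 1.0, 8.0)

# Rock comes out around ~260 m/s, the car around ~55 m/s
print(f"rock: {v_rock:.0f} m/s, car: {v_car:.0f} m/s")
```

So the rock is a few times faster than the car at terminal velocity, but it's still two orders of magnitude slower (in energy terms) than the tens of km/s it had before the atmosphere slowed it down.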

Comment Slashdot Officially Sucks (Score 5, Insightful) 86

After reading the summary and scanning the article (in true Slashdot fashion!) I went to look at the comments... and they are all complete drivel. Tons of stupid jokes and no actual discussion of the event. What the hell has happened here??

Anyway - back on topic: Does anyone else feel like that rock is WAY too big to have only left a 6m hole in the ice? That rock impacting the ice/water would have been an enormous event... it would have vaporized a ton of water and blown the ice away for at least several hundred feet.

Something doesn't add up here.
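For what it's worth, a back-of-envelope energy budget (with entirely made-up but plausible numbers: a ~600 kg fragment at a ~250 m/s terminal velocity) suggests the "vaporized a ton of water" intuition may be off by a couple orders of magnitude:

```python
# Back-of-envelope: how much water could a terminal-velocity impact vaporize?
# Assumed values (hypothetical): 600 kg fragment hitting at ~250 m/s.
mass_kg = 600.0
speed_ms = 250.0
kinetic_energy = 0.5 * mass_kg * speed_ms**2          # ~19 MJ

# Energy to take 1 kg of water from 0 C to steam:
# heating to boiling (4186 J/kg/K * 100 K) plus latent heat (2.26 MJ/kg)
energy_per_kg = 4186.0 * 100.0 + 2.26e6               # ~2.68 MJ/kg

water_vaporized_kg = kinetic_energy / energy_per_kg   # only a few kg
print(f"{kinetic_energy/1e6:.1f} MJ -> at most {water_vaporized_kg:.1f} kg of steam")
```

Even if every joule went into steam (it wouldn't), that's single-digit kilograms of water, not tons, which would make a modest hole in the ice a lot more believable.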

Comment Re:Benchmarks are bad metrics (Score 1) 258

As an aside - we just bought a couple of OCZ RevoDrive 3 X2 cards (1TB each) and have been using them and benchmarking them over the last couple of days for scientific data analysis... DAMN they are fast! We're getting about 1.2GB/s (yes, Bytes with a big B!) on sequential reads (which happens to be our main use case).
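If anyone wants to sanity-check numbers like that themselves, here's a rough sketch of a sequential-read measurement (hypothetical scratch file; a real benchmark would drop the page cache or use O_DIRECT first so you measure the device rather than RAM):

```python
import os
import time

def sequential_read_gbps(path, block_size=1 << 20):
    """Read a file front to back in 1 MiB chunks and return GB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e9

# Demo against a small scratch file; results from a cached file mostly
# reflect memory bandwidth, not the SSD.
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(64 * 1024 * 1024))  # 64 MiB test file
print(f"{sequential_read_gbps('scratch.bin'):.2f} GB/s")
os.remove("scratch.bin")
```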

The only downside we've found is spotty Linux driver support (Linux, along with OS X, is all we use... no Windows here). We have to actually use the commercial drivers for the Vertex and ZDXL... which are only precompiled for specific Linux kernel versions. Other than that the cards have worked great!

A bit back on topic - if this "turbo" mode were something any app could invoke somehow (with an API call for instance) then this wouldn't be a problem... but since they've only made it work with explicitly named executables it feels a bit underhanded....

Comment Re: Storage. (Score 1) 232

The only drives in my work machine are 3x512GB SSDs in a RAID0 array. This is to deal with datasets in the 300GB range that my code outputs as it runs on supercomputers (10,000+ cores).

When you're trying to make an animation that needs to read all 300GB serially through a file like that, SSDs are a godsend.

Just last week I purchased a new workstation for tens of thousands of dollars (don't want to put the exact amount on here). It contains a 1TB "Revo" PCIe card (extremely fast SSD chips that plug into PCIe), 512GB of RAM, and an NVIDIA Quadro K6000 and a K5000... all to accelerate this same workload...

Just because you can't think of workloads that would benefit from solid state drives doesn't mean they don't exist!

Comment Re: Storage. (Score 1) 232

I actually agree with this. I do large compiles all day long and when I switched to a 3xSSD RAID0 array I didn't see any improvement in compile time (but it did speed up everything else I do with large data loads). This is on a 12 core Mac Pro... so plenty of horsepower to keep the disks working during a compile.

In order to speed up compiling I just set up a 150+ core distcc array using 13 Mac Pros... THAT sped my compiling up by an order of magnitude!

Moral of the story: compiling is CPU bound.
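The order-of-magnitude claim is roughly what Amdahl's law predicts. Here's a sketch with a hypothetical split where 95% of the build parallelizes (compilation) and 5% is serial (configure, linking, dependency scanning):

```python
def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's law: overall speedup is capped by the serial fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Hypothetical build: 95% parallel compilation, 5% serial work.
local_12 = amdahl_speedup(0.95, 12)      # local 12-core box
distcc_150 = amdahl_speedup(0.95, 150)   # 150-core distcc farm
print(f"12 cores: {local_12:.1f}x, 150 cores: {distcc_150:.1f}x")
```

With those (assumed) numbers you get roughly 7.7x locally versus roughly 17.8x on the farm, and no amount of extra cores gets you past 20x: the serial 5% is the ceiling.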

Comment Simulation Visualization (Score 1) 41

I write massively parallel scientific simulation software for a living (the kind that runs on the biggest machines in the world)... and trying to come up with a way to display GBs or TBs of information from some of our largest simulations can be _tough_.

We use several open source packages ( Mostly and ), but most of our best visualizations are actually done using a commercial package ( )

For some examples check out the YouTube video here:

(That's me talking in the video). Those aren't necessarily our best visualizations - just some that happen to be on YouTube...

We find that the reactions to these simulations are mixed. They are certainly eye-pleasing... but sometimes if you go too far in making it look good it can actually turn scientists off. They will start to think that it looks "too good to be true" (I literally had a senior scientist in a room of 200 stand up at the end of one of my presentations and proclaim that "This is too good to be true!"). Because of this we try to do just enough visualization that you can see all of the features of the simulation and understand what's happening, without going overboard.

You have to realize that a lot of scientists still remember the days when they created line plots _by hand_ for publications! I suspect that as young scientists come up through the ranks this feeling that "slick graphics = not real" will go away.

At least, I hope....

Comment Re:'Simple really... (Score 1) 775

Google Glass actually _helps_ here. If you were wearing one, you could show them exactly where you were at that time....

This is essentially "counter" surveillance that can prove all sorts of stuff about your innocence.

Have you seen all of the videos from Russian car dashcams? Do you know why they have those? To _protect_ themselves from the police (and other drivers).

The principle is the same here...

Comment Re:A pellet stress simulation? (Score 1) 84

Pellets, as manufactured, are _very_ smooth. This is a decent overview I just found from Google:

They start life as powder and then are packed in a way that makes them smooth.

However, just as in any kind of manufacturing: defects happen. A working reactor will have over a million pellets in it. Somewhere in there one is going to be misshapen.

Some of what we can do is run a ton of statistically guided calculations to understand what kind of safety and design margins need to be in place to keep problems from occurring. We can also look at modifying the design of the pellets to ensure safer operation. Both of these things are very difficult (and costly) to do experimentally.
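To give a feel for what "statistically guided calculations" means, here's a toy Monte Carlo sketch. Everything in it is hypothetical (the lognormal defect distribution, the linear stress response standing in for a full fuel-performance calculation, and the 250 MPa limit); the real codes are vastly more detailed, but the margin-setting logic is the same:

```python
import random

random.seed(42)

def peak_stress_mpa(defect_mm):
    # Hypothetical response surface standing in for a full fuel-performance
    # simulation: predicted peak stress rises with defect size.
    return 120.0 + 45.0 * defect_mm

# Hypothetical manufacturing variability: lognormal defect sizes in mm.
samples = [random.lognormvariate(-1.0, 0.5) for _ in range(100_000)]
stresses = sorted(peak_stress_mpa(d) for d in samples)

# Statistically guided margin: compare a high quantile of predicted
# stress against an assumed 250 MPa material limit.
p999 = stresses[int(0.999 * len(stresses)) - 1]
margin = 250.0 - p999
print(f"99.9th percentile stress: {p999:.0f} MPa, margin: {margin:.0f} MPa")
```

With a million pellets in a core, it's exactly these high quantiles (not the average pellet) that the design margin has to cover.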

My lab (INL) does a lot of experimental fuel work... but we use these detailed simulations to guide the experiments so we can use our money more wisely. It literally takes years to develop a new fuel form, manufacture it, cook it in an experimental reactor, let it cool down, slice it open and see what happened. Using these detailed simulations we can do a lot of that "virtually" to help them decide on experimental parameters so that at the end of that whole sequence they have a bunch of _very_ good experimental results instead of half of them just being failures...

Also, we do actually have a bunch of detailed experimental results to compare our simulations to. Even with this fidelity of modeling we are still not able to perfectly capture what happens in all of those experiments. Even more detailed models (like the multiscale one in the video) need to be developed to be able to truly predict all the complex phenomena that go on in nuclear fuel.

There is still a LOT more work to do...

Comment Re:A pellet stress simulation? (Score 1) 84

Certainly the nuclear reactor industry has done "just fine" without these detailed calculations for the last 60 years. Where "just fine" is: "We've seen stuff fail over the years, learned from it, and kept tweaking our designs and margins to take it into account". They have used simplified models to get an idea of the behavior and it has worked for them (insofar as the reactors run safely and reliably).

However, the "margins" are the name of the game here. If you can do more detailed calculations that take into account more physics and geometry you can reduce the margins and provide a platform for creating the next reactor that is both more economical and safer. If you can increase the operating efficiency of a nuclear reactor by even 1% that is millions of dollars. If you can keep something like Fukushima from happening that is even more money (some would say "priceless").
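The "millions of dollars" claim is easy to sanity-check with rough (hypothetical) numbers for a single large plant:

```python
# Rough annual value of a 1% efficiency gain at one large plant.
# All numbers hypothetical: 1 GW(e) plant, 90% capacity factor,
# wholesale power at $40/MWh.
capacity_mw = 1000.0
capacity_factor = 0.90
hours_per_year = 8760.0
price_per_mwh = 40.0

annual_mwh = capacity_mw * capacity_factor * hours_per_year
extra_revenue = annual_mwh * 0.01 * price_per_mwh
print(f"1% more output ~= ${extra_revenue/1e6:.1f}M per year")
```

That's on the order of $3M per year per plant, every year, from a single percentage point, before you even count a fleet of reactors.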

The approximate answers (using simplified models) are good - they are in the ballpark. But if you compare their output to experimental output (which we have a LOT of... and it is VERY detailed) the simplified models get the trends right... but miss a lot of the outlier data. That outlier data is important... that's where failure happens. With these detailed models we get _much_ closer to the experimental data.

To get even closer to the experimental data we have to get even more detailed. The movie showed some of our early work in multi-scale simulation: where we were doing coupled microstructure simulation along with the engineering scale simulation. That work is necessary to get the material response correct to get even closer to the experimental data.

Ultimately, if we can do numerical experiments that we have a great amount of faith in, it will allow us to better retrofit existing reactors to make them more economical and safer, and to design the next set of reactors.
