Comment Re:Vinyl's growth (Score 1) 278

My only point is really undebatable. If, for instance, I want to listen to Wilderness Road's eponymous album, I have to listen to the vinyl, or somebody's copy of it. And it's a fucking great album and I don't need someone telling me that I'm an idiot for wanting to be able to listen to it.

Some real nutty music fans still spin 78s, can you believe it? A lot of music on those shellac platters never made it to any other format.

You cut yourself out of a surprisingly large amount of music (I'm talking about old stuff mostly) if you don't have a turntable.

Comment Vinyl's growth (Score 5, Interesting) 278

I'm a self-proclaimed "audiophile" but not in the annoying, trust-my-ears-only way that plagues the hobby (I'm a scientist, dammit). I have a nice tube amp, great speakers, a subwoofer, etc., and I have a turntable as well (plus a network-enabled player and a nice DAC). Anyhoo... I can speak to the non-hipster side of things. Yes, some of the growth of vinyl has a faddish aspect to it. But keep in mind that many music lovers and audiophiles never stopped collecting and buying vinyl, even through the meteoric rise of the CD.

If you are a major music fan (and do not have an unlimited supply of pirated needledrops on the internet), a turntable is essential. A lot of obscure stuff was never released on CD. A lot of older material that was released on CD sounded (and continues to sound) dreadful due to the mad scramble to ride the CD wave; nth-generation tapes, some equalized for vinyl, were used as the source material. Thankfully, a lot of what is selling these days is remastered versions of old material from original master tapes (not copies). You can be cynical about this (say the major labels are just milking old warhorses), and you can also acknowledge that digital audio technology has improved astoundingly since the late 80s and 90s. What does this have to do with vinyl? Well, vinyl can sound really good if done well. I won't argue that it is a better medium than digital; it simply isn't. But it has its own charms.

I have bought vinyl reissues that were mastered very well, where the vinyl was quiet and free of surface noise - but about a third of the time I get burned with either lousy mastering (sibilance and related issues - and I have a very good microline cart) or, more commonly, ticks and pops in shrinkwrapped new vinyl (even after a run through a wet clean). This is the way it has always been and will always be with vinyl.

A primary motivation I have for buying new vinyl releases of new music is to acquire recordings that haven't been as dynamically squashed in the digital mastering process. While vinyl releases can be very dynamically compressed as well, as a rule they tend to be mastered with more dynamic range than the digital version (you could argue that this is partly, or even mostly, due to physical limitations of the vinyl medium). And yes, I acknowledge that most vinyl is either digitally sourced or goes through an A-to-D-to-A conversion.

But mostly I continue to buy vinyl because it's fun - it's part of a hobby I enjoy very much. Spending hours just sitting "in the sweet spot" and listening to music (from any source - digital, tape, vinyl, or whatever) is something I enjoy. So while people scoff at the vinyl "revival," I'm just glad to see there are more choices out there for getting good-sounding music.

Comment Re:Nice and all, but where's the beef? (Score 1) 127

Take a look; there is some neat stuff going on with Blue Waters: https://bluewaters.ncsa.illino...

Most science is not breakthroughs; it's usually slow progress, with many failed experiments.

These computers are facilitating a lot of good science, and increases like this in our computational infrastructure for research are great news. I do wonder how they are going to power this beast and what kind of hardware it will be made of. 300 PFlops is pretty unreal with today's technology.

Comment Re:Some technical info for slashdotters (Score 1) 61

As a visualization guy, it always makes me happy to see such a good use of visualization. Thanks for providing some extra technical details here! A couple of questions, though:

1) What grid type does your simulation code use? If it's a regular grid, have you considered switching to something more adaptive like AMR or unstructured grids?

2) Since I/O is your main bottleneck, have you considered further decimating your output and visualizing in situ to fill in the gap? I suspect your visual analysis is too complicated for current in situ techniques to cover everything you want to do, but I'd like to hear your thoughts on it.

It's isotropic (delta x = delta y = delta z) for most of the storm, and then uses an analytical stretch function to determine the mesh outside of that region. I used the stretch technique of Wilhelmson and Chen (1982; Journal of the Atmospheric Sciences).
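
If you're curious, here's a minimal sketch in Python of the general idea - uniform spacing in the inner region, analytically stretched outside it. The numbers and the simple geometric stretch are made up for illustration; the actual model uses the Wilhelmson and Chen formulation:

```python
import numpy as np

def grid_spacing(n, n_inner, dx_inner, stretch=1.06, dx_max=500.0):
    """Return grid spacings (meters): uniform dx_inner for the inner n_inner
    points, then geometrically stretched outward, capped at dx_max.
    This is a generic stretch, not the Wilhelmson & Chen (1982) function."""
    dx = np.full(n, dx_inner)
    for i in range(n_inner, n):
        dx[i] = min(dx[i - 1] * stretch, dx_max)
    return dx

# Example: 30 m isotropic core, stretching toward the lateral boundary
dx = grid_spacing(n=200, n_inner=120, dx_inner=30.0)
x = np.concatenate(([0.0], np.cumsum(dx)))   # grid edge coordinates
print(dx[:3], dx[-3:], x[-1])
```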

AMR has its benefits but adds a lot of complexity, and I tend to wish to go towards less complexity, not more. I am more interested in these new hexagonal grids (see: NCAR's MPAS), which have very nice properties. I predict MPAS will be the Next Big Thing and will eventually apply to both large-scale (say, climate) and mesoscale (like CM1) models.

In situ visualization is something we have played with. But I don't see a huge benefit for me, other than showing that it can be done. The simulation I reported on was not the first try, and visualizing it with a volume renderer during the simulation would have been cool, but consider the fact that I've had the data on disk for over six months and have barely begun to mine it. So really, it's what you do with the data once it's on disk that matters... plus you can visualize it in all sorts of different ways.

I have other ways to see what's going on during the simulation that are near real time. All I need to know during the simulation is that the storm is not drifting out of the box and that it's doing something that looks reasonable. I do this by parsing the text output of the model (which spits out global statistics periodically), and I have a way to view slices of the most recently written data that works just fine.
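
Something in the spirit of that text parsing, in Python (the line format and file name are made up for illustration; the model's actual output differs):

```python
import re

# Hypothetical statistics line: "t =  3600.0 s  wmax =  85.3 m/s"
PATTERN = re.compile(r"t\s*=\s*([\d.]+).*?wmax\s*=\s*([\d.]+)")

def monitor(logfile):
    """Scan the model's text output and report max updraft speed over time."""
    with open(logfile) as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                t, wmax = float(m.group(1)), float(m.group(2))
                print(f"t = {t:8.1f} s   wmax = {wmax:6.1f} m/s")

monitor("model_stats.txt")  # file name is just an example
```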

Comment Re:Some technical info for slashdotters (Score 1) 61

I got about halfway through the video before the kids interrupted me (and it). So let me just ask:

Did your model take into account the energy gathering and discharge that would show a multi-amp, million-volt DC discharge? Because the energy implications of that are going to be enormous to the model.

Did it also have a mechanism that generated the lightning discharges of the storm? Because again, the lightning discharges are going to affect the electrical energy available to help / hinder the tornado.

No lightning in the model. I know of some research on modeling lightning in supercells, but I'm pretty sure they are one-way models: the lightning occurs based upon what we know about inductive and non-inductive charging mechanisms, and flashes can happen - but they do not feed back into the storm. I doubt they feed back appreciably into real storms, either. Even though a lot of energy is released by lightning, so much more is released through latent heating (phase changes between solid, liquid, and gas), and that is what really drives thunderstorms.

Comment Re:Some technical info for slashdotters (Score 1) 61

Trying to make a model that only focuses on the critical parts of a storm would be like trying to make a model of a car that only focused on one piston.

Good insight in all your comments. Actually, tornado modeling has a long history. Think of those vortex chambers you see at some museums; pioneers like Neil Ward studied tornadoes in this manner. Now, think of a numerical model of one of those chambers. This type of work has been done by folks like David Lewellen at West Virginia University.

What makes this new simulation special is that we've simulated the entire storm and allowed "things to evolve naturally" (for a very judicious interpretation of what "natural" is). One could argue that the tornado is actually not the most interesting part of the simulation, but that the processes leading to its formation, and what is going on in the supercell thunderstorm to support the tornado's long life, are the most interesting parts.

Comment Re:Some technical info for slashdotters (Score 1) 61

At the resolution I'm running at: decades, and this would require improvements in algorithms / hardware utilization, as well as finding a way to build and utilize machines on the exascale. Going from petascale to exascale is going to require methods/topologies that don't exist yet, I think (not just GPUs for example). And exascale power requirements are a real problem with existing hardware.

At "convection resolving" simulations (on the order of 1 km resolution - not 30 meters!) where a tornado-like vortex might form and serve as a proxy for a real tornado: That's much closer, and there are different efforts going on right now aiming towards that goal. The biggest hurdle I see is getting enough observational data to feed to the models; otherwise you end up with garbage in / garbage out. Remote sensing is the way to go with some quantites but with others there is really no substitue for in-situ measurements, and instrumentation at the scale we need is expensive.

Comment Some technical info for slashdotters (Score 5, Interesting) 61

I wanted to give some info on the technical aspects of getting this to work that might be appreciated by slashdotters.

You can read about the Blue Waters hardware profile here. Our simulation "only" utilized 20,000 of the approximately 700,000 processing cores on the machine. Blue Waters, like all major supercomputers, runs a Linux kernel tuned for HPC.

The cloud model, CM1, is a hybrid MPI/OpenMP model. Blue Waters has 16 cores (or 32, depending on how you look at it) per node. We have 16 MPI processes going, and each MPI rank can access two OpenMP threads. Our decomposition is nothing special, and it works well enough at the scales we are running at.
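
In case you're wondering what "nothing special" means: roughly speaking, each MPI rank owns a patch of the horizontal domain and the full vertical column. Here's a toy slab-decomposition sketch with mpi4py (CM1 itself is Fortran, and the OpenMP threading lives inside each rank's compiled loops, so it isn't shown):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

# Toy global grid sizes; each rank gets a slab of the horizontal domain
NX, NY, NZ = 1024, 1024, 128           # made-up numbers
ny_loc = NY // nprocs                  # assumes NY divides evenly
y0 = rank * ny_loc

# Local array: a strip in y, all of x and the full vertical column
local = np.zeros((NX, ny_loc, NZ), dtype=np.float32)
print(f"rank {rank}: owns y indices {y0}..{y0 + ny_loc - 1}")
```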

The simulation produced on the order of 100 TB of raw data. It is easy to produce a lot of data with these simulations - data is saved as 3D floating-point arrays and only compresses roughly 2:1 in aggregate (some types of data compress better than others). I/O is a significant bottleneck for these types of simulations when you save data very frequently, which is necessary for such detailed work, and I've spent years getting I/O to work well enough that this kind of simulation and visualization was possible.
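
To illustrate the compression point, here is roughly what writing one compressed 3D float field to HDF5 looks like with h5py (just a schematic - the real I/O path is C/Fortran and far more involved):

```python
import os
import numpy as np
import h5py

# Fake 3D field: pure noise compresses poorly, so zero half of it
nz, ny, nx = 128, 512, 512
field = np.random.rand(nz, ny, nx).astype(np.float32)
field[:, :256, :] = 0.0   # give the compressor something to chew on

with h5py.File("sample.h5", "w") as f:
    f.create_dataset("w", data=field, compression="gzip",
                     compression_opts=4, chunks=(16, 64, 64))

# Compare on-disk size to raw size (raw = 128*512*512*4 bytes, about 134 MB)
print(os.path.getsize("sample.h5") / field.nbytes)
```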

The CM1 model is written in Fortran 90/95. The code I wrote to get all the I/O and visualization stuff to work is a combination of C, C++, and Python. The model's raw output format is HDF5, with files scattered about in a logical way. I've written a set of tools that greatly simplify working with the data: an API that accesses the data at a low level but requires the user to do nothing more than request data bounded by Cartesian coordinates.
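
A rough sketch of what that kind of API looks like from the user's side (the function, file name, single-file layout, and uniform-spacing assumption here are hypothetical simplifications, not my actual tools):

```python
import h5py

def read_box(fname, var, x0, x1, y0, y1, z0, z1, dx=30.0):
    """Read a sub-volume of `var` bounded by Cartesian coords (meters),
    assuming one file with uniform spacing `dx` for simplicity.
    The real case maps coordinates onto many files and a stretched mesh."""
    i0, i1 = int(x0 // dx), int(x1 // dx) + 1
    j0, j1 = int(y0 // dx), int(y1 // dx) + 1
    k0, k1 = int(z0 // dx), int(z1 // dx) + 1
    with h5py.File(fname, "r") as f:
        return f[var][k0:k1, j0:j1, i0:i1]   # hyperslab read, not the whole array

# e.g. grab vertical velocity in a 3 km x 3 km x 2 km box around the tornado
w = read_box("sample.h5", "w", 0, 3000, 0, 3000, 0, 2000)
```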

I would have to say the biggest challenge wasn't technical (and the technical challenges are significant), but was physical: Getting a storm to produce one of these types of tornadoes. They are very rare in nature, and this behavior is mirrored in the numerical world. We hope to model more of these so we can draw more general conclusions; a single simulation is compelling, but with sensitivity studies etc. you can really start to do some neat things.

We are now working on publishing the work, which seems to have "passed the sniff test" at the Severe Local Storms conference. It's exciting, and we look forward to really teasing apart some of these interesting processes that show up in the visualizations.

Submission + - Simulated monster EF5 tornado produced by researchers

Orp writes: I am a member of a research team that created a supercell thunderstorm simulation that is getting a lot of attention. Presented at the 27th Annual Severe Local Storms Conference in Madison, Wisconsin, Leigh Orf's talk was produced entirely as high-def video and put on YouTube shortly after the presentation. In the simulation, the storm's updraft is so strong that it essentially peels rain-cooled air near the surface upwards and into the updraft, which appears to play a key role in maintaining the tornado. The simulation was based upon the environment that produced the May 24, 2011 outbreak, which included a long-track EF5 tornado near El Reno, Oklahoma (not to be confused with the May 31, 2013 EF5 tornado that killed three storm researchers).

Comment Re:They used to be called UHF TV tuners (Score 1) 237

I never did that but a long time ago (80s) I did listen to some fascinating conversations broadcast in the clear around 1.7 MHz - just past the AM band - off of a cordless phone somewhere near my neighborhood. I had an old Hallicrafters shortwave radio that weighed nearly as much as I did (even more with the big external speaker). I don't remember the details of the conversations, only that it was mostly stupid stuff as would be expected.

Comment $150k? (Score 1) 220

I do most of my research on supercomputers. "Service Units" (SUs) are the currency on these machines. They are usually either node-hours or core-hours. Typical allocations are in the hundreds of thousands to millions of SUs.

I don't know what formula they used to come up with a dollar value. It would be nice to know, however, as I am in academia, where real-dollar grants get all the attention since they come with that sweet overhead. I'm sure my dean would appreciate the symbolism of getting the college overhead in SUs (and converting them to dollars).
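
Just to illustrate the kind of arithmetic involved (the rate below is completely made up; I have no idea what number they actually used):

```python
# Hypothetical conversion: core-hours (SUs) to dollars at an assumed rate
sus = 3_000_000                 # core-hours; a plausible allocation size
rate = 0.05                     # dollars per core-hour -- a pure guess
print(f"${sus * rate:,.0f}")    # -> $150,000
```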

But seriously, these machines are up 24/7 (unless down for a hardware fault or maintenance), and while I'm sure they draw more current when the CPUs are pegged, if this guy was mining bitcoins with his allocation then really all he did was go against the terms of that allocation. Those SUs would have either been wasted or used up anyway. But you just don't mine bitcoin on federal supercomputers, man. Dick move.

I hope he at least used GPU accelerators with his code, the bastard.

Comment Hate Variable Air Contraption (Score 1) 216

I have had an office in three different buildings on my university's campus. The first office was fine. In the second building I had a situation where the noise was in violation of European Union standards (I had the level measured with an SPL meter) but a couple of dB too low for OSHA. It was maddening; for months I begged facilities to address the issue. The office suite I was in had been converted from a lecture hall, there was a major HVAC hub above my desk, and it turned out they had the pressure flowing through the vents way too high. I wore earplugs a lot.

In the third building I am in, I have a situation where the temperature fluctuates about 15 degrees F daily. Yes, I measured this and plotted it with a little weather tracker. In this case, the thermostat for the office is located in another office. And the university spent hundreds of thousands of dollars to renovate this old building. I guess that's what happens when you always take the lowest bidder.

I am rather sensitive to noise, so I'd rather have the fluctuating temperatures in a quiet office than pleasant temperatures in a noisy office, and I understand that when you remodel you might get weird results like this. But that doesn't stop me from wanting to strangle people.

Comment Re:Does anyone even use Google's office suite? (Score 3, Informative) 89

I use it for "simple" stuff - for instance, it's very convenient to have a place to take notes at meetings (I do a lot of that with my job). Since I always have wifi where I work it's just a matter of opening up the Drive website and creating a new document. And then everything's in one place and it's easy to find stuff with Google's search, which works on document names and document contents.

I do create some "production quality" documents from within the Docs world and export them to PDF or DOCX so I can share them. But these documents are generally simple; the complex stuff I do in LaTeX. I really do not like Word, with its seven thousand ways to frustrate me and the weird layout that I've never really gotten used to since they majorly changed it years ago. LibreOffice and Google's docs editor are nice and relatively simple, and I find them easier to use. But I go back to Word when I have to, which is frequently, since "everyone" seems to use it.

It's convenient to have the ability to open attachments (from Gmail) in Drive/docs for quick viewing, but stuff created in Microsoft's Office doesn't always convert very well.

I fully realize what Google is doing by "sucking me in" to their world and having everything I do stored on their servers. Ever since I bought a Chromebook Pixel and got the 1 TB of Drive space, I'm always finding ways to use it. I know they just want to harvest everything I do - so for the sensitive stuff I have an encrypted (ecryptfs) partition inside Dropbox that I can mount on my Linux machines, and for wholesale archival storage of sensitive stuff I use PGP and stick it wherever. If Google Drive offered the ability to mount the drive under Linux like Dropbox does, I would probably "drop the box" altogether.
