Comment Re:Yawn (Score 3, Insightful) 157
I think what's most important is that we now have the mathematical models in place to simulate convincing sounds rather than "sample and include". For the creative types, this will save a ton of effort and money. It also has implications for games: e.g., given an environment model, you could produce convincing sounds in real time rather than taking sound samples and mixing them with reverb, attenuation, positioning, etc.
Yes, absolutely! I see it as analogous to vector graphics vs bitmapped graphics. Vector audio is THE holy grail of accurate sound reproduction.
If these guys can pull this off, it will be the (digital) equivalent of having your own live performance - every time! You will have software models of various instruments that play music for you by playing their respective instruments in real time. The possibilities are actually astounding. You would record or store music not as digital samples (lossy or lossless notwithstanding) but in terms of *how* each instrument is played.

You have now turned the problem on its head - you are constrained only by the accuracy of your software/mathematical model of each instrument, and by how well you can control it to add nuance. At the hardware level, assuming sufficient processing power, the challenge becomes accurately reproducing these software instruments. Here too you could take a completely different approach - for example, an array of speakers where each speaker is dedicated to playing a single instrument, with every speaker fed a separate audio signal.
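To make the "software instrument" idea concrete, here's a minimal sketch of one classic physical-modeling technique, Karplus-Strong string synthesis: the note is stored as *how the string is played* (pitch, decay), not as recorded samples. This is just an illustrative toy, not the technique from the article; the function name and parameters are my own.

```python
import numpy as np

def pluck(freq, duration=1.0, sr=44100, decay=0.996):
    """Karplus-Strong plucked string: a noise burst circulating in a
    delay line with a lowpass feedback loop. The delay length sets the
    pitch; the feedback damping sets how fast the 'string' rings out."""
    n = int(sr * duration)
    period = int(sr / freq)                  # delay-line length ~ one pitch period
    buf = np.random.uniform(-1, 1, period)   # initial "pluck" excitation
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % period]
        # averaging two neighbouring samples lowpasses the loop,
        # damping high partials faster - just like a real string
        buf[i % period] = decay * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

note = pluck(440.0)  # one second of a synthesized A4 "string"
```

The entire "recording" here is three numbers (frequency, duration, decay) plus the model - exactly the sample-vs-description trade-off described above.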
Contrast this with the current audio setup - a 2.0, 2.1, 5.1, or 7.1 stereo/HT setup - where each speaker tries (and fails) to accurately reproduce the entire audible frequency spectrum, or a mish-mash setup where different speakers divvy up the spectrum between themselves (think sub-woofer and satellite speakers) so they can do a marginally better half-assed job.
If you look at the entire chain in a traditional setup - the speaker driver's mechanicals, the crossover electronics, the speaker wire, the power amp, the pre-amp, the DAC, the player, the source audio signal (MP3, FLAC, Red Book CD, etc.), the recording mic, and the recording room - every link in that chain distorts the music in its own way.
What I described above is only my interpretation of how this technique could be used - there are a huge number of other possibilities. Software-defined objects, such as those in games, can now have their own (genuine) sound, one that differs depending on how you interact with them. You could also have virtual instruments, unconstrained by the laws of physics, that define their own physics and their own unique sound. You could even program room acoustics and have the instruments play as if they were in open space, a large hall, a studio, on a beach, etc.
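The "program room acoustics" part is already standard practice via convolution: a room is characterized by its impulse response, and convolving a dry signal with that IR "places" the sound in the room. A toy sketch, assuming a crude statistical model (exponentially decaying noise) for the room IR - real rooms would use measured or simulated responses:

```python
import numpy as np

def room_ir(rt60=0.5, sr=44100):
    """Toy room impulse response: exponentially decaying noise, the
    simplest statistical model of a room's late reverberation.
    rt60 is the time for the reverb to fall by 60 dB."""
    n = int(sr * rt60)
    t = np.arange(n) / sr
    envelope = 10 ** (-3 * t / rt60)  # -60 dB reached at t = rt60
    return np.random.randn(n) * envelope

def play_in_room(dry, ir):
    """Convolving a dry signal with a room IR adds that room's acoustics."""
    wet = np.convolve(dry, ir)
    return wet / np.max(np.abs(wet))  # normalize to avoid clipping

dry = np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)  # 0.1 s dry tone
hall = play_in_room(dry, room_ir(rt60=1.5))    # "large hall"
booth = play_in_room(dry, room_ir(rt60=0.2))   # "studio booth"
```

Swap the IR and the same dry performance sounds like a hall, a booth, or (with near-zero reverb) open space - which is exactly the flexibility sampled recordings lack.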
Sigh.