
Comment Re:OpenAL? (Score 1) 82

Every electronic space/reverb algorithm I've heard just adds distortion and makes the original sound worse.

Well, by definition any signal-dependent component a process adds to an original signal is distortion. :) I just don't think you've heard good ones. Also, part of doing good music mixing is using reverb in a way that people don't notice, or just accept as natural. There are also applications of reverb that don't sound like reverb.

I prefer to hear just the original instruments in as pristine quality as possible as if we were in an infinite volume room.

I assure you, no recording of music you love has been recorded or presented in this way. Anechoic musical recordings exist but they're sorta special and they only work for certain instruments (OK for strings, awful for percussion, death for winds and vocals).

Comment Re:i don't get it..... (Score 1) 82

Despite the name "Neural Upmix", it is designed to work with phase-encoded signals intentionally mixed using Neural Downmix.

They sell it as doing both: it's marketed as a spatializing upmixer that can also decode Neural Surround (a third format, not necessarily related to Neo:X). But that feature is sorta incidental, as literally nothing is mixed in Neural Surround.

I don't know what you mean by "DTS 11.1 is an actual format"... if you mean that it has 12 discrete channels, I believe you are mistaken on this point. As for "without changing the delivery chain": there's no new audio format, and discs play fine on older DTS decoders.

My understanding is that the height channels are encoded sum-and-difference with the main L-R channels, and a special decoder reads additional channel data to subtract the height channels back out of the mains. Auro-3D uses a similar method with its height 5.0 array in order to do the same thing: make a deliverable that can be turned into a bare 5.1 just by dropping the additional channels.
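
If that understanding is right, the arithmetic is as simple as it sounds. A toy sketch (my own illustration of the general idea, not DTS's or Auro's actual bitstream math):

```python
import numpy as np

def encode(main, height):
    """Fold a height channel into its main channel. Legacy decoders play
    the compatible channel as-is; the height rides in an extension stream
    that older gear simply ignores."""
    compat = main + height
    return compat, height

def decode(compat, height_ext):
    """A height-aware decoder subtracts the extension stream back out,
    recovering the original main and height channels exactly."""
    return compat - height_ext, height_ext
```

Drop the extension stream and you get a plain 5.1 (with the height content folded into the mains); keep it and you recover both layers losslessly.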

By "actual format" I mean it's a communications channel where the sender and recipient agree on what goes into the channel and what is supposed to come out.

Comment Re:i don't get it..... (Score 1) 82

Before the 5.1 and 7.1 digital standards, there was Dolby Surround that was encoded within a stereo soundtrack. A simple audio mixer could "upmix" from stereo to surround. DTS Neural Upmix can make a very clean 7.1 from a stereo signal, and it works from an analog signal (it's not something tricky inside a digital encoded format).

There's a fundamental difference between an encoded mix and an upmixer. Dolby Surround is intended to be decoded from 2 tracks into LCRS; the filmmakers mixed the film in Dolby Stereo while listening to the surrounds, so they know what's in them. The phase encoding is part of the channel spec.
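
For the curious, the core of a 4:2:4 matrix looks something like this (a simplified sketch of the idea; real Dolby Surround encoders also apply +/-90 degree phase shifts and band-limit and delay the surround, all omitted here):

```python
import numpy as np

G = 1 / np.sqrt(2)  # -3 dB for center and surround contributions

def matrix_encode(L, C, R, S):
    """Fold LCRS into a 2-track: center into both totals, surround into
    both totals in opposite polarity. This is the agreed channel spec."""
    Lt = L + G * C + G * S
    Rt = R + G * C - G * S
    return Lt, Rt

def passive_decode(Lt, Rt):
    """A passive decoder recovers center as the sum and surround as the
    difference. (Active decoders add steering to reduce the leakage.)"""
    return Lt, G * (Lt + Rt), Rt, G * (Lt - Rt)
```

Because both ends agree on the matrix, a center-only source comes back as a center-only source (modulo some leakage into L/R in a passive decoder) -- that's decoding, not guessing.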

An upmixer takes a stereo or 5.1 mix and applies effects to it to make it sound like it was mixed in a wider format, but there's nothing really being decoded; it's just synthesizing or guessing what should be in the additional channels using heuristics, all-pass filters, delays, crossover networks and other stuff that sounds cool or "provides a good experience" but, in fact, interferes with the filmmaker's intent.
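
The difference is easy to see in code. A deliberately crude upmixer in that spirit (illustrative only; real products are far more sophisticated, but the point stands -- the extra channel is synthesized, not decoded):

```python
import numpy as np

def naive_upmix(left, right, sr=48000, delay_ms=15.0, surround_gain=0.5):
    """Heuristic stereo-to-LRCS spatializer: center is the channel sum,
    surround is a delayed, attenuated difference signal. Nothing in the
    source material ever specified what belongs in these channels."""
    center = 0.5 * (left + right)
    diff = left - right
    d = int(sr * delay_ms / 1000)          # delay pushes the surround "back"
    surround = np.zeros_like(diff)
    if d < len(diff):
        surround[d:] = diff[: len(diff) - d]
    return left, right, center, surround_gain * surround
```

Whatever happens to be anti-phase between left and right ends up behind you, whether the mixer intended that or not.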

Neural Upmix is an upmixer; DTS Neo:X is an actual format that decodes to 11.1. Neo:X home receivers also employ upmixing, mainly because no films are mixed in 11.1 Neo:X -- it's a surround audiophile format, and it needs to do an upmix in order to justify people spending money on it.

Comment Re:So, not really stereo (Score 1) 82

Not really anything regarding stereo, but how to digitally recreate a 3D space and provide the resultant acoustic signature to stereo headphones?

We can do this without any fancy computers, traditionally someone would make a binaural recording with a dummy head.

So, you could digitally model Carnegie Hall, or a warehouse, or a coffee shop, and if you know the locations of your point sources of audio you can then create what the room would sound like based on a given listener location and orientation?

It's not generally possible to do this from procedural models, because it turns out a space like Carnegie Hall has a lot of variables, but we can do the equivalent of LIDARing the space for its audio character by capturing an impulse response and creating a convolution reverb of the space. There isn't a commercially-available IR of Carnegie that I'm aware of but recording the aural character of a space is a pretty routine thing nowadays.
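
Once you have the impulse response, applying the room's character is just convolution. A minimal sketch (wet/dry blend parameter is my own addition for illustration):

```python
import numpy as np

def convolution_reverb(dry, ir, wet=0.3):
    """Convolve a dry signal with a measured room impulse response via
    FFT (linear convolution through zero-padding), then blend wet/dry.
    The IR is the captured 'audio LIDAR' of the space."""
    n = len(dry) + len(ir) - 1
    wet_sig = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
    out = np.zeros(n)
    out[: len(dry)] = (1.0 - wet) * np.asarray(dry, dtype=float)
    out += wet * wet_sig
    return out
```

With a real hall's IR loaded from a file, this makes a close-miked recording sound like it was played in that hall, to the extent the space behaves linearly.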

Comment Re:OpenAL? (Score 1) 82

the ogg file format has supported multiple streams pretty much since inception. Couple this with a bit of positional tagging information and you're done.

Yeah, but this thing isn't just positional tagging, it's 3D soundscape stuff. So you have to have a way of communicating to the receiver the kind of space the audio stream is in -- the size of it, the general shape, how reflective the surfaces are, diffusion, the position of the space relative to the source, etc. and then you have to rigorously define the reverb algorithms that will be applied to the source taking these into account. You also have to define equalization (and perhaps other LTI) functions for distance, and diffraction around obstacles.
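
Just to sketch what such a scene descriptor might have to carry -- every field name here is hypothetical, not any real spec's schema:

```python
from dataclasses import dataclass, field

@dataclass
class RoomDescriptor:
    dimensions_m: tuple        # rough size/shape of the space
    rt60_s: float              # broadband reverb time
    surface_absorption: float  # 0 = fully reflective, 1 = fully absorptive
    diffusion: float           # 0..1, how scattered the reflections are

@dataclass
class SourceDescriptor:
    position_m: tuple                               # relative to room origin
    distance_rolloff_db_per_doubling: float = 6.0   # free-field default

@dataclass
class SceneDescriptor:
    room: RoomDescriptor
    sources: list = field(default_factory=list)
```

And that's before you've said a word about which reverb and rolloff algorithms the receiver must apply to these numbers -- which is the hard part.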

Then, if better reverb and EQ spatialization algos are developed, how do you push these out? How do you handle legacy content that used the old algos? Do they get auto-upgraded or do they play in the old ones?

And then there's the HRTF business: you have to define the HRTFs that will be used, and under what conditions.
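
Given a measured HRIR pair for a source's direction, the rendering step itself is just two convolutions -- the hard part is standardizing the HRTF database, not this:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render one mono source to two-ear output by convolving it with
    the head-related impulse responses measured for its direction.
    A full scene sums this over every source."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)
```

Which HRIRs to use (generic dummy head? personalized?) and when to switch them as the source moves is exactly what a format would have to pin down.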

And the positioning itself has subtleties you have to address. Will sound sources be positioned relative to a central listener in spherical coordinates, or will they be positioned relative to a reference space in rectangular ones? How will in-phase content be handled when mixed to one speaker?
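
Whichever convention wins, the spec has to nail it down exactly. A listener-relative spherical position converts to room-style rectangular coordinates like this (the axis and sign conventions here are my own assumptions, which is precisely the problem):

```python
import math

def spherical_to_cartesian(azimuth_deg, elevation_deg, distance_m):
    """Convert (azimuth, elevation, distance) to x/y/z. Assumed here:
    azimuth 0 = straight ahead, positive to the left; elevation 0 =
    ear level; y points forward, x left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.sin(az)   # left/right
    y = distance_m * math.cos(el) * math.cos(az)   # front/back
    z = distance_m * math.sin(el)                  # up/down
    return x, y, z
```

Two implementations that disagree on any of those sign conventions will put the helicopter on opposite sides of the room.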

Comment Re:i don't get it..... (Score 3, Interesting) 82

3d audio = surround sound (5.1/7.1/8.1/etc)

"5.1/7.1/8.1" doesn't have an elevation component. Certain IMAX formats did, as did some experimental 70mm formats in the 70s, but it hasn't really been widely available before Dolby ATMOS and Barco Auro.

The big difference from the traditional X.Y formats is that these regard individual screen channels as discrete: when films are mixed, sound sources are hard-assigned to certain speaker channels, and the speaker placement has to be matched in every venue. "3D" systems use procedural methods to assign sound sources a vector or coordinate with metadata, and a decoder at the receiving end does the job of assigning speakers, which may differ in placement and number from venue to venue.

Something mixed in 5.1 or 7.1 can be "downmixed" to stereo by summing channels together and applying pan and gain to position the multichannel sources in a stereo field. But a stereo signal can't really be "upmixed" to a 7.1, the position of individual sound sources is lost and can't really be extracted from the mix -- there are fancy ways of "spatializing" stereo mixes to 5.1 or 7.1 with fourier analysis and panning certain phase correlations or frequencies to different speakers, but there's really no way for a spatializer to split the celli from the violas and pan them separately, or the machine guns and the explosions.
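
The downmix direction really is that mechanical. A sketch using the usual -3 dB coefficients (ITU-R BS.775-style; the LFE is typically just dropped):

```python
import numpy as np

def downmix_51_to_stereo(channels):
    """Fold 5.1 to stereo. Assumed channel order: L, R, C, LFE, Ls, Rs.
    Center and surrounds are mixed in at -3 dB; LFE is discarded."""
    L, R, C, LFE, Ls, Rs = channels
    g = 1 / np.sqrt(2)
    left = L + g * C + g * Ls
    right = R + g * C + g * Rs
    return left, right
```

Note the asymmetry: this direction only sums and scales, which loses information. There's no inverse function that gets the six channels back.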

3D audio formats keep violas and celli on separate streams in the file, and then use position metadata to do the speaker mix in the receiver, so something mixed on stereo or 5.1 monitors could be re-rendered to a 7.1, or 11.1, or 64-channel setup, and you would actually get more fidelity.
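
A toy version of the receiver-side job (2D pairwise panning only; real object renderers like VBAP work with speaker triplets in 3D and handle far more edge cases):

```python
import math

def render_object(azimuth_deg, speakers_deg):
    """Spread one object across its two nearest speakers with
    constant-power gains. The same object metadata renders onto any
    speaker layout; only this function changes per venue."""
    def ang_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    order = sorted(range(len(speakers_deg)),
                   key=lambda i: ang_dist(azimuth_deg, speakers_deg[i]))
    i, j = order[0], order[1]
    d_i = ang_dist(azimuth_deg, speakers_deg[i])
    d_j = ang_dist(azimuth_deg, speakers_deg[j])
    span = (d_i + d_j) or 1.0
    t = d_i / span                      # 0 -> object sits on speaker i
    gains = [0.0] * len(speakers_deg)
    gains[i] = math.cos(t * math.pi / 2)
    gains[j] = math.sin(t * math.pi / 2)
    return gains
```

Feed it a 4-speaker ring or a 64-speaker array and the object lands in the same perceived direction either way -- that's the whole pitch of object-based delivery.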

Comment Re:Could be promising (Score 2) 82

This could be quite promising if incorporated into movies and video games.

There are already several platforms for object-based 3D audio in games, they already offer solutions for binaural and HRTF listening.

The AES has promulgated many standards with regard to file interchange and computer audio, but they're always several years behind, chasing proprietary vendor technology that's already established (see AES31, a timeline interchange format supported by no one; even open source projects avoid it like the plague). In the end, vendors have nothing to gain by adopting the AES standard.

On the videogame side there's OpenAL, X3D and a bunch of other platforms that build on these. Speaking as a film sound designer, 3D audio systems just don't offer the level of control I'd want: I don't want the user's cellphone applying my fucking reverbs and distance rolloffs for me, and neither do my clients. This is why there's Dolby ATMOS and the competing Barco-DTS standard, which will probably be FRAND and offer downmixing modes that should preserve the experience on headphones, and which don't leave things like equalization, panning, or reverb to the interpretation of the platform or the host.

Comment Re: Aren't these already compromised cards? (Score 1) 269

I see, so it IS okay for Apple to strong arm banks into doing things Apple's way, provided Apple's way meets your standard. Funny that.

I mean obviously this is a foul up and both the banks and Apple should work to fix it, they're BOTH responsible. The idea that banks are just helpless ninnies at the mercy of Apple, forced to conduct their business exactly as Apple demands, is dumbass.

Comment Re: Aren't these already compromised cards? (Score 1) 269

Geez, if Apple told you to jump off a cliff, you have to, right? I mean they have "such a large war chest."

At a certain point, surely, bankers bear the responsibility for keeping their customers' accounts secure -- it's the very basis of their profession.

And anyway, what exactly are they afraid of? Did they even ask to implement the necessary security features? Did they ask, and did Apple refuse? Has Apple threatened any sort of sanctions against banks that don't comply? It's all very amorphous, and again it seems to rely on the idea that bankers have minimal accountability or responsibility, and may respond to undefined, mysterious, and unsubstantiated "fears" without basis.
