The Successor to AC'97: Intel High Definition Audio

An anonymous reader writes "A few days back Intel announced the name for its next-generation audio specification, previously dubbed 'Azalia', which is due out by midyear under royalty-free license terms. The Intel High Definition Audio solution will have increased bandwidth that allows for 192 kHz, 32-bit, multi-channel audio and uses Dolby Pro Logic IIx technology, 'which delivers the most natural, seamless and immersing 7.1 surround listening experience from any native 2-channel source'. The architecture is designed on the same cost-sensitive principles as AC'97 and will allow for improved audio usage and stability."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward on Sunday January 18, 2004 @03:44PM (#8014693)
    Will it still suffer from the same background-noise effects from all the other voltages going through the motherboard, or have they found a way to block that out also? 32/192 is fine as a standard... but it is still onboard sound. It needs some separation from the motherboard to maintain a high S/N ratio
    • by UrGeek ( 577204 ) on Sunday January 18, 2004 @03:55PM (#8014775)
      Mmmm, what would really be nice is if the DACs were not on the sound chip but in a shielded housing of their own, with some nice connectors. And the sound chip would have that digital audio interface - I forget what it is called - if it even supports something as insane as 32 bits/192 kHz
    • If only there were some way to have a digital output from the computer, and do the D/A conversion in a dedicated box.

    • use a DAC out of your case

      just use digital out to a good A/V receiver
    • You're right... but keep in mind that most of the motherboards out there that give lousy onboard sound do so because of poor layout from the manufacturers... who give poor layouts because they want to save money and physical space on the motherboard, at the expense of analog components like sound...

      more bits and more kHz are useless for onboard until you clean up the analog paths to the jack, and properly isolate the codecs on the motherboards using ground moats. Nothing worse than a company that routes a
      • There is no physical possibility of having *good* onboard audio. Even with all the above construction techniques, it's damn near impossible to completely isolate the prodigious amounts of digital noise that a typical computer produces.

        A much better idea is to run a digital link to an outboard DAC that has its own power supply and is outside the computer. That would actually give you extremely high quality audio, assuming the DAC box is properly designed.
        • If they were really serious about noise, they could use RF construction techniques and put the analog components in a shielded can on the motherboard, with bypass capacitors on the power/ground connections. You can shield anything if you are willing to spend some money.
        • by ethanms ( 319039 ) on Sunday January 18, 2004 @07:50PM (#8016288)
          I'm guessing you don't work in the industry... it's not only possible, but it's been done on many designs...

          Codec construction is important; for example, two major suppliers from Taiwan, C-Media and Realtek, are both pretty much crap even on their high-end parts... they've traded features and low BOM cost for audio quality...

          Other codec suppliers, like Analog Devices & Sigmatel (or even Wolfson, Philips, etc.) have put audio quality as a priority over feature sets.

          Unfortunately if Realtek rolls out some new feature then the others need to follow or be left behind.

          Using ground layers properly, moats and keeping traces near the edge of the board... or even better, making sure you keep the codec as physically close to the jacks as possible, will yield very good results easily rivaling your average sound card.

          Let's also keep in mind that an AC'97 or "HD-Audio/Azalia" codec goes for between $0.50 and $1.25...

          Whereas a typical SoundBlaster will go from $50-200... they're able to use much higher grade support components, and since they are on a PCI card they're better able to isolate from the rest of the motherboard (which speaks to your point...)

          As for digital out...

          Many motherboard manufacturers are finding that the masses are demanding SPDIF (digital) output from onboard sound. It's been available for the past several years from AC'97 vendors, even on most of the low-end codecs, but adding the TOS (or even RCA) jacks costs too much in BOM and board real estate (surprise, surprise)...

          I think the next big requirement from users will be that SPDIF provide an AC3/DTS signal for all 4/6/8 channel audio. I'm surprised that this wasn't a requirement for Azalia, but we'll see what happens in the near future... After all, AC'97 is currently at version 2.3, there's room for change...

          Currently nearly all (even the $200 SB Audigy2) provide only PCM (2-ch) when playing non-DVD audio (when playing DVD they will all pass the AC3/DTS signal out, but they do not generate their own based on a multi-channel game or sound file).

          This is mainly due to the licensing fees from Dolby to encode AC3/DTS signals, and partly due to the processing overhead that would be required for implementation in soft-audio.

          The exception to this is boards equipped with the nVidia nForce2 audio; they build a DSP into the southbridge (ICH) that encodes AC3 out of any 4/6-ch source being played.
    • by j3110 ( 193209 ) <samterrell&gmail,com> on Sunday January 18, 2004 @06:36PM (#8015841) Homepage
      If they are going through that much work, I wouldn't be surprised if there were a separate card with the DAC that you put in a slot and run cables to. It's been done before, just not for this purpose.

      That said, I actually think 32-bit audio may be at least 8 bits overkill. I'm all for 192 kHz, because we can actually hear a difference in the resolution of the wave. 16-bit audio allowed for 64K levels that were smoothed between. Most audio is pretty smooth sounding, and I doubt you can hear any difference between 16 and 32 bit unless you crank the volume up to a level that could damage your hearing.

      Also, 32-bit DACs were practically impossible to buy last time I checked. A full 16-bit DAC is relatively expensive, and building a proper DAC gets exponentially more complicated with each bit. I'm expecting a lot of shortcuts. A 32-bit ADC for recording is prohibitively expensive, so I guarantee you won't be doing any 32-bit recording any time soon on a PC.

      Basically, the 32-bit idea is dead in the water. The machine will be long gone before any audio is distributed that takes advantage of it. You probably can't use it for mixing because you probably won't be able to record at 32 bits. It's also going to be more expensive in components. Speakers aren't going to be accurate to 32 bits of resolution. They may shoot for 24-bit, because you can get an OK DAC and ADC for working with 24 bits, but it'll still cost.

      The 192 kHz thing is awesome. Right now, you can get 48 kHz out of some consumer cards, but 192 would be excellent. Maybe we'll get digital audio up to professional quality some day. Right now if you go get a recording from a studio, you get tape (unless you can't afford it). All professional audio equipment is not only analog end-to-end, it's also usually tube based. The average transistor is pure sewage, and even MOSFETs are lacking. There's gotta be a lot more R&D into just transistors before we have professional-grade audio going anywhere near digital. This is still going to be helpful to the end user who likes music, but we are still a long way off from having no audible differences. Amazingly enough, I think speaker technology has advanced more over the last decade than digital audio.
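
As a back-of-the-envelope check on the bit-depth argument above, here is a minimal Python sketch using the standard 20*log10(2) ~= 6.02 dB-per-bit figure for linear PCM (these are theoretical ceilings, not measured hardware performance):

import math

# Each bit of linear PCM adds about 20*log10(2) ~= 6.02 dB of theoretical
# dynamic range; human hearing spans roughly 120 dB from threshold to pain.
def pcm_stats(bits):
    levels = 2 ** bits
    dynamic_range_db = 20 * math.log10(2) * bits
    return levels, dynamic_range_db

for bits in (16, 24, 32):
    levels, dr = pcm_stats(bits)
    print(f"{bits:2d}-bit: {levels:>13,} levels, ~{dr:.0f} dB theoretical dynamic range")

# 16-bit: ~96 dB, 24-bit: ~144 dB, 32-bit: ~193 dB -- the last two already
# exceed both human hearing and any real-world analog front end.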
    • It is still onboard sound

      Not necessarily. The specification can be used for PCI cards as well, and in fact AC97 is used on some lower-end audio cards. It's more of a specification for minimum supported features and other specs.

      The fact that it is on-board in itself doesn't mean it is bad. It's all in the implementation. With proper design techniques (ground-loop isolation, etc) you can get quite a good S/N ratio. It doesn't need "separation from the motherboard", rather, it needs a buffered power bus, se
  • OSS drivers? (Score:5, Interesting)

    by cyb97 ( 520582 ) * <cyb97@noxtension.com> on Sunday January 18, 2004 @03:44PM (#8014695) Homepage Journal
    Does the royalty free license also imply that we'll see good opensource drivers for a plethora of platforms?
    • Re:OSS drivers? (Score:5, Informative)

      by dreamchaser ( 49529 ) on Sunday January 18, 2004 @03:46PM (#8014712) Homepage Journal
      Not necessarily. It's still up to the hardware manufacturers to implement it on their hardware, and then either provide drivers for said hardware or publish their specs as well.
      • by bstadil ( 7110 ) on Sunday January 18, 2004 @03:55PM (#8014778) Homepage
        Any idea what it would take to use this as an opportunity to establish a sort of "Azalia Certified for Linux" logo and a set of requirements that goes with it?

        A logo that you could stick on the box and that "journalists" et al. could include in the normal fluffy buzzword-compliance reviews.

    • Re:OSS drivers? (Score:5, Insightful)

      by Clockwurk ( 577966 ) on Sunday January 18, 2004 @03:48PM (#8014731) Homepage
      It depends on how nice Intel is feeling. Royalty free doesn't mean that Intel doesn't control it. Royalty free only implies free as in beer, not free as in speech.
      • Re:OSS drivers? (Score:5, Insightful)

        by ctr2sprt ( 574731 ) on Sunday January 18, 2004 @04:56PM (#8015177)
        Hardware is a fundamentally different beast than software. Software can be copied and modified easily once the initial version has been created. Hardware, on the other hand, continues to bear an associated cost per-copy even after the initial development is finished. Because of the nature of the medium, after-the-fact modifications are extraordinarily difficult. So it's not really valid to compare hardware licensing to software licensing, at least not using the oversimplified "free as in beer/speech" simile.

        In any event, if Intel are letting groups take their spec and implement it in hardware that's meant to be sold for profit... It doesn't get much freer than that. "Free as in speech" doesn't mean you have to give away the farm. You're allowed to keep certain rights for yourself, and make certain restrictions on use, just like open source software does. (And just like there are for free speech, in fact.)

    • Re: (Score:3, Interesting)

      Comment removed based on user account deletion
  • Initial reaction (Score:5, Insightful)

    by Firehawke ( 50498 ) on Sunday January 18, 2004 @03:46PM (#8014710) Journal
    The very first thing I thought when I saw the article itself was, "Please don't let this be as bad as AC'97."

    Don't get me wrong, AC97 is cheap, but it really dragged on the CPUs of the timeframe it came out in. This one looks like it might be a shot at the Creative Labs end of the market, but with cheaper components (meaning most likely CPU-based).

    I'm sure it'll be on pretty much every board before too long-- well, the non-nForce ones, anyway.
    • Agreed, AC97 is a POS. On every computer I've seen it used on, the driver implementation and quality are pure shit. Just spend the $50 and get an Audigy card.
      • Could someone please explain exactly what is wrong with AC97? How could the quality be affected if I'm using the SPDIF out? (And why would you complain about quality if you're not?)
        • Re:Initial reaction (Score:2, Informative)

          by Anonymous Coward
          Because AC97 resamples everything internally to 48kHz, even 48kHz streams, it auto-mangles everything you put through it. If that weren't enough, the Windows sound system (many are afflicted by such voodoo) resamples *everything* through its mixer, further mangling the sound before it even reaches AC97.

          Unfortunately SPDIF is not bit-perfect by any means; you need ASIO for that. An easy way to tell is to play a Dolby Digital or DTS .wav through a board; if it arrives at the AV receiver unaffected then the computer isn'
    • Re:Initial reaction (Score:3, Interesting)

      by dnoyeb ( 547705 )
      Yeah, integratedness has fallen out of favor with me. At least for those things that are human-detectable, such as audio and video.

      Integrated sound thus far has been a bad failure. It works well if nothing else is taxing the CPU, but otherwise, it can stutter. My nforce stutters when the network is active so no playing mp3s located on my Linux share...
      • by Anonymous Coward
        Well, that's the fault of your cheap'n'nasty Nforce chipset, not integrated sound per se.
        I've built any number of PCs (all Intel-based) over the last 3 years or so with AC'97 onboard audio, and have never noticed the audio "stutter" under any kind of load.
        Sorry, but that's the truth. Don't blame AC'97 just because your particular implementation of it is sucky..
        • Agreed. I've installed plenty of broadcast machines using embedded audio (usually ACL650) and never had a fault report about skipping audio. For office and most home use - all those systems with $20 Zoltrix speakers - it's fine.
    • Agreed, but I thought of their video cards.

      Like the AC'97, the vid cards were "functional", but just barely. Heck, even compared to the old Rage 128, it was shameful, IIRC.

      Though I'd rather have the ATI Rage in a server, and no sound at all, than the Intel video cards.

      Don't get me wrong, integrated sound/vid/net/whatever is ok, but I agree, it has to be at least of some quality, resource friendly, and stable (like the rage vid cards).

      Oh, and just in case it is not the case, being able to disable it is always a m
      • Having built an Nforce based PC (and supported it) I can safely say that nforce pcs are a joy to work with.

        You get good drivers and you only need to install one driver (that covers network, sound, chipset, and graphics). The audio is pretty good quality, and the integrated graphics aren't bad.

        I would definitely go with an nForce (for an AMD platform) even if I didn't use any of the integrated components. Nvidia makes excellent chipsets and I don't have to deal with VIA.
    • by Weaselmancer ( 533834 ) on Sunday January 18, 2004 @05:48PM (#8015509)

      Don't get me wrong, AC97 is cheap, but it really dragged on the CPUs of the timeframe it came out.

      Well, that's not really AC97's fault.

      AC97 is really nothing more than a 5 wire signal specification. It has more to do with voltages and waveforms on wires. And a register set in the codec that the wires are talking to.

      But that's the idea of AC97 - you don't need to know who made the codec, only that it's AC97. Then it's a drop in replacement, pretty much.

      But controllers - everybody and their brother has a different idea how to talk to an AC97 codec. And it's the controller that determines the performance. Are you bit banging your codec? Then performance will suck. Are you using interrupts? Performance will improve. Using DMA? Performance will improve again. Does your DMA engine suck? Performance will drop.

      If you're having a drag on your cpu due to audio, it isn't AC97 that's at fault. It's someone's lousy idea for a controller. AC97 is a spec, not a gadget.

      Weaselmancer

  • by ten000hzlegend ( 742909 ) <ten000hzlegend@hotmail.com> on Sunday January 18, 2004 @03:47PM (#8014722) Journal
    True progress from Intel, strange but true

    This new system for audio management is great news for portable devices such as DVD+screen combos, next-gen PDA devices and even handheld game systems (Gameboy Advance II or PSP?)

    I've long been following PC-related audio solutions, all the way from Sonarc to the latest 5- and 6-channel set-ups. My normal set-up is a bass speaker, left / right, and one for routing system alerts etc... This kind of announcement, coupled with the latest cards supporting the new Dolby processing solutions, could well make me upgrade

    More to post...
  • by IGnatius T Foobar ( 4328 ) on Sunday January 18, 2004 @03:48PM (#8014725) Homepage Journal
    On its face this is a great announcement, but we must have all the usual concerns. Will it work in Linux? Are the hardware APIs going to be published, so someone can write Linux drivers? Or is this going to be the next Centrino, needlessly obfuscated to give Intel's friends in Redmond yet another unfair advantage?

    I'm also concerned that a new audio hardware API may introduce way too many opportunities for things like Digital Restrictions Management. Long term, doing that is of course futile because someone will find a way around it, but that doesn't stop some hardware makers from setting out the legal minefield anyway.

    It's a sad state of affairs when politics and litigation are at the forefront of geeks' minds when technology ought to be.
  • by UrGeek ( 577204 ) on Sunday January 18, 2004 @03:48PM (#8014729)
    32-bit audio at 192 kHz? Why not just stick with 24-bit at 96 kHz - it is good enough for most studios. And actually 16-bit at 44.1 kHz is the most that these old ears are gonna hear anyway - if even that well, after sitting up front for Jimi Hendrix.
    • by ten000hzlegend ( 742909 ) <ten000hzlegend@hotmail.com> on Sunday January 18, 2004 @03:53PM (#8014759) Journal
      With modern audio requirements, getting as close as possible to the fidelity of the original is the "flavour of the month"

      Last year, Pink Floyd released Dark Side on SACD, 24-bit audio at 48 kHz / 96 kHz. The amount of clarity over a CD, once the benchmark, was remarkable. I attended a launch party and was blown away even in a relatively acoustically poor setting

      I for one welcome consumer 32-bit audio
      • Last year, Pink Floyd released Dark Side on SACD, 24-bit audio at 48 kHz / 96 kHz. The amount of clarity over a CD, once the benchmark, was remarkable. I attended a launch party and was blown away even in a relatively acoustically poor setting

        How much of that clarity was due to the excellent sound engineers they probably hired? How much was due to the stage setup, and the excellent speakers and amplifiers they probably had? How did you compare the clarity over a CD? If they offered a comparison, how do you know
          • True, we handed Gary Wright, who was announcing the various specifications of SACD at the time of play, a 1984 Dark Side CD, a 1993 20th anniversary CD and finally a copy of Echoes, which had the latest digital master before the 30th anniversary re-master

          Clean, no scratches and if I recall, the Japan import 1984 cd was worth a mint

          Anyhow... we played each one and concluded that the 2-channel 30th anniversary remaster was far superior, even on a great system, and the surround mix was simply amazing
          • by JebusIsLord ( 566856 ) on Sunday January 18, 2004 @04:34PM (#8015029)
            In double-blind tests, people have been unable to tell the difference between the SACD layer of the new release and the 1992 CD remaster. The CD layer on the 30th anniversary version is needlessly overcompressed, probably just to make it sound different from the SACD layer. Try it double-blind; you'd be surprised at how much placebo comes into effect.
      • by bcrowell ( 177657 ) on Sunday January 18, 2004 @04:16PM (#8014921) Homepage
        Last year, Pink Floyd released Dark Side on SACD, 24-bit audio at 48 kHz / 96 kHz. The amount of clarity over a CD, once the benchmark, was remarkable. I attended a launch party and was blown away even in a relatively acoustically poor setting
        I think you're deluding yourself. Audiophiles make a lot of claims that they can hear certain things, but they never test their own claims using double-blind studies in which the other variables are all controlled for.

        I teach a physics lab class, and in one of the labs, I have students test their own hearing, to see the highest and lowest frequencies they can hear. There's some individual variation, but basically the top end of everyone's range comes out to be no less than 10 kHz, and no more than 20 kHz. I have never had a single student who could hear frequencies above 20 kHz.

        The 44 kHz (IIRC) sampling frequency of a CD means that you can actually record signals with frequencies as high as 22 kHz (half the sampling frequency -- that's a mathematical theorem about the discrete Fourier transform). The reason they designed CD audio around that figure was exactly because of the limits of human hearing.

        Even if there was a hypothetical human who could hear 30 kHz, there would be many other things preventing it from being useful musically. For instance, your tweeters most likely can't respond well to those frequencies. Furthermore, the music might sound worse to such a person if the 30 kHz stuff was left in. The musician couldn't hear it, and therefore couldn't adjust his tone to make it sound good. The audio engineer also couldn't hear it, and therefore couldn't judge whether it sounded good or not.

        Another practical issue is that distortion will always introduce high-frequency harmonics, so that even if you could hear those frequencies, a lot of what you were hearing would probably be spurious stuff coming from distortion.

        People who really want to hear good stereo sound should spend their effort on the two things that will make a lot of difference: (1) getting good speakers, and (2) working on the acoustics of the room, the placement of the speakers in the room, and the placement of their own head in the room. Note that all the stuff under #2 is free or cheap. The audio industry would rather have you waste your money on stuff that's expensive, which is why they promote expensive, superstitious ways of improving sound, such as gold monster cable.

        • "The 44 kHz (IIRC) sampling frequency of a CD means that you can actually record signals with frequencies as high has 22 kHz (half the sampling frequency -- that's a methematical theorem about the discrete Fourier transform). The reason they designed CD audio around that figure was exactly because of the limits of human hearing."

          You are referring to the Nyquist criterion, which states that in order to guarantee you are not losing analog signal information you must sample your source at twice the frequen
        • by theLOUDroom ( 556455 ) on Sunday January 18, 2004 @05:16PM (#8015314)
          The 44 kHz (IIRC) sampling frequency of a CD means that you can actually record signals with frequencies as high as 22 kHz (half the sampling frequency -- that's a mathematical theorem about the discrete Fourier transform).

          Yep, you're definitely a physics teacher, not an EE.

          A 44 kHz sampling rate only lets you record frequencies up to 22 kHz if you have a PERFECT D/A converter and a PERFECT filter. It is provably impossible to implement a perfect filter (one with a perfect cutoff and a perfectly flat passband). Sampling at 44 kHz allows someone to design a decent recording setup with components that actually exist. Sampling at 96 kHz gives the engineer even more breathing room when designing the filter in front of the A/D converter. Instead of going from H(jw)=1 to H(jw)=0 in the space of 2 kHz, he can now do it in 20. This means he can use a filter design with a flatter passband. This means there is less distortion of all those frequencies that you can actually hear.

          Even if there was a hypothetical human who could hear 30 kHz, there would be many other things preventing it from being useful musically. For instance, your tweeters most likely can't respond well to those frequencies. Furthermore, the music might sound worse to such a person if the 30 kHz stuff was left in.

          Actually, it's much easier to build a tweeter that can handle 30 kHz than it is to build a subwoofer that can handle 20 Hz. There are plenty of tweeters on the market right now which claim to work at 30 kHz.
          Second, your statement about the 30KHz stuff making the music sound worse doesn't make any sense. The goal of an audiophile-quality setup is to reproduce the original audio exactly. We're not talking about adding in some strange 30KHz waveform, we're talking about preserving the signals that were there in the first place.

          People who really want to hear good stereo sound should spend their effort on the two things that will make a lot of difference: (1) getting good speakers, and (2) working on the acoustics of the room, the placement of the speakers in the room, and the placement of their own head in the room. Note that all the stuff under #2 is free or cheap.

          Actually, they should buy a good pair of headphones. For $300 they can buy a pair of headphones that would be tough to beat with speakers at 10X the price.
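
The filter-headroom argument above can be made concrete with a rough Python sketch. It uses the standard Kaiser-window estimate for FIR filter length; the 96 dB stopband target is just an illustrative choice, and the anti-alias filter in front of a real ADC is analog rather than FIR, but the transition-band trend is the same:

import math

# Kaiser-window estimate of FIR filter length for a given stopband
# attenuation and transition width: N ~= (A - 7.95) / (2.285 * d_omega),
# where d_omega = 2*pi*(f_stop - f_pass)/fs in radians per sample.
def kaiser_taps(atten_db, f_pass, f_stop, fs):
    d_omega = 2 * math.pi * (f_stop - f_pass) / fs
    return math.ceil((atten_db - 7.95) / (2.285 * d_omega))

atten = 96  # aim the stopband roughly at the 16-bit noise floor
print("fs = 44.1 kHz (pass 20k, stop 22.05k):", kaiser_taps(atten, 20_000, 22_050, 44_100), "taps")
print("fs = 96 kHz   (pass 20k, stop 48k):   ", kaiser_taps(atten, 20_000, 48_000, 96_000), "taps")
print("fs = 192 kHz  (pass 20k, stop 96k):   ", kaiser_taps(atten, 20_000, 96_000, 192_000), "taps")

# Widening the transition band from ~2 kHz to ~28 kHz (or ~76 kHz) shrinks
# the required filter dramatically, which is the "breathing room" described
# in the comment above.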
        • I am not an audiophile but I will note these things:

          The Nyquist theorem is an absolute best case, and assumes that you sampled at the peaks.

          Even with four samples per wavelength you can get pretty weird-looking sample data. IIRC, EEs try to get at least eight samples per shortest wavelength to get a decent waveform representation; with less than that you can get some noticeable frequency and phase-shift errors. On CD audio, that makes it a little over 5 kHz.
        • Wrong wrong wrong... You're assuming the POINT of sampling at higher frequencies is to get a larger frequency response -- it's not. It's to REDUCE QUANTIZATION ERRORS and NOISE, and increase DYNAMIC RANGE (the real measure of a sound card).

          Quantization errors occur in the less significant bits; a high-quality ADC will have an uncertainty of about + or - 4 bits. Think of a 10kHz signal on the edge of human hearing, like a nice china boy cymbal -- a cycle of a 10kHz audio signal will be represented by about 4.41 samples :) I know the Nyquist limit/Shannon's theorem says that's enough, but out here in the real world where there's noise and quantization errors it's not enough, which leads me to my next point: **the Nyquist limit is valid only for situations where there is no noise** -- in other words: THERE IS NO SITUATION FOR WHICH THE NYQUIST LIMIT IS VALID. The Nyquist limit is, at best, a guideline.

          So now, the reason you need higher resolution/bigger samples is that it lowers the noise floor. + or - 4 bits in a 24-bit recording is a lot less significant than + or - 4 bits in a 16-bit recording. Also, imagine: at 192kHz your 10kHz signal is now represented by 19.2 samples -- error and noise are MUCH less destructive with more samples.

          I deal with these issues every day in my studio, and the rule with audio is pretty much always: more is better. However, there is a point of diminishing returns -- and IMHO that point is 24-bit/96kHz. It is very difficult to distinguish a 96kHz signal from a 192kHz signal.
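
A crude numerical sketch of the noise-floor point (Python/NumPy). It quantizes an undithered full-scale sine and measures how much of the quantization noise lands below 20 kHz; real converters dither and noise-shape, so the exact figures are only illustrative:

import numpy as np

# Quantize a full-scale sine, then measure the signal-to-noise ratio counting
# only the noise that falls in the audible band (0-20 kHz). Oversampling
# spreads the same total quantization noise over a wider bandwidth, so less
# of it lands where you can hear it.
def audible_band_snr(bits, fs, f0=997.0, seconds=1.0, band=20_000.0):
    n = int(fs * seconds)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f0 * t)
    step = 2.0 / (2 ** bits)                  # quantizer step for a [-1, 1] range
    noise = np.round(x / step) * step - x     # quantization error (no dither)
    spectrum = np.abs(np.fft.rfft(noise)) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    inband = spectrum[freqs <= band].sum() / spectrum.sum() * np.mean(noise ** 2)
    return 10 * np.log10(np.mean(x ** 2) / inband)

for bits in (16, 24):
    for fs in (44_100, 96_000, 192_000):
        print(f"{bits}-bit @ {fs/1000:g} kHz: ~{audible_band_snr(bits, fs):.0f} dB SNR below 20 kHz")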

      • The problem is that at 24 bits per channel, it is impossible to fully realize that sort of dynamic range with physical objects.

        The extra eight bits to get to 32 bits is simply a waste. The best use I can think of is steganography, where you can hide data in the least significant byte and few would catch on unless the data was carefully analyzed.
      • Well, considering that you describe the acoustic setting as "relatively poor", I suspect the difference between SACD and CD would be drowned out by the background noise.
    • exactly. that sampling rate is simply overkill. take a look at an application [wolfram.com] of the nyquist sampling theorem. human hearing maxes out around 20kHz. 44.1kHz is plenty (and with some breathing room) to sample stuff that humans can hear.

      now, the increased resolution offered by 24 bits of accuracy per sample could help. but increasing the sampling rate beyond 44.1kHz does nothing: "No information is lost if a signal is sampled at the Nyquist frequency, and no additional information is gained by sampling
    • The theory for high sample rates (AIUI) is that they allow much gentler filtering, giving less distortion in the audible range.

      Standard CDs are sampled at 44.1kHz, so the highest frequency they could possibly store is a sound at 22.05kHz. However, this doesn't mean that they will reproduce anything less than that with perfect accuracy. Firstly, the sound needs to be filtered to prevent anything over 22.05kHz hitting the converters (as it would cause very nasty artefacts); this filtering has a lower cut-o

  • by Anonymous Coward
    So, I think I'll wait for 42.1 with 0Hz to 1GHz (+/- 0.0000001%) bandwidth and 256-bit samples audio hardware, which shouldn't be too far away :o)

  • by xankar ( 710025 ) on Sunday January 18, 2004 @03:53PM (#8014760) Journal
    Hear hear!

    Pun completely and totally intended.
  • by Bubba ( 11258 )
    At least they are changing an old standard that has had mixed issues for several years. New input on old (possibly failed in some aspects) standards is always good for sales.
  • That's great! .. (Score:4, Interesting)

    by ShadeARG ( 306487 ) on Sunday January 18, 2004 @03:55PM (#8014779)
    .. but when will we see high definition video support with component and dvi i/o?
    • Even a $50 video card has DVI these days and quite a few cards have component adaptors. Sometimes it takes a bit of fiddling with Powerstrip to convince the card to output weird resolutions, but it's not impossible.
  • The Intel High Definition Audio solution will have increased bandwidth that allows for 192 kHz,

    192 kilo-Hertz? that's more longwave radio than audio. Hell, it's like 5 times the frequency of ultrasounds. Who are they kidding? This smells of marketing bull, or deceptive commercial practices targeted at trendy audio posers ...
    • I gather that with 48kHz you get icky problematic sounds if you forget to filter out high frequencies - they alias all the way back down into the audible domain - 192kHz ensures that these artifacts will be well out of the range of hearing and beyond the ability of most equipment to reproduce them.
    • Well, actually, 192KHz is the sampling rate.
      Even if frequencies that high cannot be heard, using such a sampling rate will decrease the noise added by analog->digital conversion.
    • Re:That's audio ? (Score:3, Informative)

      by admbws ( 600017 )
      192kHz refers to the sample rate - how many times per second the sound is sampled - not how many cycles per second. While theoretically a 192kHz sample rate does allow frequencies higher than the ear can hear to be recorded, its real purpose is to make the lower frequencies more accurate - for example, a 22050Hz sine tone (if you can hear that high!) sampled at 44100Hz is only sampled twice per cycle, and would effectively be recorded as a square wave (although, admittedly, at that frequency you'd need to be a dog to tell the difference!)
      • Re:That's audio ? (Score:5, Informative)

        by Anonymous Coward on Sunday January 18, 2004 @04:31PM (#8015009)

        for example, a 22050hz sine tone (if you can hear that high!) sampled at 44100hz is only sampled twice per cycle, and would effectively be recorded as a square wave (although, admittedly at that frequency you'd need to be a dog to tell the difference!)


        This is completely and utterly wrong. I hear this very often though.

        At 44100Hz sampling, a 22050Hz signal will be reconstructed as a 22050Hz SINE WAVE. The reconstruction of sampled signals is not as simple as you think it is. This is covered in any elementary DSP book.

        With IDEAL equipment, sampling at frequency N allows perfect reconstruction of all frequencies below N/2 in all cases. The "below" rather than "at or below" comes about because of the potential of sampling the frequency N/2 exactly at its zero crossings. However, if only two nonzero points of the N/2 component are sampled, it can be reconstructed perfectly.

        Using a higher sampling rate has more to do with counteracting clock jitter and the error introduced by non-ideal equipment.
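
For anyone who wants to check the "reconstructed as a sine wave" claim numerically, here is a small Python/NumPy sketch using plain Whittaker-Shannon (sinc) interpolation on a 20 kHz tone, safely below the N/2 edge case discussed above:

import numpy as np

# Sample a 20 kHz tone at 44.1 kHz (~2.2 samples per cycle), then reconstruct
# it on a much finer time grid via sinc interpolation:
#   x(t) = sum_n x[n] * sinc(fs*t - n)
fs, f0 = 44_100.0, 20_000.0
n = np.arange(1024)
samples = np.sin(2 * np.pi * f0 * n / fs)

t = np.linspace(480 / fs, 544 / fs, 2001)          # fine grid, middle of the block
recon = (samples * np.sinc(fs * t[:, None] - n)).sum(axis=1)
truth = np.sin(2 * np.pi * f0 * t)

# The reconstruction is a smooth sine, not a staircase or square wave; the
# small residual comes from truncating the (formally infinite) sinc sum.
print("max |reconstruction - true sine|:", float(np.max(np.abs(recon - truth))))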
    • Re:That's audio ? (Score:2, Informative)

      by bbbl67 ( 590473 )
      I don't really think they mean 192 kiloHertz but 192 kilobits per second. There is a difference in the case of lossy-compressed audio. The higher the bps, the less lossy the quality of the audio is. And this bitrate also includes all of the channels together, not just one channel.
      • i don't think so. if they mean 192kbps, then this is a huge step _down_ from 48kHz, 16 bit audio. while something on the order of a 192kbit mp3 is fine by my tastes, it is a huge reduction in quality from, say, a 44.1kHz, 16bit pcm stream from a CD. you just cannot (with current algorithms) losslessly compress an audio bitstream down to 192kbps without losing a good measure of quality. for reference, a pcm CD audio stream runs you around 700kbps. now, things like FLAC [sourceforge.net] can drop this number a bit while r
      • completely wrong.
    • Re:That's audio ? (Score:3, Interesting)

      by DarrylM ( 170047 )
      192 kilo-Hertz? that's more longwave radio than audio. Hell, it's like 5 times the frequency of ultrasounds.

      Yeah, that is pretty high, but it will allow for a flatter frequency response in the human hearing range than what is possible with 44.1kHz or 96kHz. The reason is that the sampling process has a frequency response of a sinc function: sin(x) / x. At a sampling rate of 44.1kHz, the amplitude response of the sample at the high end of the human hearing range will be a fair bit lower than at the low
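
A quick worked version of that sin(x)/x droop (Python). This is the uncorrected zero-order-hold response usually quoted for a DAC output stage; many real DACs compensate for it digitally:

import math

# Zero-order-hold amplitude response: |H(f)| = sin(pi*f/fs) / (pi*f/fs).
def zoh_droop_db(f, fs):
    x = math.pi * f / fs
    return 20 * math.log10(math.sin(x) / x)

for fs in (44_100, 96_000, 192_000):
    print(f"fs = {fs/1000:g} kHz: {zoh_droop_db(20_000, fs):+.2f} dB at 20 kHz")

# Roughly -3.2 dB at 44.1 kHz, -0.6 dB at 96 kHz, -0.2 dB at 192 kHz.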
  • by SpookyFish ( 195418 ) on Sunday January 18, 2004 @04:03PM (#8014843)

    This sounds like it could be more smoke and mirrors, though there really isn't enough information to be sure.

    ProLogic IIx will "synthesize" multiple channels from a stereo or 5.1 source. I sincerely hope Intel isn't thinking "we can do the same old thing (stereo) and marketing folks can call it 7.1 multichannel because we put this Dolby fake surround processing in the chip!"

    Despite how much ProLogic has advanced, it still doesn't hold a candle to true, *discrete* 6+ channel sound (like DD/AC3 or DTS).
  • DSD Support? (Score:2, Interesting)

    by babymac ( 312364 )
    When will we see support for the DSD audio format in computer hardware? I have yet to hear this technology for myself, but friends who have heard it say it's incredible. Like analog, only better. The one bit tech behind it is very compelling...
  • I play all my music from WAVs on my HD, but I don't sacrifice quality for money. The highest-quality DAE from CD to HD (using CDParanoia) gives the same quality as thousands of dollars worth of separate CD transport and data equipment. Then I (losslessly) compress them with Shorten (2:1) to save some money on storage. I often bypass my Onkyo amplifier and KLH speakers to listen with my Sennheiser 600 headphones - all hi-end audio gear. But the bottleneck is the soundcard. Soundblaster Audigy 2 seems really
    • You have *all playthrough inputs on your sound card muted* and you're still getting audible noise?

      I have an emu10k1-based card. Turns out that the OSS/Free drivers (one of the four free drivers available for this card -- there's the OSS/Free driver, the native kernel driver, Creative's driver (which may be an adaptation of the native kernel driver, not sure), and ALSA)... I started using OSS/Free, since Red Hat had defaulted to using OSS/Free with my old card. I kept getting noise -- sort of a buzzing --
  • memory requirements (Score:4, Interesting)

    by Saville ( 734690 ) on Sunday January 18, 2004 @04:28PM (#8014995)
    Since you can fit ~80 minutes of music on a ~700 MB CD, you have ~146 KB/sec for your music. That is for 16-bit, 44.1 kHz, stereo songs. Now the audio data will take 8.7 times as much memory if recorded in stereo, but if recorded with eight (7.1) channels each song will take almost 35x as much memory, thanks to the higher sampling rate and the use of 32-bit values instead of 16-bit. That is 5.08 MB/sec for your audio.

    I like that this standard is very future-proof, but when can we use it? Already CD sound is good enough for all but maybe 10,000 people on the planet. Most people's audio experience is probably limited by their audio hardware, not the source sound. Hey, most people are quite happy encoding their mp3s at 128k!

    Where will the high quality sound data come from? Audio CDs are still going to be 16bit, stereo, 44KHz. DVDs have compressed audio. Almost all video games use compressed audio of some sort too because we don't have enough memory yet for even CD quality sound.

    I love that it is 7.1 and that it is very future proof, but other than making 7.1 standard it seems to be a standard for marketing to use as an advantage, not something consumers will ever use (by the time they can use it they'll have upgraded anyway). It seems that this beyond CD quality audio is just included because they can and we'll never see it in use this decade :)

    Better to overbuild than underbuild I guess. But I'm not excited about this promise of higher quality audio.
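
The data-rate arithmetic in the comment above checks out; here is a small Python sketch redoing it from the raw stream parameters (the ~146 KB/sec figure above comes from dividing 700 MB by 80 minutes rather than from the exact 176.4 KB/s of raw CD audio, which is why it lands at 5.08 MB/s instead of the ~5.9 MB/s here):

def bytes_per_second(sample_rate, bits, channels):
    return sample_rate * (bits // 8) * channels

cd  = bytes_per_second(44_100, 16, 2)    # Red Book CD audio
hda = bytes_per_second(192_000, 32, 8)   # 32-bit / 192 kHz / 7.1

print(f"CD audio:          {cd:>9,} B/s  (~{cd / 1024:.0f} KB/s)")
print(f"32-bit/192kHz 7.1: {hda:>9,} B/s  (~{hda / 1024 / 1024:.2f} MB/s)")
print(f"ratio: {hda / cd:.1f}x")   # ~34.8x, matching the "almost 35x" above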
  • 7.1? (Score:3, Interesting)

    by Cyno01 ( 573917 ) <Cyno01@hotmail.com> on Sunday January 18, 2004 @04:33PM (#8015020) Homepage
    I had this discussion the other day with some friends. None of us are audiophiles, but we all have decent setups. I have 4-channel surround for my entertainment center and 4.1 for the stereo in my bedroom, but we all understand that the 5th is a front center, and we all assume, but none of us know, that 6.1 has a rear center channel. But none of us could figure out the arrangement of 7.1 surround. Is there an overhead speaker, or no front center speaker and 4 evenly spaced in front? Can anyone shed some light on this?
    • Well, the picture [pocket-lint.co.uk] I found looks something like a modified 5.1 arrangement. You've still got the three front speakers, two back speakers, and subwoofer, but you also get two true side speakers for a total of seven. I guess this gives you a more distinct front-left and front-right audio angle, but I doubt I could really hear the difference.
    • I think 7.1 is something like:

      .1
      1 2 3
      4 5
      6 7
    • Re:7.1? (Score:5, Informative)

      by Rufus211 ( 221883 ) <rufus-slashdotNO@SPAMhackish.org> on Sunday January 18, 2004 @04:49PM (#8015123) Homepage
      Quick google found this review [pantherproducts.co.uk] that includes nice pictures.

      4.1: Front Left, Right; Mid Left, Right
      5.1: Front Left, Right, Center; Mid Left, Right
      6.1: Front Left, Right, Center; Mid Left, Right; Back Center
      7.1: Front Left, Right, Center; Mid Left, Right; Back Left, Right

      I always thought the mids ended up being farther back than shown in the picture though.
    • 6.1 is 5.1 plus a rear center speaker, as you guessed. IIRC 7.1 is 5.1 plus two side speakers.
    • Re:7.1? (Score:3, Informative)

      by EulerX07 ( 314098 )
      Check it out at dolby [custhelp.com].

      It's basically : Left, Center, Right; SurroundX(left,rear left, rear right, right). Total overkill IMHO, 5.1 is good enough for me.
    • Re:7.1? (Score:3, Informative)

      by geirt ( 55254 )

      In the movie world, a 7.1 audio mix usually means a 5.1 surround mix plus a conventional 2-channel stereo mix. You can synthesize a conventional stereo mix from a 5.1 surround mix, but the result may vary. That is why some movies are mixed in 7.1, which really is both a 5.1 and a stereo mix.

      When the movie is distributed on DVD or used in cinemas they use the 5.1. When the movie is sent on TV (eg. PAL with NICAM), you get the stereo mix.

  • by codifus ( 692621 ) on Sunday January 18, 2004 @04:45PM (#8015101)
    First off, 32-bit, 192 kHz wants to appeal to those very serious about audio. 32-bit cards can have a dynamic range of 144 dB. That's beyond what normal humans can differentiate, which is 120 dB if we're lucky. Not only that, but professional 24-bit cards already far exceed the needs and capabilities of most, if not every, user, with around 110 dB of dynamic range. And they're going to put this mega high tech onboard? Hmm.

    Secondly, the inclusion of Dolby. This is to appeal to the movie guys, but the real serious audio guys know that Dolby-encoded audio is, like an MP3, lossy compression. Serious audio guys will frown on that aspect.

    Incorporating these two aspects seems somewhat contradictory, which marketers always tend to do when trying to appeal to everyone. I, for one, remain highly skeptical. CD
  • by BrookHarty ( 9119 ) on Sunday January 18, 2004 @04:48PM (#8015117) Journal
    But isn't Dolby Pro Logic IIx for creating natural surround from stereo for music/movies, while EAX allows game developers to create surround-sound reflections for 3D environments?

    And Creative has breakout boxes, multiple inputs, surround emulation software, accelerated audio, EAX# and A3D compatible, support for most games, etc. (And DRM)

    I don't see this killing off creative, but will hurt its marketshare from non-gamers.

    On the flip side, Creative Labs has been quite stale, with only minor updates to its audio card line. They have been adding many other products [creative.com]; they even have mini-PCs, gfx, burners, mice, keyboards, etc..
    -
    Secondlife [secondlife.com]
  • by midifarm ( 666278 ) on Sunday January 18, 2004 @05:12PM (#8015287) Homepage
    I mean seriously... Professional recording studios record in at most 24-bit, 192kHz. So where would this 32-bit recording come from? Hasn't most of the world been dumbed down to where MP3s sound good, or at least good enough? I don't know too many people with a sound system worthy of playing anything 32-bit. Besides, what is the point of it all?

    The hottest selling gadget of the "music" world is the MP3 player and the seemingly hottest article of contention is the online music store. None of these are even close to being prepared for 32-bit let alone the sizes of the files necessary to create such a file.

    There are a lot of comments about 6.1 and 7.1 CDs or recordings, and it's all rather silly. There's no real precedent for a true recording done in surround. Would you really want the lead guitar only coming from the left rear channel? The only time I would think it would be cool would be at a live performance, but as far as I know no one has really done anything like this.

    So we're looking at several GB of needless information to recreate a CD with most likely marginal musical worth, and Intel is leading the charge? I think they're looking at their dwindling x86 market share (AMD is on the upswing; not pushing my Mac-centric views out there) and trying to find a niche by using their brand recognition. I think Dolby and DTS will have more to say as to whether this proposed solution has any legs.

    Remember, most of the manufacturers and broadcasters still haven't totally agreed upon an officially accepted HD format! DVD took too long. CD was all Sony, but took long enough for acceptance. Where does this leave the consumer? With a 32-bit 192kHz audio card in their computer, decoding 7.1 channels of information so they can play video games using samples that have been resampled from their original 16- or 8-bit formats.

    I think the word is overkill, and it's needless. Most people can't tell the difference, and for those who can, I scoff at you. I've worked with some of the best audio engineers in the world, and they wouldn't be able to hear the nuances you claim. There is "air" in higher-fidelity recordings, but most speakers can't play it back anyway. Ah well, thoughts?

    Peace

  • by Animats ( 122034 ) on Sunday January 18, 2004 @05:52PM (#8015527) Homepage
    Then we'll have the labels compress everything so that it's up near the top of the scale anyway. "Nobody wants to be the softest CD in the changer". Most popular music is compressed so hard it's badly damaged.

    The main reason you need more than 16 bits is that, during soft passages, most of the high bits are zero and you may effectively have only six- or four-bit audio. Classical recordings that aren't compressed really do suffer from this problem.

    But really, the number of people who buy classical piano recordings is small.

    If the industry can agree that the reference level for popular audio is somewhere well below 100%, this could work out. But that won't happen.
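
A tiny sketch of the "effectively only six or four bit audio" point (Python), applying the usual ~6.02 dB-per-bit rule to a passage recorded well below full scale:

import math

# A passage sitting N dB below full scale only exercises the low-order bits
# of the PCM word, so its effective resolution drops by ~1 bit per 6 dB.
def effective_bits(container_bits, level_dbfs):
    return container_bits + level_dbfs / (20 * math.log10(2))

for level in (0, -20, -40, -60):
    print(f"passage at {level:>3} dBFS in a 16-bit container: ~{effective_bits(16, level):.1f} bits")

# At -60 dBFS (a quiet classical passage) a 16-bit recording behaves like
# roughly 6-bit audio, which is the problem extra bit depth is meant to solve.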

  • Wow (Score:3, Interesting)

    by ajagci ( 737734 ) on Sunday January 18, 2004 @05:58PM (#8015557)
    The Intel High Definition Audio solution will have increased bandwidth that allows for 192 kHz, 32-bit, multi-channel audio

    This is so that my eight-eared mutant pet bat from outer space can finally have a full high-fidelity experience.

    For regular humans, of course, CD-quality audio is already overkill.
  • by havaloc ( 50551 ) * on Sunday January 18, 2004 @07:52PM (#8016297) Homepage
    I'm surprised no one has brought this up, but does it have any sort of DRM (Digital Rights Management) built into it? If so, no thanks!
