2nd Multi-Format 128kbps Public Listening Test 316

technology is sexy writes "Roberto Amorim has launched his latest public listening test evaluating the performance of different audio codecs at 128kbps, among them Apple's AAC implementation (used in iTunes), LAME, the Ogg Vorbis fork aoTuV, WMA, Musepack and even Sony's Atrac3 format, which is soon to be used in their own music store. Read more on Hydrogenaudio and check out the results of prior tests. As opposed to most evaluations of audio codecs, this is a scientific test adhering to ITU-R BS.1116-1 as much as possible while still allowing everybody to participate."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday May 13, 2004 @04:51PM (#9144714)
    Never heard of it.
  • Ogg! (Score:4, Funny)

    by gekkotron ( 641131 ) on Thursday May 13, 2004 @04:52PM (#9144724) Journal
    Ogg, ogg ogg. Ogg oggity ogg ogg!

    Now that that's out of the way, let the insightful comments begin.
  • I know you can do frequency analysis on the output of these various codecs. Just compare that to the average human auditory capacity and you can get an objective measurement of the merits of these various compression methods.

    So uh, why is this necessary, exactly?
    • by trentblase ( 717954 ) on Thursday May 13, 2004 @05:00PM (#9144829)
Because "human auditory capacity" is not fully understood. Sure, we can give a standard frequency response graph, but most of these codecs take advantage of psycho-acoustic hearing models -- where certain frequencies mask other frequencies in our perception. Since this is a developing field, objective listening tests could really help determine what's working and what's not.
      • Well, then do those tests in a controlled environment independent of crap like unreliable test environments and codec bias.
        • Or better yet, they could learn once and for all that asking people for opinions on how good something sounds to them does not result in quantifiable data, and go home early.

          No matter how you encode it, an opinion is an opinion, nothing more.
        • They do that with small groups, but the point of making this study public is to get a larger sample size without having to plunk down serious cash to set up a "reliable test environment" for thousands of listeners. Also what kind of codec bias could you possibly be referring to?
          • by badasscat ( 563442 ) <basscadet75@NoSpam.yahoo.com> on Friday May 14, 2004 @03:29AM (#9148875)
            Also what kind of codec bias could you possibly be referring to?

            Apparently he doesn't realize that this is a double-blind test - meaning neither the listener nor the tester knows what codec is being presented at any given time.

            I'm taking the test now (well, not right now, taking a break) and it's about as scientific as I think you could make a public test taken in the home. Yes, the samples get compressed and then put in easily accessible folders with proper file name extensions, but you never know what you're actually listening to when you're running the testing program. All you have is a source file for comparison, then two buttons marked "1" and "2", one of which is the source again, the other a randomized codec. You never know which of the two buttons is the uncompressed source and you also never know which codec you're hearing. The results are also encrypted, so it's not as if you can just go into the results files and look at what codecs you favor.

            I suppose someone who's truly got the Ear of the Gods could listen to the samples outside of the testing program, pick various identifiable traits out of each, then listen for those traits in the testing program and vote up or down whatever codecs he or she chose, but that would be exceedingly difficult and more than a little time-consuming. I can't see how it would be worth it, especially as no single test result is going to skew the overall results to any significant degree.

            This is the first time I've ever taken a test like this and I am honestly pretty shocked at how good all of these codecs sound. I am having a really hard time even deciding which is the compressed track most of the time, and I consider myself something of an audiophile. I'm even listening in a fairly controlled environment with a good pair of headphones, at a volume loud enough to hear any background noise clearly but below any clipping whatsoever. I will be surprised if any codec really does significantly better than the others consistently when we see the final test results.
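The blind trial setup described above (two buttons, a hidden reference, randomized assignment) can be sketched in a few lines of Python; the file names and the label strings here are illustrative, not taken from the actual test software:

```python
import random

def make_trial(reference, coded):
    """One blind trial: the hidden reference and the coded sample are
    assigned to buttons "1" and "2" in random order; the answer key is
    kept separate so the listener never sees which is which."""
    pair = [("reference", reference), ("codec", coded)]
    random.shuffle(pair)
    stimuli = [sample for _, sample in pair]    # what the listener hears
    answer_key = [label for label, _ in pair]   # stored for scoring only
    return stimuli, answer_key

stimuli, key = make_trial("source.wav", "encoded.wav")
# Both files are always present in the pair; only their order is unpredictable.
```

Encrypting the stored answer key, as the real test program reportedly does, is what keeps a curious participant from peeking at their own results.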
      • by Woogiemonger ( 628172 ) on Thursday May 13, 2004 @05:06PM (#9144903)

        Because "human auditory capacity" is not fully understood. Sure, we can give a standard frequency response graph, but most of these codecs take advantage of psycho-acoustic hearing models -- where certain frequencies mask other frequencies in our perception. Since this is a developing field, objective listening tests could really help determine what's working and what's not.

        From my understanding of MP3 compression and others, the compression protocols take advantage of this frequency masking, so if humans can't hear it, it removes it. It also obviously takes into account frequency ranges of hearing. As a side note, I think it might be neat to be able to compress 30-50% better based on your personal hearing characteristics, but it'd stink if you got old and had to not only wear a hearing aid, but also start collecting MP3's all over again.

        • From my understanding of MP3 compression and others, the compression protocols take advantage of this frequency masking, so if humans can't hear it, it removes it.

          Ideally, yes. But codecs aren't perfect. Thus the need for testing.

          Ah well, in a few years, bandwidth, space, and processing power will be such that lossless compression will be the norm. Then we can argue over whether the recording engineers are competent, whether 16bit/44.1KHz is really enough to capture the subtleties, and if you real
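The frequency-masking idea this thread keeps returning to can be shown with a toy model; the 10 dB threshold-in-quiet and the 12 dB-per-band spreading slope below are invented for the example and bear no relation to any real codec's psychoacoustic model:

```python
def audible_components(levels_db, quiet_threshold_db=10.0,
                       spread_db_per_band=12.0):
    """Return indices of components loud enough to be heard: each
    component raises the masking threshold in neighbouring bands,
    and anything below the combined threshold is dropped."""
    kept = []
    for i, level in enumerate(levels_db):
        # Threshold at band i: the threshold in quiet, or the strongest
        # masking contribution from any other component, whichever is higher.
        mask = max([quiet_threshold_db] +
                   [lj - spread_db_per_band * abs(i - j)
                    for j, lj in enumerate(levels_db) if j != i])
        if level > mask:
            kept.append(i)
    return kept

# The loud tone in band 2 masks the moderately loud tone right next to
# it in band 3, but not the distant tone in band 7.
print(audible_components([0, 5, 80, 40, 0, 0, 0, 50]))  # → [2, 7]
```

An encoder built on this idea spends its bits on components 2 and 7 and throws the rest away, which is why the output's spectrum needn't match the source's.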
    • by The Clockwork Troll ( 655321 ) on Thursday May 13, 2004 @05:02PM (#9144852) Journal
      That is a great idea in theory; however, there is much debate on how psychoacoustics works, i.e. what information really "needs" to be there in music in order to be perceived.

      For example, conventional wisdom says that the human ear cannot detect sounds above roughly 20kHz, yet there is at least some anecdotal evidence that higher order harmonics shape what we hear.

      If "normal" human auditory capacity were a fully understood topic, there wouldn't be nearly as much of a need for different approaches to music compression (it would be a much simpler problem with fewer possible solutions).

    • by j3ll0 ( 777603 ) on Thursday May 13, 2004 @05:05PM (#9144881)

      Well I could be wrong, and forgive me if I've misinterpreted your post...but

      Don't all of these compression algorithms rely on psychoacoustic modeling to remove 'extraneous' information from the bitstream?

      If that is correct, and the algorithms are implemented correctly, then really what we are looking for is the best perceived result.

      Just because the output meets the algorithm input->output specs doesn't mean it's the best output as perceived by humans.

      Maybe think of it as optimizing sort routines? Yep, bubble-sort or b-tree still output a sorted list, but the perceived value is that the b-tree is better because it performs its function more quickly.

      This isn't an exercise in getting the frequencies algorithmically correct - the end result has to be listenable.

      Humans are analog devices...
    • by Anonymous Coward on Thursday May 13, 2004 @05:11PM (#9144958)
      The purpose of a "perceptual" encoder such as MP3 is to remove the frequencies one cannot perceive. The frequency graph therefore need not be the same as the original and yet the encoded version may not be distinguishable from the original.

      Also, a frequency plot tells us nothing about the phase or frequency distribution at certain times in the signal. I can make a sine sweep that would match exactly the spectrum of a pop song, but obviously would sound nothing like it.

      There are ways of objectively measuring the performance of perceptual encoders, but frequency analysis isn't really one of them.

    • by tashanna ( 409911 ) on Thursday May 13, 2004 @05:13PM (#9144992)
      Frequency analysis only gets you part way there. For those who didn't look around at the articles (I'm not referring to you, of course; just some hypothetical /. reader), there are time domain audio effects that are not visible on FFT plots. An example of this is pre-echo. With pre-echo you get an echo of an upcoming sound (like a drum beat) before the actual sound happens. This can happen when linear-phase FIR filters are used, but is also an artifact of some frequency domain encoder/decoder systems. The FFT is only part of the story.
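The pre-echo point is easy to demonstrate: a magnitude spectrum says nothing about when a transient occurs. A stdlib-only sketch (the naive DFT is O(n²), which is fine for eight samples):

```python
import cmath

def dft_magnitudes(x):
    """Magnitude spectrum of a signal via a naive DFT."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A click at the very start vs. the same click at the very end:
early = [1.0, 0, 0, 0, 0, 0, 0, 0]
late = [0, 0, 0, 0, 0, 0, 0, 1.0]

# Both spectra are flat (magnitude 1.0 in every bin): the plot cannot
# tell you when the click happened, which is exactly where pre-echo hides.
print(dft_magnitudes(early))
print(dft_magnitudes(late))
```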
    • ...because your hearing doesn't work like that. The sound quality perceived can't easily be told from frequency graphs and so on (ever heard of the PWB effect [demon.co.uk]?)
    • The different formats don't simply limit the frequencies stored. A given compression format will change the sound in different ways depending on what input soundfile is. Some codecs perform well with some types of sounds, but poorly with others (for example, the compression your cell phone uses is good at speech but lousy at music).

      Also, all frequencies aren't of equal importance to our ears. Our hearing is best in the middle range (near where the important elements of speech are), and tapers off above
    • So uh, why is this necessary, exactly?

      hmm, the whole point of the "lossy" compression algorithms is to filter out information the human ear/brain is unable/unwilling to hear (psychoacoustics, ...). therefore just comparing the decoded signal with the original won't do, because the "subjectively" heard difference is what matters.

      and adhering to a certain norm and "scientific method" when comparing those codecs can't be bad...

      so what is it exactly that you find unnecessary??
    • I don't know about you, but I don't listen to my music on a spectrum analyser.
    • A good psychoacoustic encoder will generate spectral graphs that look nothing like the source, because they threw all sorts of inaudible stuff away. The tuning of audio based on visual information is flawed, yet strangely prevelant (like in the LAME --r3mix setting). We don't look at music, man! We hear it!
  • No matter *what* (Score:2, Insightful)

    by puargsss ( 731990 )
    128kbps doesn't cut it. It's an absolutely lossy, disgusting bitrate, no matter what it's in. They should test similar file sizes instead of by bitrate, to determine whether something is good or not- this gives a better impression of quality vs size, instead of a purely comparison-based test.
    • They should test similar file sizes instead of by bitrate

      Uhh, if they are comparing the same sample at the same bitrate, the files will be the same size. I'm not even going to respond to the other assertions... how is this possibly insightful?

    • by mrgreen4242 ( 759594 ) on Thursday May 13, 2004 @05:06PM (#9144894)
      128kbps doesn't cut it. It's an absolutely lossy, disgusting bitrate, no matter what it's in. They should test similar file sizes instead of by bitrate, to determine whether something is good or not- this gives a better impression of quality vs size, instead of a purely comparison-based test.

      Uh, if the sample is the same length, and the bitrate is the same, won't the file size be the same as well? A 10 second sample at 128 kb per second should be 1280 kb regardless of the format, no?

      And, just FYI, MOST people (something like 95% of listeners) cannot tell the difference between a 128kbps sample and the original. I generally can't, even with decent headphones on.

      I think that all you compression elitist snobs work for HD manufacturers, trying to get me to buy a 250GB drive to store the same amount of music as my 60GB will hold!

      • something like 95% of listeners cannot tell the difference between 128kbps sample and the original.

        Amen.

        I think that all you compression elitist snobs work for HD manufacturers, trying to get me to buy a 250GB drive to store the same amount of music as my 60GB will hold!

        No, I personally think that most of them (not all of course) are just experiencing the placebo effect...
      • something like 95% of listeners cannot tell the difference between 128kbps sample and the original

        From what orifice did you grab that stat? I think the fact that you are seeing more and more 192kbps, and even a large minority of 256kbps, mp3's on file sharing networks is at odds with that statement. In fact there isn't a codec out there that performs better than 'decent' at 128kbps. To get true transparency most lossy algorithms need somewhere north of 200kbps VBR. LAME and Vorbis both do extremely well at those rate
    • by rsidd ( 6328 ) on Thursday May 13, 2004 @05:06PM (#9144901)
      a given audio stream, at a given bitrate, for a given length of time, always has the same filesize. What else do you think bitrate measures?

      BTW, I think the difference between MP3 and Vorbis at 128 kb/s is perfectly noticeable. MP3 sounds rather bad, vorbis sounds pretty good. And the point is precisely to tell which format sounds best, so you don't want to do 512 kb/s bitrate where all formats sound close to CD quality.

      • > a given audio stream, at a given bitrate, for a given length of time, always has the same filesize.

        Actually, for the test the MS codec is a VBR at 128 so the file size will not be the same.
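The arithmetic in this sub-thread can be written out explicitly; 1 kbps is taken as 1000 bits per second here, and container headers and tags are ignored:

```python
def cbr_size_bytes(bitrate_kbps, seconds):
    """Audio payload size for a constant-bitrate stream."""
    return bitrate_kbps * 1000 * seconds // 8

# 10 seconds at 128 kbps is 1280 kilobits = 160,000 bytes,
# no matter which codec produced the stream.
print(cbr_size_bytes(128, 10))  # → 160000
```

VBR is the caveat raised just above: a nominal "128 kbps" VBR encode targets that rate only on average, so actual file sizes drift with the material.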
    • Most of the codecs are 'optimized' for 128kbps. I remember reading somewhere that the WMA codec sucks above 128kbps. In addition, it's one of the most popular formats for distributing music (iTMS, etc...), and is the default setting on most encoders. Apple recommends that all AAC files be encoded at 128kbps (although users are allowed to select any bitrate they wish with a few clicks of the mouse)
    • by Jugalator ( 259273 ) on Thursday May 13, 2004 @05:11PM (#9144965) Journal
      No matter *what*?

      Not even if it's about average quality speakers?
      Not even if it's about some rather cheap speakers?

      I can't say I hear much of a difference with modern codecs, and I own some average speakers. Maybe 128 kbps mp3 can sound bad (although that depends a lot on the kind of music), but that's an aging codec anyway. I think encoded files in the 192 - 256 kbps range are best, and 128 kbps ogg's are often acceptable, especially with the DFX plugin (or similar) for Winamp to compensate for shortcomings in compressed formats.

      I'd definitely not call 128 kbps in modern codecs "disgusting". In ogg's I've found it to be roughly on par with 160-192 kbps mp3's and that's perfectly fine for my ears.
    • 128kbps doesn't cut it. It's an absolute lossy, disgusting bitrate, no matter what it's in. They should test similar file sizes instead of by bitrate, to determine whether something is good or not
      perhaps you'd like to try my new lossy codec, then. it throws away all the music save for the first note, but then that gets encoded at 3 Meg/sec.
    • huh? you do understand that bitrate==filesize? or don't then.

      like, kb-per-second. you multiply that with the time(of the song) and you'll end up with a plain kb value that *simpsalapimpsa* is the filesize. so in effect they *were* testing what you wanted, in what format will a certain size(128kbps * songtime) provide the result that sounds best.

      insightful my ass... please, if you don't understand something, please don't go on commenting on it. besides, 128kbps is enough for most purposes on some of the formats
  • Speakers (Score:3, Insightful)

    by yuckymucky ( 591284 ) on Thursday May 13, 2004 @04:59PM (#9144809)
    How do you base a listening test on the web? People with crappy speakers are going to say that all of them sound bad, yet the people that have the better speakers are going to have the better responses. This should be something that is done in a controlled environment so that the hardware that is playing back the audio is standard.
    • Re:Speakers (Score:2, Insightful)

      by vxvxvxvx ( 745287 )
      So what? Sure, the people (majority) with crappy speakers will give the same rating to everything, and if they were the only ones the test would tell you that. However, as the results aren't all the same obviously some people are taking the test who have better speakers. In the end, I'd much rather have the test done on a wide range of speakers to rule out the speakers favoring a certain codec.
  • by rnbc ( 174939 ) on Thursday May 13, 2004 @05:00PM (#9144822) Homepage
    Yes... certainly this kind of listening test is important to assess the capabilities of each codec.

    But in the real world other factors may be more important when choosing a codec, like for example general acceptance, freely available code and specs, and a large content base available.

    You see: performance will always increase in all codecs with time... so this kind of testing is only a minute factor amongst others.
    • But in the real world other factors may be more important when choosing a codec, like for example general acceptance, freely available code and specs, and a large content base available. ...whether or not the world's largest operating system vendor embraces said format, extends said format and includes said format in its media player, which comes as standard on its 95% personal computer market share...
    • [It] may be more important when choosing a codec, like for example general acceptance, freely available code and specs, and a large content base available.

      Sure, we'd never want what's subjectively best but should accept what's generally available. I opt that you listen to music through the telephone for the rest of your life.

      I'll set up the juke box in the sky you seem to crave. I'll rig a little server up that will answer the phone with voice recognition. Any song you ask for will be searched for, download

      But in the real world other factors may be more important when choosing a codec, like for example general acceptance, freely available code and specs, and a large content base available.

      Large content base depends upon acceptance. Acceptance often depends mainly upon the quality of the codec (at particular bitrates).

      performance will always increase in all codecs with time... so this kind of testing is only a minute factor amongst others.

      People don't care what codec will be decent 2 years from now, people wa

  • by jfroot ( 455025 ) <darmok@tanagra.ca> on Thursday May 13, 2004 @05:03PM (#9144867) Homepage
    Why does anyone still use 128kbps? I hate it when I download music (legal ;) and the only bitrate available for the song I want is 128. With 200GB+ hard disks being so affordable these days and everyone having high speed, I think everyone should encode their (mp3||ogg||aac) at 192 or 256.
    • Any encoder sounds great if you throw enough bits at it; the trick is sounding good when the bit reservoir is shallow.

      Same deal for MPEG-2 encoders, they all look great at 7 Mbit+/sec but the real test is 3-4 Mbit/sec.
    • Because when you are dealing with portable digital audio, storage still costs.
    • Not everyone runs out to buy shiny new drives the second they're released, only to pay triple what they're worth in 2 weeks. I generally use 5-10 gig hard drives, so I stick with 128. Besides, 128 sounds just fine to me. If you don't like 128, go ahead and rip and share at 192 or 256 and see how many people are interested.
    • VBR? (Score:3, Informative)

      by twitter ( 104583 )
      With 200GB+ hard disks being so affordable these days and everyone having high speed, I think everyone should encode their (mp3||ogg||aac) at 192 or 256.

      Vorbis does variable bit rate and you set the quality you want. That way you don't waste lots of bits where they are not needed. My 4MB ogg file sounds as good or better than my little brother's 6MB mp3. The difference is more songs on my 256MB compact flash card. Yes, it's easy to play that music on my Zaurus, which cost about as much or less than DR

    • That was my first reaction: who uses 128? What I want is a blind test with experts and thousand-dollar audio systems to find at what point the experts are no longer able to tell the difference between the compressed and uncompressed audio.

      I use `lame --preset standard`, which ends up being VBR with a max of 110-290, hovering mostly around the 190-210 range. It's one of the reasons I don't use OGG; it doesn't have any presets so I'm supposed to just decide on a good level myself. I'd rather use something th
  • There used to be a great site called r3mix.net, which, IIRC, did some spectral analysis on some of the assorted compression algorithms (trying various different options for them). It was focused on the LAME mp3 encoder, but also looked at a few others.

    They also had some great forums for info on music ripping/preferred encoding methods/CD burning/etc.

    Now, that URL goes to some lame "sponsored mp3 links" site.

    Anyone know why r3mix.net died and if there's any new site that makes a good replacement?
    • by DeeKayWon ( 155842 ) on Thursday May 13, 2004 @05:51PM (#9145391)
      r3mix.net died because people actually did objective analysis of his recommended LAME settings and found they were crap. IIRC, the main guy behind it wasn't very accepting of criticism. Plus, he was a message board spammer [arstechnica.com].

      The best replacement for r3mix.net in my opinion is HydrogenAudio [hydrogenaudio.org] . The forums are frequented by a lot of professionals, as well as developers of LAME, FLAC, Nero AAC, Musepack, Wavpack, and other codecs.

      • Are these the same professionals who claim that one form of digital connection is superior to another?

        When working at Sony I discovered that audio professionals were still caught in the analog days, and would, for instance, insist on fiber optic over another purely digital data link, claiming something was lost in the sound and they could "hear" it.

        Of course that's ridiculous; once converted to digital, either all the ones and zeros get from one piece of equipment to the next or no
    • by JebusIsLord ( 566856 ) on Thursday May 13, 2004 @06:25PM (#9145746)
      The r3mix tuning (--r3mix), while a small step forward, was inherently flawed because of his insistence on tuning based on pictures instead of actual listening tests. As a result, the --dm-presets were invented and improved by Dibrom (the HydrogenAudio founder) along with a multitude of testers. Eventually those were included in LAME as the --alt-presets (and in the latest version they just replace the normal --presets). In short, HydrogenAudio is THE place to go for this stuff now.
  • here [rarewares.org]
  • by Anonymous Coward on Thursday May 13, 2004 @05:22PM (#9145079)
    When you listen to compressed audio over inexpensive speakers / headphones, you can't hear the difference. With my Sony Studio Monitor headphones, I lost the difference at about 250k with mp3, so I started using 320K as that was the best at the time. Then I bought $2000 Martin Logan Mosaic Speakers, and the original CD was clearly better than even the 320K bitrate. So now I only do lossless compression. That's fine at home, but in any other environment, there's usually so much noise and distractions that even if you had excellent headphones or speakers, you wouldn't appreciate that little difference lossless brings over 256K or even 128K.
    • Myth.

      Compression artifacts are more audible on headphones, but nigh any set will do just fine provided a quiet listening environment.

      You, my friend, are a victim of placebo. Go to HydrogenAudio and perform some double-blind analysis on yourself. You'll find that anything over LAME --preset standard (roughly 192kbps VBR) sounds exactly the same on any equipment.

      • If I may politely disagree with both of you..... I think it may be your soundcards

        I recently bought electrostatic headphones (Stax), complete with valve amplifier, which are pretty much the ultimate headphone reference.
        I rip to FLAC and then use dBpoweramp to go to LAME --preset standard.

        The thing which I did which made a BIG difference was to buy a high quality second hand DAC (D to A converter) for $100 made by Meridian. Because hi-fi people are sad and lonely and spend all their money on new stuff, these D
    • You know... If you use Sake soaked Wooden Speakers [slashdot.org] it would sound even better... (and probably cost at least 3x as much)
  • by Gumber ( 17306 ) on Thursday May 13, 2004 @05:31PM (#9145172) Homepage
    I'd read the thread when they were discussing which version of Apple's AAC codec to use for the test, and concluded based on a few samples that the new version was subpar.

    If they'd included both versions of iTunes/QuickTime in this test, perhaps they could have helped shame Apple into fixing what they broke.
  • Why was Vorbis forked?

    And more importantly, why didn't they take advantage of the chance to give it a better name than Vorbis? "aoTuV"? WTF?
