
More On The BBC's Codec 'Dirac'

TioHoltzman writes "El Reg is reporting on a new codec that is built on top of wavelet technology and seems to offer performance 'roughly in line with the Video Codec 9' from Microsoft. The project has been released as open source on SourceForge. This looks like it might be really interesting." (Previously mentioned a few weeks back.)
  • by Eric Smith ( 4379 ) * on Monday May 10, 2004 @07:04PM (#9111756) Homepage Journal
    The Sourceforge page says that Dirac uses arithmetic coding. Aren't there patents on arithmetic coding? I thought that was the problem with using JBIG for bilevel images, and why most free compressors use Huffman coding or the like.
    • by pjt33 ( 739471 ) on Monday May 10, 2004 @07:10PM (#9111807)
      Bear in mind that arithmetic encoding would only be patentable in the US. It could create problems for Sourceforge, but it's unlikely to create problems for the BBC.
    • by goombah99 ( 560566 ) on Monday May 10, 2004 @07:20PM (#9111879)
      Only certain implementations of arithmetic coding are patented; see here for a list [ross.net]. One of them happens to be the form specified for JPEG, which makes that mode of JPEG unusable. Presumably one could come up with another form. On the other hand, using arithmetic coding on top of a highly compressed object is not likely to improve its compression a lot (see the quick entropy check below).

      As for wavelet compression being a novel codec, what about Apple's Pixlet technology?
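
      On the entropy point above, here is a quick, minimal check, a sketch only (the word list and the use of zlib are arbitrary choices for illustration): measure the Shannon entropy of the byte histogram before and after compression. Well-compressed data looks nearly uniform, so it sits close to 8 bits/byte and leaves an entropy coder such as an arithmetic coder almost nothing to squeeze out.

        import math, random, zlib
        from collections import Counter

        def bits_per_byte(data):
            # Shannon entropy of the byte histogram: the floor for any
            # entropy coder (Huffman or arithmetic) that models bytes
            # independently of each other.
            total = len(data)
            return -sum(c / total * math.log2(c / total)
                        for c in Counter(data).values())

        random.seed(0)
        words = [b"wavelet", b"codec", b"dirac", b"video", b"bbc"]
        text = b" ".join(random.choice(words) for _ in range(20000))

        print(bits_per_byte(text))                 # low: structure remains
        print(bits_per_byte(zlib.compress(text)))  # near 8: little to gain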

      • I hope you're right, goombah99. Personally I wonder if Dirac can be incorporated into the Xiph suite to supplement the Ogg Theora codec (I googled for 'xiph dirac' and already came up with a zero-content article about the BBC competing with Xiph for the title of weirdest codec name, but nothing with more meat). I also wonder if it would be worth it, not knowing terribly much about video compression and streaming... Theora just went Alpha 2, so it's probably further along in development, if that means anything.
      • Pixlet (Score:5, Informative)

        by Anonymous Coward on Monday May 10, 2004 @08:05PM (#9112206)
        As for wavelet compression being a novel codec, what about Apple's Pixlet technology?

        Pixlet is designed for real-time editing, so it has minimal artifacts and no interframe compression. Dirac is for broadcast, so it is much more aggressive about compression and can take advantage of motion compensation and other computationally expensive compression techniques.

        You are right, however, that wavelets are not at all a new compression technology. People started playing with them at least 10 years ago, and JPEG-2000 uses wavelets for still photo compression. I think that the computational load has prevented their use in video until recently.
  • patents? (Score:5, Informative)

    by Lazy Jones ( 8403 ) * on Monday May 10, 2004 @07:04PM (#9111758) Homepage Journal
    Last time I checked, wavelet compression methods were burdened by many patents: google search [google.com]. What does that mean for users of the codec?
    • Re:patents? (Score:2, Informative)

      by Anonymous Coward
      BBC, Europe, no patents.
    • Re:patents? (Score:5, Insightful)

      by Anonymous Coward on Monday May 10, 2004 @08:07PM (#9112227)
      Most of those patents will be in the US, so they don't matter. No offence to our American cousins on the 'dot, but you so often hear "this is illegal under the DMCA", or "they've been granted a patent by the USPTO for this", or "the RIAA will come and take your first-born for this", or "the FBI will be busting down your door under the Patriot Act right about now...". It doesn't matter if they have got 'rights' under the DMCA for something, because for 96% of humanity the DMCA is a piece of meaningless toilet paper. No offence to our American friends, as I said, but as this is from the BBC, it only matters what's been done here in the UK. Until that un-democratic European nightmare inflicts more total garbage legislation onto us in the form of software patents and we get our very own version of corporate fascism. Then we'll all be stuffed.

      Looks interesting though. I think a lot of people ignore or marginalise the Beeb, when they've come out with a hell of a lot of innovation in their time. Let's hope this is one of the 'biggies' that they're responsible for.
      • Re:patents? (Score:2, Flamebait)

        by n1ywb ( 555767 )
        Helloooooo EU software patents!!! Welcome to OUR world!
      • While I am not a lawyer, I assume that SourceForge could get into some trouble for distributing a patent-infringing work even if the BBC are untouchable.
        • Re:patents? (Score:2, Interesting)

          by mrogers ( 85392 )
          Nah, they just need to distribute the code in archive form, compressed with wavelet technology. Then if you can uncompress the archive you obviously have a license to use wavelet technology, or live in a country where it's unpatentable. ;-)
        • Re:patents? (Score:5, Informative)

          by HuguesT ( 84078 ) on Monday May 10, 2004 @10:23PM (#9113221)
          What's the problem with distributing patented technology in source form? I believe this is legal. As an example, VTK [kitware.com] distributes the patented marching cubes method (among others) with no problem.

          Unisys never had a problem with any of the LZW implementations in source form. They never asked for them to be pulled from any site, and neither could they legally. What they asked was that if you were using the technology for anything other than research and study (i.e. if you really wanted to compress some file with it for redistribution), *then* you needed a license from them.

          The use of patented methods for research and study is legal; this is the whole point of patenting technology. Patenting is a publication process: disclosure in exchange for exclusive control of the technology *in applications*. The idea is that other people can study the technology and improve on it.

          If you as a user take some source code floating around on the net that implements some patented technology and add it to some application, be the application free or not, you are responsible for obtaining a license from the holder of the patent; but AFAIK the author of the code is in the clear, and so are the distributors.
      • Re:patents? (Score:3, Insightful)

        by evilviper ( 135110 )

        It doesn't matter if they have got 'rights' under the DMCA for something because for 96% of humanity, the DMCA is a piece of meaningless toilet paper.

        Well, we should put this in context, though. Sure, the US only makes up 4-5% of the world, but the largest portion of the people in the world are thinking about how they are going to get their next meal, and don't even have any devices with any form of video playback, so they couldn't care less about codecs.

        In addition, and most importantly, the USA's 5% makes

        • Re:patents? (Score:5, Insightful)

          by Goth Biker Babe ( 311502 ) on Tuesday May 11, 2004 @05:02AM (#9114851) Homepage Journal
          Sure, the US only makes up 4-5% of the world, but the largest portion of the people in the world are thinking about how they are going to get their next meal, and don't even have any devices with any form of video playback, so they couldn't care less about codecs.

          I have a reply: GSM, DVB, DAB. All of these technologies are doing well despite the US not being a market. Two of them are the de facto standard outside of the US, with some small exceptions. The other is becoming a standard.

          You've got China and India, and they are not as backward as you think. The US is less than half the size of Europe, numbers-wise. Add South America, Australasia, the Middle East, Asia etc. and I'm afraid the US is rather outnumbered by thriving markets that can afford the technology.

          the USA's 5% makes up most of the scientific research in the world

          Quote your source. This is complete bullshit. They do make up a large amount of the research, but definitely not the majority.

          Stop believing all that propaganda you keep hearing.
      • "un-democratic european nightmare inflicts more total garbage legislation"

        Oh come now, it's not that undemocratic. If it were, we wouldn't have been able to lobby against the corporations and win [newsforge.com]. Compare this with what happened in the US, where the government bent over backwards to help the corporations.
  • New codec? (Score:5, Insightful)

    by DiscordOfFive ( 778099 ) on Monday May 10, 2004 @07:04PM (#9111760) Journal
    Call me a zealot, but I think things are better off open source, doubly so in the case of codecs. I mean, it's a media encapsulation. If a codec is open, then the potential for cross-platform success is much better. Potential for profit may go down, but I'm talking innovation, not wallets.
    • Re:New codec? (Score:5, Interesting)

      by Eric Smith ( 4379 ) * on Monday May 10, 2004 @07:12PM (#9111823) Homepage Journal
      Agreed! Imagine if there were several patented forms of written language, and you were required to buy special licensed reading glasses that decoded your book. You'd need different glasses for each publisher, and you would not be allowed to make your own glasses, nor to publish your own books without licensing a special publishing system. The idea sounds so outrageously unreasonable that no one would be willing to put up with it, yet this is exactly what Microsoft, Apple, Real, and the media companies are doing to us with digital media.

      Everyone should read Stallman's essay The Right to Read [gnu.org]. When I first saw it, I thought it was so implausible that there was no need to worry about it. But since then I've observed much of the groundwork for this dystopia being laid. It is absolutely vital that consumers be educated to reject commercial technologies that take away their rights (including fair use), and instead prefer free and open technologies such as Dirac (assuming that it doesn't run into patent problems).

      • Re:New codec? (Score:3, Flamebait)

        by ron_ivi ( 607351 )
        Parent wrote: "You'd need different glasses for each publisher, and you would not be allowed to make your own glasses, nor to publish your own books without licensing a special publishing system. The idea sounds so outrageously unreasonable that no one would be willing to put up with it, yet this is exactly what Microsoft, Apple, Real, "

        You forgot to mention Adobe - the one company who actually imprisoned someone [ebookweb.org] by doing exactly what you described.

        • That's unfair on Adobe. They withdrew the complaint practically immediately when they heard that Sklyarov had actually been detained, but the FBI continued pursuing it anyway.
    • Re:New codec? (Score:4, Interesting)

      by GeekyGurkha ( 775031 ) on Monday May 10, 2004 @07:13PM (#9111832)
      The potential for profit may well go down. The BBC is paid for by a license fee, and is not-for-profit. No ad breaks; notice the lack of advertising on www.bbc.co.uk.
      Apparently the BBC is planning on allowing people to watch TV shows from its website after they are broadcast. This codec development could be related to that.
    • Re:New codec? (Score:2, Informative)

      by EvilGrin666 ( 457869 )
      Potential for profit may go down, but I'm talking innovation, not wallets.

      In theory, the BBC hasn't been all that interested in profits, being a non-profit, taxpayer-funded organisation. I was starting to wonder what I paid a license fee for, but if they carry on like this I'll be quite happy to keep paying it.

      I welcome the BBC's foray into OSS, and I hope it'll be the first of many OSS successes for them.
    • Re:New codec? (Score:3, Insightful)

      by RupW ( 515653 ) *
      If a codec is open, then the potential for cross-platform success is much better.

      Only if there's a driving force to adopt the new standard. (Witness Ogg/Vorbis.)

      The BBC do a lot to drive new technology - they've run computer and web education drives in the past, they're spending a huge amount of money on digital terrestrial channels that don't yet get audiences in order to drive adoption of that platform, and they force-fed new technology to the kids on Radio 1 with webcams, SMS votes, etc., before everyone else caught on.

      You
  • by Anonymous Coward on Monday May 10, 2004 @07:04PM (#9111762)
    Does adding a little note saying "we covered this a few weeks ago" always get the editors off the hook for posting the same article twice? ;)
    • by Anonymous Coward
      You see the new slashcode? It's got a new line in it:

      $story->{text} .= " We've mentioned this before.";

      &post( $story );
    • Ah but you notice how it almost worked. If that little disclaimer hadn't been there the FP would have said "another editor not paying attention..." etc. And for some reason, it seems to give slashdot users carte blanche to discuss everything - almost to the word - that they discussed last time. Hmm, what did I say in my comment to the previous post, and will it gain me more karma if I post again? :)
  • could be hopeful (Score:3, Insightful)

    by da2 ( 542211 ) on Monday May 10, 2004 @07:05PM (#9111769)
    I hate to state the obvious, but this could be good for open source, having a big name such as the BBC behind it. It should also mean that Linux (and other non-MS OSs) will be able to use anything the BBC develops/publishes with it. Cross-platform content on demand, anyone?
    • having a big name such as the BBC behind it, it should also mean that...

      Maybe so... but the BBC's reputation seems to be sliding down a slippery slope when it comes to being a reputable, reliable publisher/broadcaster. A telling off for the BBC [aardvark.co.nz].
      • Given the author was bankrupted for being unable to correctly add up and file tax returns for 8 years, it may pay to take his numbers with a pinch of salt, too.
          Given the author was bankrupted for being unable to correctly add up and file tax returns for 8 years, it may pay to take his numbers with a pinch of salt, too

          Are you sure that was really the reason he was bankrupted? It would appear, at least from the comments of one former US Defense Department official (as made on this TV documentary [interestingprojects.com]), there may have been a *lot* more to it than that.
    • by awol ( 98751 )
      It is even more than hopeful. One thing the Americans never really grokked (and to some extent for good reason) is the power of a great public institution. I love the BBC. It has flaws and makes mistakes, both in specific cases and at an institutional level, but the one thing that makes it great is the "public utility mandate".

      The reason for this greatness is that these mandates mean there is the potential to use its clout to formulate these kinds of public standards, codecs, browser standards, docu
  • What other methods? (Score:4, Interesting)

    by El Pollo Loco ( 562236 ) on Monday May 10, 2004 @07:06PM (#9111779)
    This type of performance is roughly in line with the Video Codec 9 which Microsoft uses in its Windows Media Player and only slightly less than the H.264 international standard.

    So what methods do these other compression algorithms employ? I couldn't figure it out from Google. It seemed as though H.264 was related to MPEG-4? Also, is there a rough guess as to how effective wavelets will be when they're better developed?
    • by steveha ( 103154 ) on Monday May 10, 2004 @07:16PM (#9111851) Homepage
      The standard way to compress both audio and video is with the Discrete Cosine Transform, or DCT. MPEG audio and video are based on DCT.

      The basic idea of DCT is to transform the data into a series of waves, which tends to concentrate the information into a few coefficients. Then you throw away part of the data, and use lossless encoding on what is left. If you just threw away pixels, the result would be obvious in an image; but if you throw away part of the wave specification data, the results are not as obvious.

      With DCT, consistent data sets compress very well (e.g., a blue sky or a white wall). Pictures with lots of sharp little edges (e.g., a field of blades of grass) compress much less well.

      My understanding is that wavelets can potentially compress even better than the DCT. However, they are not enough better to be a huge win at the moment.
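
      As a minimal sketch of the transform/discard/reconstruct pipeline described above (not MPEG's or Dirac's actual scheme: the hand-rolled 8-point DCT, the sample block, and the crude threshold are all invented for illustration):

        import numpy as np

        def dct_1d(x):
            # Unnormalised DCT-II: project the block onto cosine waves.
            n = len(x)
            k = np.arange(n)[:, None]   # frequency index
            m = np.arange(n)[None, :]   # sample index
            return np.cos(np.pi * (2 * m + 1) * k / (2 * n)) @ x

        def idct_1d(X):
            # Matching inverse (DCT-III with the usual 2/N scaling).
            n = len(X)
            m = np.arange(n)[:, None]
            k = np.arange(n)[None, :]
            Xs = X.copy()
            Xs[0] *= 0.5                # halve the DC term
            return (2.0 / n) * (np.cos(np.pi * (2 * m + 1) * k / (2 * n)) @ Xs)

        block = np.array([100., 102., 104., 108., 110., 108., 106., 104.])
        coeffs = dct_1d(block)
        coeffs[np.abs(coeffs) < 10.0] = 0.0   # the crude "throw away" step
        print(idct_1d(coeffs))                # close to the smooth original

      A smooth block like this one survives on a couple of coefficients; a block full of sharp edges would spread its energy across many coefficients and suffer far more from the same threshold.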

      steveha
      • by gidds ( 56397 ) <slashdot.gidds@me@uk> on Monday May 10, 2004 @07:53PM (#9112130) Homepage
        One of the problems with JPEG is that it treats each 8x8 block of pixels separately -- I don't think it preserves any relationship between adjacent blocks.

        This means that when information is dropped in each block (according to the compression required), the edges of blocks suffer in a way unrelated to the edges of adjacent blocks. The result -- as the quality decreases, the edges between blocks become more and more obvious, and the whole image becomes 'blocky'.

        I believe this is one way that wavelet technology improves on this -- the individual wavelets are spread over the whole image, without regard for any blocks, and so the compression degrades much more gracefully.

        As you say, the DCT converts each 8x8 block into a series of cosine waves, both horizontally and vertically in the block. Then, when it needs to reduce the space, it drops the higher-frequency coefficients first -- this is why sharp edges, with lots of high frequency information, suffer most. (You tend to find that lower-frequency coefficients try to compensate, giving the characteristic ripples near sharp edges.) Areas that are relatively smooth, with only low-frequency information to start with, suffer much less.

        Another way JPEG loses information is by colour. The human eye is much more sensitive to fine changes in brightness than it is to fine changes in colour; so the picture is transformed from RGB into a brightness channel and two colour channels, and the brightness channel gets a greater share of the limited space. It's quite interesting, if you're, er, interested in that sort of thing...
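
        A rough sketch of that brightness/colour split (the luma weights are the standard BT.601 ones; the 2x2 averaging is just one common way of giving colour a smaller share, picked here for illustration):

          import numpy as np

          def rgb_to_ycbcr(rgb):
              # Split an RGB image (H x W x 3, 0-255 floats) into one
              # brightness channel and two colour-difference channels.
              r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
              y  = 0.299 * r + 0.587 * g + 0.114 * b
              cb = 128.0 + 0.564 * (b - y)
              cr = 128.0 + 0.713 * (r - y)
              return y, cb, cr

          def subsample_2x2(c):
              # Average each 2x2 block of a chroma channel, quartering
              # the data the eye is least sensitive to; the brightness
              # channel is kept at full resolution.
              return (c[0::2, 0::2] + c[0::2, 1::2] +
                      c[1::2, 0::2] + c[1::2, 1::2]) / 4.0

        This is why heavily compressed images tend to smear colour before they visibly smear brightness.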

        • Then, when it needs to reduce the space, it drops the higher-frequency coefficients first -- this is why sharp edges, with lots of high frequency information, suffer most.

          This may be a silly question (I've not really got much knowledge of video encoding), but can't the algorithm work out if most of the information in the block is high frequency and in that case start dropping the _low_ frequency components instead?
      • by pantherace ( 165052 ) on Monday May 10, 2004 @08:10PM (#9112249)
        What is even better is fractals. If you can find it, there was a program called fiasco which could do movies and still images. It wasn't perfect, but the decode was VERY fast, and at really, really high compression rates it still looked good.

        It needed some improvements (more searching), and had some faults: around when it came out, it took a 600MHz Alpha (the fastest processor at that time, or darn near it) 24 hours to encode a 30-second clip, because it used brute force. The quality was good, though, and the files from other compression types were all much larger, with some looking worse. The problem is the difficulty of finding the fractals that will work. Recreating the image is relatively easy.

        • by Stween ( 322349 )
          I've seen images that have been encoded using fractal compression; the compression ratios achievable are pretty damn good :) I seem to recall the issue being that the encode was difficult and *very* processor intensive.

          Although I didn't see it, the lecturer talking about this at the time (he was researching in this area) said he'd seen fractal encodings of images which pull out more detail than was actually in the image that was encoded. Sounds like crazy talk to me though ;)
          • Although I didn't see it, the lecturer talking about this at the time (he was researching in this area) said he'd seen fractal encodings of images which pull out more detail than was actually in the image that was encoded. Sounds like crazy talk to me though ;)

            Unlike an 8x8 DCT (for example), fractal compression is generally scale independent. A block of pixels is represented by a contractive mapping that can be applied to ANY size block. The mapping is applied iteratively and can be proved to conver

          • he'd seen fractal encodings of images which pull out more detail than was actually in the image that was encoded.

            Sure. The fractal encoding is basically specifying the shapes of the various elements of the image, and then when you ask for more detail, it creates some. It's kind of like taking a scanned image, then converting it to a vector image (say, for Adobe Illustrator), then printing it at a large size.

            You can buy fractal filters for Photoshop, that allow you to upsample your images to a larger si
        • A Colorado company whose name I forget used fractal decomposition by hexagonal cells. The advantage of hex cells is that you don't have to worry about corner-adjacency, only face-adjacency => only one processing step per cell, and a simpler process to construct the fractal tree, so it was very fast even then. Each hexagon was either all black or all white, or had an edge. Edge-containing cells were broken into seven smaller hexagons (center + six around), and so forth.

          This system had the advantages t
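
          The hexagonal bookkeeping is more than a quick sketch can show, but the same subdivide-until-uniform idea is easy to see on square cells (a quadtree; mixed cells split into four children rather than the seven hexagons described above). The test image and uniformity check below are invented for illustration:

            import numpy as np

            def decompose(img, x0, y0, size, out):
                # Store uniform cells whole; split mixed cells into four.
                cell = img[y0:y0 + size, x0:x0 + size]
                if size == 1 or cell.min() == cell.max():
                    out.append((x0, y0, size, int(cell[0, 0])))
                    return
                h = size // 2
                for dy in (0, h):
                    for dx in (0, h):
                        decompose(img, x0 + dx, y0 + dy, h, out)

            img = np.zeros((8, 8), dtype=int)
            img[2:6, 2:6] = 1          # a white square on black
            cells = []
            decompose(img, 0, 0, 8, cells)
            print(len(cells), "cells instead of 64 pixels")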
        • by michael_cain ( 66650 ) on Monday May 10, 2004 @11:40PM (#9113708) Journal
          it wasn't perfect, but the decode was VERY fast and at really, really high compression rates... It needed some improvements (more searching), and had some faults: around when it came out, it took a 600MHz Alpha (the fastest processor at that time, or darn near it) 24 hours to encode a 30-second clip, because it used brute force...

          Indeed. The problem with the effective fractal compression algorithms is basically that, while there is a fast inverse transform to go from compressed to raw form, there is no efficient forward transform to go from a raw frame to the compressed form. There have been some exceptions -- the University of Bath once did a simple fractal compression scheme that went fast in the forward direction, but the compression rates were not very good. TTBOMK, all of the fractal compression schemes that achieve high compression rates require searches over VERY large spaces. If you can develop a fast forward transform, you may not get rich, but you'll be famous within a small circle of mathematicians.
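
          To make the fast inverse half concrete, here is a toy decoder, a sketch only: the two map parameters are invented rather than searched for, and a real fractal coder maps between image blocks instead of halves of a 1D signal.

            import numpy as np

            def decode(code, n=64, iters=20):
                # Iterate the contractive maps; any starting signal
                # converges to the same fixed point, which IS the
                # decoded signal.
                x = np.zeros(n)
                for _ in range(iters):
                    shrunk = 0.5 * (x[0::2] + x[1::2])  # contract 2:1
                    y = np.empty(n)
                    for dst, scale, offset in code:
                        # Each range block is a scaled, offset copy of
                        # the shrunk signal; |scale| < 1 keeps the map
                        # contractive, so the iteration converges.
                        y[dst:dst + n // 2] = scale * shrunk + offset
                    x = y
                return x

            # Finding parameters like these is the slow, search-heavy
            # forward direction described above.
            code = [(0, 0.4, 10.0), (32, -0.4, 40.0)]
            print(decode(code))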

    • by wmeyer ( 17620 ) on Monday May 10, 2004 @07:24PM (#9111916)
      DCT is the underlying mechanism in motion JPEG, MPEG, DV, and others. Wavelets take a different approach, as mentioned in the first reply.


      While wavelets don't offer a breathtaking advantage in data rate vs. quality, they do appear to lend themselves to a simpler implementation than DCT, and unlike MPEG, which is very intensive on the encoder, wavelets place symmetrical burdens on encoder and decoder.


      It was a core assumption in the design of MPEG that the world market for encoders was quite small (where have we heard that theme before???). Clearly, the assumption was false, and one advantage of switching to a wavelet technology would be reduced cost per unit for encoders.

      • by L1TH10N ( 716129 )
        It is relatively early days in terms of wavelet and fractal technology. Looking at JPEG DCT vs JPEG2000 wavelets vs fractals... What makes an image look bad in terms of image quality is the ability of our brain to pick out unnatural patterns in an image. The simplest way to represent an image with less information is to reduce the number of pixels that form the image. The problem is that if we try to do this too much, our brain picks up the little squares that form the image. What happened with JPEG was that inste
    • AFAIK, H.264 is a compression technology that is going to be incorporated *into* MPEG4. I believe that what people think of as "MPEG4" is actually "MPEG4 simple profile", and this is why MPEG4 has somewhat of a bad name for quality - it's rather MPEG1-ish at higher bitrates.

      H.264 is going to become "MPEG4 AVC", Advanced Video Coding. This is one of the 3 compression standards due to be approved (or maybe actually approved by now) for HD-DVD. The other 2 are WM9 (love it or hate it) and MPEG2 (for thos

  • by Power Everywhere ( 778645 ) on Monday May 10, 2004 @07:09PM (#9111803) Homepage
    Am I the only one who thinks that Dirac sounds like some sort of monster from the Dr. Who series?
  • by ejito ( 700826 ) on Monday May 10, 2004 @07:12PM (#9111827)
    BBC to Put Entire Radio & TV Archive Online [slashdot.org]

    Spam Vikings await.
  • by steveha ( 103154 ) on Monday May 10, 2004 @07:21PM (#9111890) Homepage
    This is from 1998.

    http://www.seyboldreports.com/SRIP/wavelet/ [seyboldreports.com]

    steveha
  • by syousef ( 465911 ) on Monday May 10, 2004 @07:22PM (#9111901) Journal
    Regardless of patents etc., it doesn't matter that there is something as good as a Microsoft codec. Unless there is a perceived advantage, unfortunately it isn't going to become widely adopted, because the huge mass-marketing machine that is Microsoft is pushing its technology and making it the easy-to-use default.

    You only have to look at Mozilla/Firebird, which have finally matured into reasonably solid, stable products. Netscape innovated, then lost market share, and IE got a foothold. Now it doesn't matter to most companies that there is once again a good alternative in Mozilla, because it only has a small market share. In the case of MP3, it took more of a foothold earlier on, but we're already seeing movement towards proprietary formats.

    The only way the open source community is going to do well here is to provide a single coherent product, without branches, that is trivial to install and use for the average non-technical computer user. Unfortunately the very nature of open source and free software makes this difficult, because you have to reach a consensus amongst a diverse range of very intelligent people with very different political agendas. Choosing a single united front is a huge challenge.

    Forget the codec for a moment. If I want to install the latest client operating system from Microsoft, there is only 1. (This is the ideal - I know we've had Me/98/XP running concurrently, but that's still only 3.) How many Linux distributions exist, each version with its quirks and styles? It may be fantastic from the point of view of evolution of the software. It's not going to get users switching over.
    • Forget the codec for a moment. If I want to install the latest client operating system from Microsoft, there is only 1. (This is the ideal - I know we've had Me/98/XP running concurrently, but that's still only 3.) How many Linux distributions exist, each version with its quirks and styles? It may be fantastic from the point of view of evolution of the software. It's not going to get users switching over.

      First of all, this is stupid. Imagine if we did away with all that pointless branching into different c
      • First of all, this is stupid. Imagine if we did away with all that pointless branching into different car manufacturers (who needs all of Toyota, Nissan, Ford, GM etc. each with their quirks and styles) and just had a single make and model of car? Much easier right? Huh?

        The automakers have distinct and instantly recognizable product lines for each segment of the market. None of the hundreds of Linux distributions have the visibility of Windows or the Mac or as clearly defined a target audience.

    • by in7ane ( 678796 ) on Monday May 10, 2004 @07:53PM (#9112134)
      "The only way that the open source community is going to do well here is to provide a single coherent product without branches"

      That may be true for other things, but it's definitely not true for codecs: you can have multiple codecs loaded and switch between playing files that use different ones without any problems or inconvenience (unlike, say, switching word processors back and forth). Think of how little trouble you have playing a VCD, DVD, or DivX (MPEG-1, -2, and several implementations of -4).

      Keep in mind this will also likely be driven by a HUGE (and quite good quality - it's the BBC) media library being available in this format.
    • by Joe Tie. ( 567096 ) on Monday May 10, 2004 @07:54PM (#9112141)
      I disagree. Mozilla and its relatives have a low share because most of the potential userbase doesn't have much understanding of the merits - or even of its existence. People doing video encoding, on the other hand, or at least encoding video for non-commercial use on the internet, usually have a fairly solid grasp of what the options are. And the end user isn't left with much choice: they double-click on a file or they don't get to watch the movie or TV show. I'm going to use XviD as an example. Much less known than DivX, Real, or WMV, but it's one of, if not the, most commonly used codecs for large video files on the internet.
      • End users are going to make up the majority of users for a successful product. The end user of the codec is not going to encode the video, they're going to watch it. If the video doesn't simply play in their web browser forget it, the majority won't bother - they've got busy lives and other things to do with their time.

        The fact is the majority of users don't watch DivX files. If you could click on a link and play one without installing additional software, and if they all had cheap bandwidth, they might.
    • Unless there is a perceived advantage, unfortunately it isn't going to become widely adopted

      Yes, this has always been true.

      However, perhaps we're reaching a time when being unrestricted, open-sourced, and freely available is perceived as an advantage -- by enough people to tilt the balance?

      Or at least, so we can hope... And maybe we can help to make it happen?

    • I was thinking the same thing last night watching the UT2004 installer copying all these OGG files to my drive. Oh wait....
    • by MancDiceman ( 776332 ) on Monday May 10, 2004 @08:11PM (#9112252)
      Firstly, the BBC is a much, much, MUCH bigger mass-marketing machine in the UK than Microsoft will ever be. This codec is being paid for by every household in the UK that owns a TV set, because we're the ones who pay close to US$200/year for a license which goes directly to the BBC. The BBC are open sourcing it, but the archive project everybody is talking about will only be available to the UK audience for free, and post-Hutton might not happen at all (it was a Greg Dyke baby). So, let's see - if it does happen, the entire BBC back catalogue will be freely available in this format to the entire UK, and you think this format will fail? Quite frankly, what planet are you on?

      Secondly, IE "won" the browser wars because it was the best browser. It still is. The reason? Developers still code to the IE "spec", not W3C. In addition, its page loading/rendering speed and start-up are much faster than Mozilla's. Simple fact, live with it. Mozilla is exactly what OSS is not supposed to be, particularly on Unix - it's 100% bloatware. Even on my 'nix boxes I have IE running under WINE because it's better.

      Your last two paragraphs completely miss the point of the codec. The BBC is not releasing this for Linux users. They're creating an open format that they still control. They want us to put the time and effort into making it perfect so that everybody can share it. This has always been the way the BBC has worked, from technical innovation through to its creative stance - it gets the people who pay for it involved in it. They do not care if the implementation makes Linux more viable - they will take any codec work and deploy it for the UK masses on Windows. If they decide to release that particular build of it to you for free, be grateful.

      Mark my words, within five years DIRAC will be bigger than MP3 is now.
      • Just one question: have you even used Mozilla or Firefox lately? It certainly used to be a buggy piece of garbage that I avoided for years (after Netscape 4.7), but I'd argue it's now better and more stable than IE (which isn't hard).

        Mark my words, within five years DIRAC will be bigger than MP3 is now.

        You're all too willing to predict the future. I wish I had your prescience. You may be right, but Bill Gates and Larry Ellison have been wrong to the tune of many millions, and they're the successful ones.
      • Even on my 'nix boxes I have IE running under WINE because it's better.

        You are suicidal, my friend. The first thing I do with Windows boxes I have to use is remove IE, OE, and the pesky MS Messenger. They are security threats. Hell, I wouldn't even use Windows if I had the choice, but they are paying me to. Best browser, my arse - what do you do with the thousands of popups that accumulate after an hour of browsing?
    • Regardless of patents etc., it doesn't matter that there is something as good as a Microsoft codec. Unless there is a perceived advantage, unfortunately it isn't going to become widely adopted, because the huge mass-marketing machine that is Microsoft is pushing its technology and making it the easy-to-use default.

      Agreed. Also, unless it is bundled with QuickTime, Real or Windows Media, it will be a hard sell. At the same time, remember we are talking about a video codec here, so it still needs som
    • Open Source codecs are going nowhere until they can deliver a DRM solution for internal corporate use and public - licensed - distribution.
    • unfortunately it isn't going to become widely adopted, because the huge mass-marketing machine that is Microsoft is pushing its technology and making it the easy-to-use default.

      Oh? Then tell me why most everything is available in DivX, and not WM9+ASF?

      In the case of MP3, it took more of a foothold earlier on but we're already seeing movement towards proprietary formats.

      We are? Could have fooled me. I don't remember seeing very many music files being encoded in WMA nor in RealAudio. Most of them are s

  • If I recall correctly, Ogg is just a transport that can carry any type of codec, so this makes perfect sense, in my humble opinion.
  • Common availability (Score:4, Interesting)

    by nostriluu ( 138310 ) on Monday May 10, 2004 @07:39PM (#9112021) Homepage
    There are lots of great or just good-enough codecs out there. Having an open source codec would be great, but the biggest problem today is not getting the best/freest codec; it is making it available from the average browser. From a practical point of view, it might be more worthwhile to resign oneself and exert effort making common formats (Windows, QuickTime) work well from a Linux computer (from my understanding the MPlayer plugin won't stream Windows/QuickTime).

    Not that this type of research should be discontinued, of course, but from the numerous projects I've been involved in that used streaming media, common availability was the biggest problem... we often had to produce video for Windows, Quicktime and Real. There are some environments (technophobes, corporations, and government) where you can't install a new plugin.

    In fact I think a Java based media streaming applet might be a great solution, since Java has pretty good saturation (although *sigh* there is no entirely free software or open source Java implementation at this moment).
    • Java is also way too slow for a HDTV codec (which is the only way the BBC will support HDTV, but even so, they want something close to DVD quality for this stuff) without some serious oomph put behind it.

      Anyway, do you actually want to watch TV programs on your computer? More likely you want something that has the storage and networking functions of your PC, but also makes full use of your plasma screen or projector. In which case, you're looking at a custom media-centre PC. In which case, you can use cust
      • > Java is also way too slow for a HDTV codec

        This is, of course, pure bullshit. I've written codecs in C and in Java, and the performance of Java can often be faster than C for typical codec tasks like block DCTs. Not typically as fast as Fortran, mind you, but on par with optimized Common LISP, which is also faster than GCC on x86 and PPC.

        I've never made a similar comparison for fixed-point or SIMDized code, so I'm only making the claim for floating-point.
        • This is, of course, pure bullshit. I've written codecs in C and in Java, and the performance of Java can often be faster than C for typical codec tasks like block DCTs.

          What codecs do you know of that are written purely in C?

          Most of them make extensive use of assembly to pick up performance. Since you can't have a Java applet integrated with assembly code, but you can have C integrated with assembly code, the choice is clear.

          Also, I am skeptical of your claim anyhow. I've seen lots of attempts to put codecs in

    • Re:plugin (Score:3, Informative)

      by dollargonzo ( 519030 )
      The MPlayer plugin plays whatever MPlayer plays, and MPlayer plays QuickTime [mplayerhq.hu]. I have been watching Apple trailers for quite some time.
    • by hak1du ( 761835 ) on Monday May 10, 2004 @09:01PM (#9112645) Journal
      There are lots of great or just good-enough codecs out there. Having an open source codec would be great, but the biggest problem today is not getting the best/freest codec; it is making it available from the average browser.

      Yes, and why are so few codecs available? Two reasons: (1) most codecs out there are a software engineering mess and hence hard to integrate into anything, and (2) most of them are heavily covered by patents and copyrights so people can't just write a plug-in and distribute it.

      Something like Dirac holds the promise of letting people create simple, self-contained, freely distributable players that either play stand-alone or can be easily plugged into browsers. Furthermore, the same is true for encoders, allowing people to create content more easily.

      And, unlike MPEG encoders, which have lots of weird parameters and flags, Dirac looks like it is simple enough that making high-quality encodings does not require a Ph.D.

      In fact I think a Java based media streaming applet might be a great solution, since Java has pretty good saturation (although *sigh* there is no entirely free software or open source Java implementation at this moment).

      Well, even there, a simpler format can help: something like Dirac is probably a whole lot easier to re-implement in Java than something like MPEG4.
  • by Anonymous Coward on Monday May 10, 2004 @07:39PM (#9112023)
    (This is an excerpt from the book 'Surely You're Joking, Mr. Feynman!' and is for everyone here who has, or hasn't, heard of Paul Adrien Maurice Dirac, the namesake of this new codec. It also conveniently fits in with the two articles about Japan that made their way onto Slashdot today.)

    While in Kyoto I tried to learn Japanese with a vengeance. I worked much harder at it, and got to a point where I could go around in taxis and do things. I took lessons from a Japanese man every day for an hour.

    One day he was teaching me the word for "see." "All right," he said. "You want to say, 'May I see your garden?' What do you say?"

    I made up a sentence with the word that I had just learned.

    "No, no!" he said. "When you say to someone, 'Would you like to see my garden?' you use the first 'see.' But when you want to see someone else's garden, you must use another 'see,' which is more polite."

    "Would you like to glance at my lousy garden?" is essentially what you're saying in the first case, but when you want to look at the other fella's garden, you have to say something like "May I observe your gorgeous garden?" So there's two different words you have to use.

    Then he gave me another one: "You go to a temple and you want to look at the gardens ..."

    I made up a sentence, this time with the polite "see."

    "No, no!" he said. "In the temple, the gardens are much more elegant. So you have to say something that would be equivalent to 'May I hang my eyes on your most exquisite gardens?'"

    Three or four different words for one idea, because when I'm doing it, it's miserable; when you're doing it, it's elegant.

    I was learning Japanese mainly for technical things, so I decided to check if this same problem existed among the scientists.

    At the institute the next day, I said to the guys in the office, "How would I say in Japanese, 'I solve the Dirac equation'?"

    They said such-and-so.

    "OK. Now I want to say, 'Would you solve the Dirac equation?' -- how do I say that?"

    "Well, you have to use a different word for 'solve,'" they say.

    "Why?" I protested. "When I solve it, I do the same damn thing as when you solve it!"

    "Well, yes, but it's a different word -- it's more polite."

    I gave up. I decided that wasn't the language for me, and stopped learning Japanese.
  • by hak1du ( 761835 ) on Monday May 10, 2004 @08:23PM (#9112347) Journal
    So, can anyone explain how one might use Dirac? Does it plug into transcode? Mplayer? Any other kind of Linux player, DVD ripper, or streaming server/client?
    • At this point you're not really supposed to use it, you are supposed to develop it.

      Its time will come, assuming enough people are interested and contribute. I don't know anything about audio or video compression so I already counted myself out!

  • by ca1v1n ( 135902 ) <snook.guanotronic@com> on Monday May 10, 2004 @08:33PM (#9112418)
    The great thing about wavelets is how they work at arbitrary resolution without much of a performance hit. Edges look like edges. You can basically make a general description of an image and just keep adding more detailed wavelets until you've got the compression/quality ratio you're looking for, and you can define quality however you'd like. One of the ideas for JPEG2000 is to have a field in image tags to specify how much of the image a browser should download, so you'd only have to keep one copy on the server. (By the way, where the hell is JPEG2000?)

    The above just takes advantage of spatial similarity (if a pixel is one color, its neighbors are probably similar), but you can also take advantage of temporal similarity (if a pixel is one color in this frame, it's probably a similar color in the next one). You can also do motion compensation, though when you get to that level of optimization you generally lose the symmetry between sender and receiver resource consumption. Of course, that might just be another CS dissertation away.
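
    A sketch of that general-description-plus-detail idea using the simplest wavelet, the Haar transform (generic wavelet arithmetic, not Dirac's actual filters; the sample signal is invented). One level splits a signal into averages -- the general description -- and differences -- the detail you can keep adding back or throwing away. The same averaging applied along the time axis is one way to exploit the temporal similarity mentioned above.

      import numpy as np

      def haar_level(x):
          # One analysis level: averages are the coarse description,
          # differences are the detail for this resolution.
          avg  = (x[0::2] + x[1::2]) / 2.0
          diff = (x[0::2] - x[1::2]) / 2.0
          return avg, diff

      def haar_inverse(avg, diff):
          x = np.empty(2 * len(avg))
          x[0::2] = avg + diff
          x[1::2] = avg - diff
          return x

      signal = np.array([10., 12., 11., 9., 40., 42., 41., 39.])
      coarse, detail = haar_level(signal)
      # Drop the detail band entirely: the result is blurrier but
      # degrades gracefully instead of turning blocky.
      print(haar_inverse(coarse, np.zeros_like(detail)))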
  • Grammar (Score:2, Funny)

    by Surreaberal ( 74723 )
    Please don't ever again say the words "wavelet technology," it sounds retarded--it might make someone want to introduce some "fist technology" to your "face technology."
  • From reading all these comments, it seems like nobody knows that there are already a few open source codecs. Frankly, it's getting annoying that I have to repeat myself...

    The VP3 codec [wikipedia.org] has been BSD-licensed and unencumbered by patents since September 2001, and every major Unix media player can play back VP3.

    Despite what you may have heard from doom9, VP3 is also extremely competitive with MPEG-4 (slightly better IMHO) and I know that I can convert MPEG-2 video to MPEG-4 in nearly-perfect quality, at about
