Re:Holy flawed methodology, batman... (Score 1)
- They selected 1,000,000 random pics from the web, without any selection for compression quality, so many of their "originals" were presumably already lossily compressed, which taints any quality comparison from the start. And seriously, are they trying to tell me that *google* doesn't have access to a sufficient number of raw images?
- They compared the algorithms at a PSNR around 40 dB, which is not that highly compressed (see the quick sketch after this list).
- They make a big deal out of the fact that the advantage of using their algorithm is greater for small (low-res) pics... I would assume (without any data to back me up) that low-res pics on the web tend to be more highly compressed to begin with. I'm assuming this because small pics tend not to be photographs, and because if you're using a low resolution, you're probably trying to save bandwidth and storage, so compressing harder is the logical next step.
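For anyone wondering what PSNR=40 means in practice, here's a rough sketch in Python (Pillow + numpy; "photo.png" is just a stand-in for whatever test image you have) that recompresses an image at a few jpeg quality settings and prints the resulting PSNR. On a typical photo, you generally have to crank the quality setting well above what the web actually uses before you hit 40 dB, which is exactly why I say it's not that highly compressed:

    import io
    import numpy as np
    from PIL import Image

    def psnr(a, b):
        # peak signal-to-noise ratio for 8-bit images, in dB
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    orig = np.asarray(Image.open("photo.png").convert("RGB"))
    for quality in (95, 85, 70, 50):
        buf = io.BytesIO()
        Image.fromarray(orig).save(buf, format="JPEG", quality=quality)
        rec = np.asarray(Image.open(io.BytesIO(buf.getvalue())).convert("RGB"))
        print("quality %d -> %.1f dB" % (quality, psnr(orig, rec)))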
And anyway, these are by no means the only problems with what they're doing.
- As others have pointed out, where are the standard test pictures everybody uses to compare compression quality (Lena, the Kodak set, and friends)?
- Why did they arbitrarily pick PSNR = 40 dB as the comparison point?
- Comparing with jpeg at this point is like kicking a puppy; it's a nearly twenty-year-old codec. And the comparison with j2k is meaningless (see above).
- If they're just trying to create a better alternative to jpeg without the patent hassles, they should say so. But in that case, what's wrong with promoting one of the existing patent-unencumbered algorithms?
- The main problem with jpeg is that it's used blindly for all kinds of images, and it was simply not designed for that. Suggesting that one new algorithm should take over everything jpeg does right now is idiotic. The right replacement depends on the kind of image you're trying to compress: j2k is good for large photographs at relatively high bit rates, PNG is actually very good at things like line drawings, and so on (quick demo below).
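To make that last point concrete, here's a toy sketch (same Python/Pillow setup as above; the image is synthetic, so exact byte counts will vary) that saves a simple line drawing as PNG and as jpeg and compares file sizes. PNG wins easily on this kind of content, while on a photograph the result flips the other way:

    from io import BytesIO
    from PIL import Image, ImageDraw

    # synthetic "line drawing": hard edges, two colors
    img = Image.new("RGB", (512, 512), "white")
    draw = ImageDraw.Draw(img)
    for y in range(0, 512, 16):
        draw.line((0, y, 511, y), fill="black")

    for fmt, kwargs in (("PNG", {}), ("JPEG", {"quality": 90})):
        buf = BytesIO()
        img.save(buf, format=fmt, **kwargs)
        print("%s: %d bytes" % (fmt, buf.tell()))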