Derailing this a little, but speaking of JPEG: given that the 2nd stage (Huffman coding) is not really the best the world has to offer, even with optimized tables, how come a simple alternative hasn't surfaced: just plug in a better compressor (bzip2, lzma, you name it)? We're talking double-digit percentage improvement, which could even be applied to preexisting JPEGs with no generation loss. Is the very idea of a different 2nd stage patented by the StuffIt people, or what? (That's the only thing a quick Google search turned up.)
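The "no generation loss" part is the easy bit to sketch: if the wrapper just re-encodes the bytes with a stronger general-purpose compressor, decompression gives back the exact original file. A minimal sketch (the function names are made up for illustration; note that naively lzma'ing a whole JPEG yields only small gains, since Huffman output is already close to random, so the double-digit savings really require decoding back to the DCT coefficients and re-modeling those):

```python
import lzma

def recompress(jpeg_bytes: bytes) -> bytes:
    # Hypothetical 2nd-stage swap: wrap the JPEG's bytes in LZMA.
    # A real implementation would first undo the Huffman stage and
    # compress the raw DCT coefficients instead.
    return lzma.compress(jpeg_bytes, preset=9)

def restore(blob: bytes) -> bytes:
    # Bit-exact round trip: no generation loss by construction.
    return lzma.decompress(blob)

# Stand-in for real JPEG bytes read from disk.
original = bytes(range(256)) * 64
assert restore(recompress(original)) == original
```

The round-trip assertion is the whole point: whatever the 2nd stage is, as long as it's lossless, existing JPEGs can be migrated and migrated back freely.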
It'd be dead simple to implement (pretty much just combining existing pieces) in cameras, browsers, viewers, and servers (transparently serving the recompressed variant to clients that declare in their HTTP headers that they understand it), and it'd yield a relevant saving in both storage and bandwidth (which isn't necessarily free or cheap once you consider cellular links).
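The server-side negotiation could piggyback on the usual `Accept-Encoding` mechanism. A rough sketch, where `jpeg-lzma` is a hypothetical (unregistered) encoding token and the filenames are placeholders:

```python
def pick_variant(accept_encoding: str) -> str:
    # Parse the client's Accept-Encoding header into bare tokens,
    # ignoring any quality values (";q=...").
    tokens = {t.strip().split(";")[0] for t in accept_encoding.split(",")}
    # Serve the recompressed file only to clients that opted in;
    # everyone else transparently gets the plain JPEG.
    return "photo.jpg.xz" if "jpeg-lzma" in tokens else "photo.jpg"

assert pick_variant("gzip, jpeg-lzma") == "photo.jpg.xz"
assert pick_variant("gzip, br") == "photo.jpg"
```

Old clients never see the new format, so nothing breaks during a gradual rollout.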
P.S.
In case someone reads this and decides to implement it, let me know, and make it F/OSS or Karma's gonna be a bitch.