Oh boy. A useless metric!
Compression ratio: sure. But the problem is that you can inflate the ratio by "losing" data. So you can obtain a high ratio while the images, as rendered, come out blurry or damaged.
Compression speed: this is just as dumb, since compression speed is partly a function of the compression ratio, partly a function of the efficiency of the algorithm, and partly a function of how much "grunt power" hardware you throw at it. So one piece of the metric is a nebulous "hardware norm" factor that can be gamed. Another piece is a function of the first factor (compression ratio), which can ALSO be gamed (and creates a bias toward lossy compression).
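To make the gaming concrete, here's a toy Weissman-style score: your compression ratio relative to a standard codec, scaled by a log-of-time term. This is a simplified illustrative variant, not the exact published formula, and the baseline values (`r_std`, `t_std`, `alpha`) are made-up placeholders. Pump up the ratio with lossy output and throw hardware at the time term, and the score soars:

```python
import math

def weissman_like(r, t, r_std=2.0, t_std=1.0, alpha=1.0):
    """Toy Weissman-style score (simplified, hypothetical baselines).

    r: your compression ratio; t: your compression time in seconds.
    r_std/t_std: a 'standard' codec's ratio and time.
    The +1 inside the logs just avoids log(0) for sub-second times.
    """
    return alpha * (r / r_std) * (math.log(t_std + 1) / math.log(t + 1))

honest = weissman_like(r=2.5, t=1.0)    # a real lossless codec
gamed = weissman_like(r=50.0, t=0.5)    # lossy "ratio" + faster hardware
print(honest, gamed)
```

The lossy-plus-fast-hardware entry wins by an order of magnitude, even though it destroyed the data to get there.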
Basically, something with a high Weissman score would be extremely lossy compression on high-powered hardware. Which negates the whole point of high-resolution viewing, since any idiot can reduce a 1920x1080 frame to 19px by 11px and then compress it. I can already take a precompressed (and lossy) JPEG file, resample it down to 19x11, then back up to 1920x1080, and reduce a 930K file to 40K (better than a 95% savings). And the image is completely indecipherable.
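The UCCT trick can be sketched without any image library at all. Here zlib stands in for JPEG, a synthetic byte string stands in for a 1920x1080 grayscale frame, and keeping every 10,000th byte stands in for resampling down to ~19x11 (all of this is illustrative stand-in code, not an actual image pipeline):

```python
import zlib

# A fake 1920x1080 single-channel "frame" with some structure in it.
frame = bytes((i * 37 + (i >> 7)) % 256 for i in range(1920 * 1080))

# Honest route: losslessly compress the full frame.
lossless = zlib.compress(frame, 9)

# "UCCT": keep every 10,000th byte (roughly 1920x1080 -> 19x11 worth
# of samples), then compress the stub. The ratio looks spectacular;
# the picture is gone.
ucct = zlib.compress(frame[::10000], 9)

print(len(frame), len(lossless), len(ucct))
```

The "compressed" UCCT output is a couple hundred bytes against megabytes of input, which is exactly why a raw ratio number tells you nothing about whether the result is watchable.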
Take a look at an original image versus the same image run through the above-described UCCT (UltraCrappyCompressionTechnique).
http://cox-supergroups.com/The...
The above image is a PNG to prevent further compression artifacts from creeping into the sample.
The top portion of the image is the original 930K JPEG file.
The bottom portion is the resampled 40K JPEG file.