Why? I've read a lot of articles here on Slashdot over the past years about the US accusing China of snooping.
Isn't it fair, when it's exposed that the US does the same on an even larger scale, for China to point out the hypocrisy?
Look, if someone does something to me, I know it happened. I have every right in the world to shame him. I have no idea how the f... you get this to require substantial external evidence. She's not writing a Wikipedia article.
But video can easily be made to run in parallel simply by encoding several frames at once.
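As a toy sketch of that claim (assuming frames can be coded independently, i.e. intra-only or one group-of-pictures per task; `encode_frame` is a stand-in, not any real encoder):

```python
# Minimal sketch: if each frame (or GOP) is an independent unit,
# encoding parallelizes trivially by mapping frames across a pool.
from concurrent.futures import ThreadPoolExecutor

def encode_frame(frame):
    # stand-in for a real per-frame encoder
    # (transform, quantize, entropy-code)
    return bytes(frame)

def encode_video(frames, workers=4):
    # frames are independent units here, so order-preserving map
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_frame, frames))
```

Inter-frame prediction creates dependencies between frames, so in practice the parallel unit would be a group of pictures rather than a single frame.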
That idea with the 3D voxel blob is something I've been thinking about for doing simultaneous frame rate and resolution conversion between interlaced source and destination video formats. Sadly I still haven't managed to start coding on that. Maybe it will fail for the same reason that the video encoder failed.
Originally, the blocks used in the JPEG image coder were put there to make sure that you could stream-encode images using reasonably cheap silicon back in the eighties.
Your idea is nonsense. Whatever the reason for blocks in JPEG, video codecs NEED blocks. Full stop. Motion prediction/estimation/compensation/vectors work at the block level. [...] H.264 introduced 8x8 and 4x4 partitions, in addition to the standard 16x16 macroblock, because motion vectors on smaller blocks allow it to eliminate more temporal redundancy. VP9 is adopting larger block sizes as well, but that really only helps on a small amount of HD content.
You're sort of contradicting yourself here. You say blocks are necessary, then you say fixed-size blocks are bad, because you get a better result when you discard the actual encoder block size and impose a motion-vector granularity better tailored to the data in question. So why not simply decouple the encoder and the motion estimator and let the motion estimator run with a dynamic size, maybe even without a square/rectangle restriction? Maybe motion estimators could then be improved further if they weren't stuck with those sizes.
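To make the decoupling concrete, here's a minimal exhaustive block-matching sketch (not any real codec's algorithm) where the matching block size is just a parameter, independent of whatever block size the transform/quantizer stage uses:

```python
# Sketch: exhaustive block matching with sum-of-absolute-differences,
# block size chosen freely by the caller. Nothing ties this size to
# the transform's block size.

def sad(a, b):
    # sum of absolute differences between two equal-size 2D blocks
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def best_vector(prev, cur, y, x, size, search=2):
    # search a small window in the previous frame for the best match
    target = block(cur, y, x, size)
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sy, sx = y + dy, x + dx
            if sy < 0 or sx < 0 or sy + size > len(prev) or sx + size > len(prev[0]):
                continue
            cost = sad(block(prev, sy, sx, size), target)
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

A real encoder would try several sizes per region and keep the one with the best rate/distortion trade-off, which is essentially what variable partition sizes buy you.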
I simply don't agree that the quantizer/decimator must be limited to a given block size just because the motion estimator algorithm wants to use blocks to predict motion.
I'm a bit worried about the big focus on "blocks" for such a video codec. Originally, the blocks used in the JPEG image coder were put there to make sure that you could stream-encode images using reasonably cheap silicon back in the eighties. No one really wants the blocks; they were a necessary limitation. Using the same algorithm as JPEG but removing the blocks gives a serious quality boost.
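As a sketch of "the same transform without the blocks": a naive orthonormal DCT-II applied across an entire frame instead of per 8x8 block. It's O(N^2) per axis, so purely illustrative, but it shows the transform itself doesn't care about block size:

```python
# Whole-frame DCT sketch: the transform works at any size;
# 8x8 blocking was a hardware/memory concession, not a math one.
from math import cos, pi, sqrt

def dct(x):
    # orthonormal 1D DCT-II
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * cos(pi * (i + 0.5) * k / n) for i in range(n))
        out.append((sqrt(1 / n) if k == 0 else sqrt(2 / n)) * s)
    return out

def idct(x):
    # matching inverse (orthonormal DCT-III)
    n = len(x)
    out = []
    for i in range(n):
        s = x[0] * sqrt(1 / n)
        s += sum(x[k] * sqrt(2 / n) * cos(pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out

def dct2d(frame):
    # separable 2D transform over the whole frame: rows, then columns
    rows = [dct(r) for r in frame]
    cols = [dct(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

In practice you'd use a fast O(N log N) DCT (or a wavelet) for full-frame transforms, which is exactly what block-free codecs like JPEG 2000 do.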
This codec will never run on hardware that can't handle more than 16 x 16 pixels at once. The lowest-spec devices that will encode these frames will be hand-held cameras, which will have more than enough RAM to buffer at least two full frames and can use a small FPGA for encoding/decoding. Everything else will be decoded by a GPU directly to the framebuffer, and likely encoded by the same GPU. Even server farms have these for processing media.
There's also no issue with streaming as far as I know. Both DCT- and wavelet-based coders can packetize the important bits in a frame first and the less important bits later, so that a slow connection can still decode a degraded image even if not all bits are received. All this without splitting the image into blocks.
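A toy illustration of that packetization idea (no real bitstream format; the "transform" here is just frame mean plus residuals, which is enough to show importance-first ordering):

```python
# Progressive-transmission sketch: send coefficients in importance
# order so a receiver holding only a prefix of the stream can still
# reconstruct a coarse approximation of the frame.

def packetize(frame):
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    residuals = [p - mean for p in flat]
    # most important first: residuals ordered by magnitude
    order = sorted(range(len(flat)), key=lambda i: -abs(residuals[i]))
    return mean, [(i, residuals[i]) for i in order]

def reconstruct(mean, packets, shape):
    # start from the flat mean, refine with whatever packets arrived
    h, w = shape
    flat = [mean] * (h * w)
    for i, r in packets:
        flat[i] = mean + r
    return [flat[y * w:(y + 1) * w] for y in range(h)]
```

Progressive JPEG and wavelet coders like JPEG 2000 do the same thing properly, ordering spectral bands or bit-planes instead of raw residuals.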
It must be difficult living in such perpetual fear.