Comment Re:Why still use blocks? (Score 1) 86

But video can easily be made to run in parallel simply by encoding several frames at once.

That idea with the 3D voxel blob is something I've been thinking about for doing simultaneous frame-rate and resolution conversion between interlaced source and destination video formats. Sadly, I still haven't managed to start coding it. Maybe it will fail for the same reason the video encoder failed. :/
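
As for the parallelism in the quote, it's trivial to sketch. Something like this (Python; encode_frame and encode_video are made-up names, and zlib is only a placeholder for a real intra-frame coder so the example actually runs):

    import zlib
    from concurrent.futures import ProcessPoolExecutor

    def encode_frame(frame_bytes):
        # stand-in for a real intra-frame encoder (DCT + quantize +
        # entropy code); zlib is only here so the sketch runs end to end
        return zlib.compress(frame_bytes)

    def encode_video(frames, workers=8):
        # with no inter-frame prediction (all-intra, or one GOP per task),
        # frames are independent and the work parallelizes trivially
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(encode_frame, frames))

With inter-frame prediction you'd parallelize over GOPs instead of single frames, but the principle is the same.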

Comment Re:Why still use blocks? (Score 1) 86

Originally, the blocks used in the JPEG image coder were put there to make sure that you could stream-encode images using reasonably cheap silicon back in the eighties.

Your idea is nonsense. Whatever the reason for blocks in JPEG, video codecs NEED blocks. Full stop. Motion prediction/estimation/compensation/vectors works on the block level. [...] H.264 introduced 8x8 and 4x4 macroblocks, in addition to the standard 16x16 macroblock, because the motion vectors on smaller blocks allow it to eliminate more temporal redundancy. VP9 is adopting larger macroblock sizes as well, but that really only helps on a small amount of HD content.

You're sort of contradicting yourself here. You say blocks are necessary, then you say fixed-size blocks are bad, because you get a better result when you discard the actual encoder block size and impose a motion-vector granularity better tailored to the data in question. So why not simply decouple the encoder and the motion estimator, and let the motion estimator run with a dynamic size, maybe even without the square/rectangle restriction? Maybe motion estimators could then be improved further if they weren't stuck with those sizes.
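
To make the decoupling concrete, here's a rough sketch (Python/NumPy; all names and thresholds are made up, this isn't any real codec): motion estimation on a quadtree of dynamically sized blocks, independent of whatever block size, if any, the transform stage uses.

    import numpy as np

    def sad(a, b):
        # sum of absolute differences between two equally sized patches
        return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

    def best_vector(cur, ref, y, x, size, search=8):
        # brute-force search for the motion vector minimizing SAD
        h, w = ref.shape
        block = cur[y:y+size, x:x+size]
        best = (0, 0, sad(block, ref[y:y+size, x:x+size]))
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                ry, rx = y + dy, x + dx
                if 0 <= ry and ry + size <= h and 0 <= rx and rx + size <= w:
                    cost = sad(block, ref[ry:ry+size, rx:rx+size])
                    if cost < best[2]:
                        best = (dy, dx, cost)
        return best

    def estimate(cur, ref, y, x, size, min_size=4, threshold=10.0):
        # split a block recursively when even the best vector matches
        # poorly, so the granularity adapts to the content
        dy, dx, cost = best_vector(cur, ref, y, x, size)
        if size > min_size and cost / (size * size) > threshold:
            half = size // 2
            return [v for oy in (0, half) for ox in (0, half)
                    for v in estimate(cur, ref, y + oy, x + ox, half,
                                      min_size, threshold)]
        return [(y, x, size, dy, dx)]

Nothing in there cares what the transform stage does afterwards; you could run a whole-image DCT on the residual and the motion side wouldn't notice.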

I simply don't agree that the quantizer/decimator must be limited to a given block size just because the motion estimator algorithm wants to use blocks to predict motion.

Comment Re:Why still use blocks? (Score 1) 86

No other source than having tried it myself, on a project that had no memory or hardware restrictions but severe code-space restrictions. Simply running the DCT on the whole image and quantizing and encoding the result with Huffman/arithmetic coding gave a visually more pleasant result at a smaller file size. Original research, I guess.
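
The core of it is only a few lines. A minimal sketch of the whole-image approach (Python with SciPy; the quantization step is an arbitrary illustrative value, and the entropy-coding stage is left out):

    import numpy as np
    from scipy.fft import dctn, idctn

    def encode_whole_image(img, q_step=20.0):
        # one 2D DCT over the entire image -- no 8x8 tiling, no block edges
        coeffs = dctn(img.astype(np.float64), norm='ortho')
        # uniform quantization; this is what a Huffman/arithmetic coder
        # would consume
        return np.round(coeffs / q_step).astype(np.int32)

    def decode_whole_image(quantized, q_step=20.0):
        # dequantize and invert; quantization error shows up as smooth
        # ringing rather than blocking artifacts
        return idctn(quantized * q_step, norm='ortho')

Most of the quantized high-frequency coefficients come out zero, which is exactly what the entropy coder feeds on.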

Comment Why still use blocks? (Score 3, Interesting) 86

I'm a bit worried about the big focus on "blocks" for such a video codec. Originally, the blocks used in the JPEG image coder were put there to make sure that you could stream-encode images using reasonably cheap silicon back in the eighties. No one really wants the blocks; they were a necessary limitation. Using the same algorithm as JPEG but removing the blocks gives a serious quality boost.

This codec will never have to run on hardware that can't handle more than 16x16 pixels at once. The lowest-spec devices that will encode these frames will be hand-held cameras, which will have more than enough RAM to buffer at least two full frames and can use a small FPGA for encoding/decoding. Everything else will be decoded by a GPU directly into the framebuffer, and likely encoded by the same GPU. Even server farms have GPUs for processing media.

There's also no issue with streaming as far as I know. Both DCT- and wavelet-based coders can packetize the important bits in a frame first and the less important bits later, so that a slow connection can still decode a degraded image even if not all bits are received. All this without splitting the image into blocks.
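
To illustrate (Python with SciPy again; the ordering and "packetization" here are purely illustrative, not any real protocol): order whole-image DCT coefficients roughly low-frequency-first, and reconstruct from whatever prefix has arrived.

    import numpy as np
    from scipy.fft import dctn, idctn

    def coeff_order(shape):
        # index pairs sorted by row + col: roughly low frequency first,
        # similar in spirit to JPEG's zigzag scan but over the whole image
        idx = [(r, c) for r in range(shape[0]) for c in range(shape[1])]
        return sorted(idx, key=lambda rc: rc[0] + rc[1])

    def decode_prefix(coeffs, order, received):
        # rebuild from only the first `received` coefficients; the rest
        # are treated as zero, giving a blurry but block-free preview
        partial = np.zeros_like(coeffs)
        for r, c in order[:received]:
            partial[r, c] = coeffs[r, c]
        return idctn(partial, norm='ortho')

    # usage: coeffs = dctn(img, norm='ortho')
    #        order = coeff_order(coeffs.shape)
    #        preview = decode_prefix(coeffs, order, len(order) // 10)

Each extra packet of coefficients just sharpens the picture; losing the tail degrades it gracefully instead of dropping blocks.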
