Re:And the moral of the Story is...
A GPU is better suited than a CPU to some kinds of massively parallel tasks, like video encoding. After all, you're applying various matrix transforms to an image, with a bunch of funky floating point math to whittle all that transformed data down to its most significant/perceptible bits. GPUs are supposed to be really, really good at this sort of thing.
And there's your problem.
An H.264 encoder takes a frame of video and splits it into 16x16-pixel macroblocks. Each macroblock depends heavily on those surrounding it, both spatially and temporally. For an intra block, the content of the current block is predicted from the decoded content of the blocks above it and to its left. For inter blocks, a previously decoded frame is used as a reference. The decoder has no idea what the original source file looked like, so any prediction made during encoding must be based on the decoded frames. This creates massive data dependencies in the encoder: before the current block can be encoded, a whole cascade of neighbouring blocks has to be encoded (and decoded) first.
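To see why that kills GPU parallelism, here's a minimal sketch in plain C of the diagonal "wavefront" schedule that the top/left/top-right dependencies force on you. (Toy frame size, and encode_mb is a hypothetical stand-in; none of this is x264's actual code.)

#include <stdio.h>

#define MB_W 8  /* macroblocks per row (toy frame) */
#define MB_H 6  /* macroblock rows                 */

/* Stand-in for "encode one 16x16 macroblock". A real encoder
 * reads the *reconstructed* pixels of the left, top, top-left
 * and top-right neighbours here, so all four must already be
 * fully encoded and decoded. */
static void encode_mb(int x, int y)
{
    (void)x; (void)y;
}

int main(void)
{
    /* Put block (x, y) on diagonal d = 2*y + x. Its left
     * neighbour lands on d-1, top-right on d-1, top on d-2,
     * top-left on d-3, so every dependency sits on an earlier
     * diagonal. Only blocks sharing a diagonal can run in
     * parallel: at most ~min(MB_H, MB_W/2) of them. */
    for (int d = 0; d <= 2 * (MB_H - 1) + (MB_W - 1); d++) {
        printf("wave %2d:", d);
        for (int y = 0; y < MB_H; y++) {
            int x = d - 2 * y;
            if (x >= 0 && x < MB_W) {
                encode_mb(x, y);
                printf(" (%d,%d)", x, y);
            }
        }
        printf("\n");
    }
    return 0;
}

Run it and you'll watch the waves ramp up to four blocks wide, then back down. Scale to 1080p and the grid is 120x68 macroblocks, so the widest wave is only about 60 independent blocks. That's pitiful occupancy for hardware that wants tens of thousands of threads in flight.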
Many, many people have come into #x264dev and tried implementing GPU-accelerated encoding, some of them with impressive backgrounds. All of them left once they realised how difficult the problem is.