>This is largely horse pucky. FPGAs are a trade off of efficiency for generality. FPGA based
uh... the same holds for von Neumann's CPU architecture ;o)
>coprocessors only provide a benefit where the algorithm can be implemented more efficiently in
>logic than conventional code and it's done at a high enough frequency to warrant the trouble. Few
>situations on the desktop meet these criteria.
'Situations on the desktop' are quite well handled by contemporary CPUs. One can only type so fast... The desktop is not the issue, whether you throw multi-core CPUs, the Cell processor, or FPGA-based solutions at it...
>The only situation that comes to mind is video compression/decompression but that is already
The only 'desktop situation', perhaps. There's life beyond the desktop...
>accelerated quite well by something on the other end of the efficiency/generality spectrum: the GPU.
...like a bunch of engineering/scientific computations that, in places, are embarrassingly parallel and, at the same time, embarrassingly simple, so that dedicating an entire Beowulf node to each unit computation is a waste.
Just as an example, check TimeLogic's page (http://www.timelogic.com/) - a large class of bioinformatic computations can be accelerated by 2 orders of magnitude. Note that this translates into replacing Beowulf clusters
with a single FPGA-based accelerator board. Hardly horse pucky, I'd say...
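To make the 'embarrassingly parallel and embarrassingly simple' point concrete, here is a hedged sketch (all names hypothetical, and the per-pair score is a toy stand-in for a real alignment kernel like the ones TimeLogic accelerates): each database sequence is scored against the query independently, so the units of work can be farmed out to FPGA lanes, cluster nodes, or, as here, a plain process pool with zero communication between workers.

```python
# Sketch of an embarrassingly parallel bioinformatics-style workload.
# No unit of work depends on any other, which is exactly why such jobs
# map well onto FPGA pipelines as well as clusters.
from multiprocessing import Pool

def score(pair):
    """Toy per-pair score: count matching positions.
    A stand-in for a real kernel such as Smith-Waterman."""
    query, target = pair
    return sum(1 for a, b in zip(query, target) if a == b)

def score_database(query, database, workers=4):
    """Score the query against every database sequence independently."""
    with Pool(workers) as pool:
        return pool.map(score, [(query, seq) for seq in database])

if __name__ == "__main__":
    db = ["ACGTACGT", "ACGAACGA", "TTTTTTTT"]
    print(score_database("ACGTACGT", db))  # -> [8, 6, 2]
```

Because the unit computation is so cheap, the overhead of a full Beowulf node per unit dwarfs the work itself; a wide, simple FPGA pipeline streaming sequences past many scoring units is a better structural fit.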