The popularity of these GPUs baffles me. They are hard to program and very limited in what they can do, not to mention the horrible transfers to and from main memory, yet because there is no other foreseeable technology coming in the next five years or so, they are becoming the standard for massively parallel programming on a budget. Every university and its dog has GPU projects with wild performance claims, usually obtained by measuring code they spent years optimizing for the GPU against the original code running unoptimized on a single CPU thread. Yet in the real world there are very few applications of the GPU. The memory transfer bottleneck amplifies Amdahl's law: the host-to-device and device-to-host copies are serial overhead, so no matter how fast the kernel is, the end-to-end speedup is capped by the time spent moving data over the bus (see the back-of-the-envelope sketch at the end of this post).

I work in a mission-critical supercomputing center, and it will be years before we adopt GPUs, because of the manpower required to convert existing code, the uncertainty about the future of the technology, the quasi vendor lock-in we now have with NVidia, and the fact that vendor support is not yet where it should be. Yet I am watching this technology being slowly adopted by everyone for lack of a better alternative.

Thinking about it, these are pretty sad times for supercomputing. Don't believe me? Ask the vendors what exciting new technology they have coming. They don't have any.
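To make the Amdahl's-law point concrete, here is a minimal back-of-the-envelope sketch in Python. The kernel speedup and transfer times are made-up illustrative numbers, not measurements from our center or from any particular card.

    # Back-of-the-envelope Amdahl's law with bus transfer overhead.
    # All numbers below are illustrative assumptions, not measurements.

    def effective_speedup(cpu_compute_time, kernel_speedup, transfer_time):
        """End-to-end speedup when host<->device copies are serial overhead."""
        gpu_time = cpu_compute_time / kernel_speedup + transfer_time
        return cpu_compute_time / gpu_time

    cpu_time = 1.0          # seconds of CPU compute per step (assumed)
    kernel_speedup = 50.0   # claimed speedup of the GPU kernel alone (assumed)
    for transfer in (0.0, 0.1, 0.5, 1.0):   # seconds spent copying data (assumed)
        print(f"transfer={transfer:.1f}s -> effective speedup "
              f"{effective_speedup(cpu_time, kernel_speedup, transfer):.1f}x")

With these assumed numbers, the "50x" kernel delivers about 8x end to end once the copies cost a tenth of the original compute time, and under 2x once they cost half of it. That gap between the kernel number and the end-to-end number is exactly what the press releases leave out.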