Several people have asked why anyone would want to use CUDA when an open standard (OpenCL) exists for the same thing.
Well, honestly, the reason I wrote this was that when I started, OpenCL did not exist.
I have heard the following reasons why some people prefer CUDA over OpenCL:
Additionally, I would like to see a programming model like CUDA or OpenCL replace the models most widespread in industry (threads, OpenMP, MPI, etc.). CUDA and OpenCL are both examples of Bulk Synchronous Parallel (BSP) models, which are explicitly designed around the assumption that communication latency and core counts will increase over time. Although I think it is a long shot, I would like to see more applications written in these languages so that there is a migration path for developers who do not want to write specialized applications for GPUs: they could instead write an application for a CPU that can take advantage of future CPUs with many cores, or of GPUs with a large degree of fine-grained parallelism.
Most of the codebase for Ocelot could be reused for OpenCL. The intermediate representations of the two languages are very similar; the main differences are in the runtime.
Please try to tear down these arguments; it really does help.