
Comment "High level" programming environment? Sigh. (Score 2) 76

The fact that writing C and Fortran code with a message-passing library counts as a "high level" programming environment is a complete indictment of the sad state of parallel programming today. Seriously, do you want to be writing complex parallel algorithms on HPC machines with Soviet-era technology? I've tried it, and it made me want to jump out a window. Programming in this kind of environment is about as easy as programming an FPGA (hint: it's a pain in the ass).
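To put a finger on the pain: even the most trivial scatter/gather exchange forces you to match every send to a receive by hand, and one missing call deadlocks the whole job. A minimal sketch, with Python's multiprocessing pipes standing in for C plus MPI_Send/MPI_Recv (all names here are mine, for illustration only):

```python
# Sketch of hand-rolled message passing: every send must be paired with a
# receive, explicitly, just as with MPI_Send/MPI_Recv in C or Fortran.
from multiprocessing import Process, Pipe

def worker(conn):
    data = conn.recv()        # blocks until the parent sends
    conn.send(sum(data))      # explicit reply; forget this and the parent hangs
    conn.close()

def scatter_and_sum(chunks):
    results = []
    for chunk in chunks:
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(child_end,))
        p.start()
        parent_end.send(chunk)             # hand-rolled "scatter"
        results.append(parent_end.recv())  # hand-rolled "gather"
        p.join()
    return results

if __name__ == "__main__":
    print(scatter_and_sum([[1, 2], [3, 4, 5]]))  # [3, 12]
```

All of this ceremony buys you one round trip of two small messages, which is exactly the point.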

Comment Ah, but that's the point! (Score 1) 154

The entire reason CUDA works and is powerful is precisely that it is limited. Nvidia knows there is no silver bullet, and they're not claiming this is one (David Kirk has said so himself at conferences). CUDA is a fairly elegant way of mapping embarrassingly data-parallel programs onto a large array of single-precision FP units. If your problem fits the model, the performance you get via CUDA will smoke just about anything else (except maybe an FPGA in some scenarios).
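To see what "fits the model" means, here is the shape of program CUDA is built for, sketched in plain Python rather than real CUDA C (saxpy is a stock example of mine, not from the parent post): a kernel that touches only its own index, so every element could go to its own GPU thread.

```python
# The data-parallel shape CUDA targets: a "kernel" runs once per element,
# depending on nothing but its own index i.
def saxpy_kernel(i, a, x, y):
    # each (notional) thread reads x[i] and y[i] only: no neighbor dependence
    return a * x[i] + y[i]

def saxpy(a, x, y):
    # on a GPU this loop disappears: one thread per i, launched in bulk
    return [saxpy_kernel(i, a, x, y) for i in range(len(x))]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))  # [12.0, 14.0, 16.0]
```

When your inner loop looks like this, CUDA flies; when iterations depend on each other, the model pushes back hard.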

Your observation that particular models make some parts of parallel programming easy while leaving other parts hard is exactly what people need to learn to accept. If you expect a single model to make everything easy for you, trust me: stop programming right now.

You need to pick the programming model that matches the parallelism in your application; there will never be one universal solution. When you sit down to write code, ask yourself: what is the right model for this algorithm? Is it:

Data parallel (SIMD, Vector)
Message Passing
Streaming (pipe and filter)
Sparse Graph

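To make the "match the model to the problem" point concrete, here is one toy workload under two of the models above, sketched in Python (function names are mine): an independent map for the data-parallel case, and a pipe-and-filter chain of generators for the streaming case.

```python
# The same toy data run through two different parallel models.
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

def data_parallel(xs):
    # elements are independent, so the work maps cleanly across workers
    with ThreadPoolExecutor(max_workers=2) as pool:
        return list(pool.map(square, xs))

def streaming(xs):
    # pipe-and-filter: each stage lazily consumes the previous stage's output
    squared = (n * n for n in xs)               # filter 1: square
    evens = (n for n in squared if n % 2 == 0)  # filter 2: keep evens
    return list(evens)                          # sink

print(data_parallel([1, 2, 3, 4]))  # [1, 4, 9, 16]
print(streaming([1, 2, 3, 4]))      # [4, 16]
```

Neither formulation is "the" answer; each one is natural for its model and clumsy in the other.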
There are many models out there, and many languages and hardware substrates for these models that will give you orders-of-magnitude speedups for parallel programs. The key is just to sit down, think about the problem, and pick the right one (or the right combination).

The real research focus in parallel programming should be building a taxonomy of models and a unified infrastructure that supports intelligent model selection, mixing and matching, and compilation.
