writes: "It is not rocket science. The solution has been staring us in the face from the beginning. Here are two paragraphs from the article:
The solution to the parallel programming problem is to do away with threads altogether. There is a way to implement parallelism in a computer that is 100% threadless. It is a method that has been around for decades: programmers have used it to simulate parallelism in applications such as neural networks, cellular automata, physics simulations, video games, and even VHDL simulators. Essentially, it requires two buffers and an endless loop. While the parallel objects in one buffer are being processed, the other buffer is filled with the objects to be processed in the next cycle. At the end of the cycle, the buffers are swapped and the cycle begins anew. Two buffers are used to prevent race conditions. This method guarantees 100% deterministic behavior and is thus free of the timing-dependent problems associated with multithreading.
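As a rough sketch only (in C, with illustrative names and sizes, and a one-dimensional rule-110 cellular automaton standing in for the parallel objects; none of the specifics below are from the article), the two-buffer loop might look like this:

    #include <stdio.h>

    #define N 64        /* number of parallel cells (illustrative) */
    #define CYCLES 32   /* number of update cycles to run */

    int main(void) {
        unsigned char a[N] = {0}, b[N] = {0};
        unsigned char *cur = a, *next = b;   /* the two buffers */
        cur[N / 2] = 1;                      /* seed one live cell */

        for (int cycle = 0; cycle < CYCLES; cycle++) {
            /* Process every object in the current buffer; results go only
               into the next buffer, so no cell ever reads a value written
               during the same cycle -- hence no race conditions. */
            for (int i = 0; i < N; i++) {
                int p = (cur[(i + N - 1) % N] << 2) | (cur[i] << 1)
                        | cur[(i + 1) % N];
                next[i] = (110 >> p) & 1;    /* rule-110 update */
            }
            /* End of cycle: swap the buffers and begin anew. */
            unsigned char *tmp = cur; cur = next; next = tmp;
        }

        for (int i = 0; i < N; i++) putchar(cur[i] ? '#' : '.');
        putchar('\n');
        return 0;
    }

Because every object reads only from the current buffer and writes only to the next one, the order in which objects are processed within a cycle cannot change the outcome; that is where the determinism comes from.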
Speed and Transparency
The two-buffer/loop mechanism described above works great in software, but only for coarse-grain objects such as neurons in a network or cells in a cellular automaton. For fine-grain parallelism, it must be applied at the instruction level; that is to say, the processor instructions themselves become the parallel objects. Doing so in software, however, would be much too slow. What is needed is to make the mechanism an inherent part of the processor itself, by incorporating the two buffers on the chip and using internal circuitry for the looping. The processor can be either single-core or multicore. In a multicore processor, the cores would divide the instruction load in the buffers among themselves in a way that is completely transparent to the programmer. Adding more cores would simply increase processing power without any need to modify the programs.
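The article leaves the on-chip scheduling circuitry unspecified, so any concrete rendering is speculative. As a software analogy only, with POSIX threads standing in for cores and invented names (CORES, gate, update), the transparent division of a cycle's load might be sketched as follows; note that scaling up means changing only the CORES constant, while the update code is untouched:

    /* Build with: cc -pthread example.c */
    #include <pthread.h>
    #include <stdio.h>

    #define N 64        /* parallel objects per buffer (illustrative) */
    #define CYCLES 32
    #define CORES 4     /* adding cores means changing only this */

    static unsigned char bufs[2][N];
    static pthread_barrier_t gate;

    /* Rule-110 update of one cell, reading only the current buffer. */
    static unsigned char update(const unsigned char *cur, int i) {
        int p = (cur[(i + N - 1) % N] << 2) | (cur[i] << 1)
                | cur[(i + 1) % N];
        return (110 >> p) & 1;
    }

    static void *core(void *arg) {
        int k = (int)(long)arg;
        int lo = k * N / CORES;              /* this core's slice */
        int hi = (k + 1) * N / CORES;
        for (int cycle = 0; cycle < CYCLES; cycle++) {
            const unsigned char *cur = bufs[cycle & 1];
            unsigned char *next = bufs[(cycle + 1) & 1];
            for (int i = lo; i < hi; i++)
                next[i] = update(cur, i);
            /* End of cycle: wait for all cores; the buffers then
               swap roles by cycle parity. */
            pthread_barrier_wait(&gate);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[CORES];
        bufs[0][N / 2] = 1;                  /* seed one live cell */
        pthread_barrier_init(&gate, NULL, CORES);
        for (long k = 0; k < CORES; k++)
            pthread_create(&t[k], NULL, core, (void *)k);
        for (int k = 0; k < CORES; k++)
            pthread_join(t[k], NULL);
        for (int i = 0; i < N; i++)
            putchar(bufs[CYCLES & 1][i] ? '#' : '.');
        putchar('\n');
        return 0;
    }

The barrier plays the role of the end-of-cycle swap: no core starts the next cycle until every core has finished the current one, so the result is identical to the single-core version no matter how the load is divided.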
Related article: Parallel Computing: Why the Future Is Non-Algorithmic"