Comment Re:I dont understand this statement: (Score 1) 208
>> If the problem decomposes down most neatly into one, three or 6789 threads, then design and write the implementation that way.
Agreed. But the problem is that most programs are inherently serial in nature. Intel and others are targeting multi-core everywhere, not just the highly parallel scientific community, but the average desktop as well.
These Intel tools try to solve the problem by letting you write an apparently single-threaded application that the compiler turns into something multi-threaded under the covers. There's no harm in not exploiting the extra parallelism available, but you're missing out on some potential performance if you don't.
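To illustrate the idea (this is just a sketch, not how Intel's compiler actually does it, and it uses Python threads purely as a stand-in for compiler-generated ones): the programmer writes the serial version, and the tool quietly produces something equivalent to the parallel version.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = list(range(8))

# Serial version: what the programmer writes and reasons about.
serial = [square(x) for x in data]

# Parallel version: roughly what an auto-parallelizing tool might
# emit under the covers -- same results, work spread across threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))

assert serial == parallel
```

The key property is that the observable result is identical, so the programmer never has to think about the threads at all.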
The other approach is to make programmers think about the parallelism themselves. In my experience, most programmers just aren't good at this. Some argue that we need better primitives than semaphores, queues, and the like, but I think it's human nature to think serially, and that "thinking parallel" all the time just isn't going to happen.
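For the sake of concreteness, here is the kind of primitive-level code the "make programmers do it" approach demands (a minimal producer/consumer sketch using a thread-safe queue; the doubling of each item is just placeholder work):

```python
import queue
import threading

q = queue.Queue(maxsize=4)
results = []

def producer():
    for i in range(5):
        q.put(i)        # blocks if the queue is full
    q.put(None)         # sentinel: no more work coming

def consumer():
    while True:
        item = q.get()  # blocks until an item is available
        if item is None:
            break
        results.append(item * 2)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # [0, 2, 4, 6, 8]
```

Even in this tiny example you have to get the blocking semantics and the shutdown sentinel right; scale that up to a real application and it's easy to see why most programmers struggle with it.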
Personally, I don't think this will be an issue at all for several more years, because systems typically run an SMP-aware OS with lots of processes and threads going at once anyway (just run ps -ef, or open the Windows Task Manager, and see how much is there!). Users are also becoming more sophisticated and multitasking at the user level: web browsing, listening to music, whatever else, all at the same time. Parallelism at the top should be exploited first; more fine-grained parallelism can be dealt with later, IMO.