Consider this simplified example: say I have 8 sets of data, and I want to write a program that adds up all the members of each set, then adds those 8 sums together to get a grand total.
As a novice programmer I could easily implement this with a couple of loops, but if, say, the dataset were scaled up a gazillion times, how would I take advantage of a multi-processor/multi-core system to improve performance?
So the specific questions are:
1. In Linux, is there a way to write the program so that one core calculates the first 4 sets while another core calculates the second 4 sets? Is there a C library, such as pthreads, for this purpose?