Re:Notice that performance had increased per cpu.. (Score:2, Insightful)
Sorry, but I have to disagree with your conclusion that this represents exponential growth.
The effect you speak of (doubling the number of processors giving less than double the final "power") is due to additional overhead: the various processors coordinating their work with each other, deciding things like "should I split this 2 ways or 4?", and so on. That sort of overhead inevitably grows with the number of processors.
You can use improved algorithms, special-purpose hardware, and so on to minimize this "friction", but it will always exist, and the percentage of processing that is "overhead" will inevitably climb as you increase the number of processors.
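To put rough numbers on that "friction" (a toy illustration with made-up figures, not anyone's benchmarks): in the worst case, where every processor has to coordinate with every other one, the number of coordination channels grows roughly as the square of the processor count, so the overhead per processor keeps climbing. A few lines of Python show it:

    # Worst-case all-to-all coordination: channels grow quadratically,
    # so the per-processor share of overhead climbs with the count.
    for n in (2, 4, 8, 16, 32):
        channels = n * (n - 1) // 2  # pairwise coordination channels
        print(f"{n:3d} processors -> {channels:4d} channels, "
              f"{channels / n:.1f} per processor")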
It's far more likely that either the earlier number reflected inefficiencies that existed at the time (perhaps because the machine hadn't yet been fully built out as designed), or that there have since been improvements in the algorithms or infrastructure that yield greater efficiency.
If it's the latter, then unplugging the 2nd half of the CPUs and measuring again would probably get you 150 GFlops or so.
Basically, you could write the equation for total power something like:
P = X - O - c(X), where X is the number of processors, O is the fixed base overhead (for doing things like I/O, for example), and c(X) is the coordination cost, which grows as you add processors.
To have what you describe would require that c(X) be negative, which is like saying that you can have 10 individual conversations in less time than you can have five. Ain't gonna happen.
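If you want to play with that equation, here's a quick Python sketch. The fixed overhead and the quadratic shape of c(X) are my assumptions, picked purely for illustration; they're not measurements from the machine in question:

    def total_power(x, base_overhead=1.0, coord=0.01):
        """P = X - O - c(X), with c(X) = coord * X**2 growing faster than X."""
        return x - base_overhead - coord * x * x

    for x in (8, 16, 32, 64):
        p = total_power(x)
        print(f"{x:3d} CPUs: total {p:6.2f}, per-CPU {p / x:.3f}")

Run it and the per-CPU figure falls steadily as the coordination term starts to dominate; it could only rise the way you describe if c(X) went negative.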