## Comment Re:massive parallel processing=limited application (Score 1) 79

On an 8-core machine, a processor will be placed into a wait queue roughly 7 out of 8 times that it needs access. Further, the expected length of time in the queue is (1-(1/8)). This is, of course, for an 8-core system. Adding more cores only makes the waiting time grow without bound.

Sorry, that doesn't sound right. The expected length of time in the queue should be on the order of *nt*, where *n* is the number of cores and *t* is the average time required to process a memory request. (A better formula would use the average length of the queue instead of *n*, but to first order it would still be roughly linear in *n*.) So, the time required would increase *linearly* with the number of cores.

You're right, I worded it incorrectly (it's late, and I've been working 80 hrs/week for the last year on a private project - forgive me). What I meant to say was *"the expected delay when accessing memory is (1-(1/n))"*, but even that is off by a whole factor of *n*.

The expected delay is (probability of queueing) × (probable queue length). The probability of queueing is (1-(1/n)):

With 2 processors, you have a 1/2 chance of getting exclusive access and a (1-(1/2)) chance of queueing.

With 3 processors, you have a 1/3 chance of getting exclusive access and a (1-(1/3)) chance of queueing.

With 4 processors, you have a 1/4 chance of getting exclusive access and a (1-(1/4)) chance of queueing.

With n processors, you have a 1/n chance of getting exclusive access and a (1-(1/n)) chance of queueing.
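To make that pattern concrete, here's a trivial sketch (Python, my addition - not part of the original argument) of the queueing probability for the cases above:

```python
def p_queue(n: int) -> float:
    """Probability that a processor must queue when n processors
    contend for one bus: 1/n chance of exclusive access, else queue."""
    return 1 - 1 / n

# the cases listed above: n=2 -> 0.5, n=3 -> ~0.667, n=4 -> 0.75
for n in (2, 3, 4, 8):
    print(n, p_queue(n))
```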

The probable length of the queue is linearly proportional to n, so the expected per-access delay is (1-(1/n)) * n = n-1 bus-service times. Each access therefore costs O(n), and since all n processors are paying that cost, the aggregate time spent waiting is O(n^2) - IOW it's piss-poor scaling.

Or maybe I'm still doing the numbers wrong - feel free to derive a better statistic for predicting time-in-queue when the processors all share a single address bus. This is the one I got, and some trivial simulation does actually fit this profile.
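For anyone who wants to poke at it, here's the kind of trivial simulation I mean, as a hedged sketch (mine, not verbatim from the thread): a closed system where each processor re-requests the bus as soon as it's served - a worst-case saturation assumption, not a model of real memory traffic. The per-access wait settles at roughly (1-(1/n)) * n = n-1 bus cycles, i.e. linear in n:

```python
from collections import deque

def avg_wait(n_procs: int, n_cycles: int = 10_000) -> float:
    """Mean per-access wait (in bus cycles) for a fully saturated
    single bus: each processor re-queues immediately after service."""
    # FIFO of (processor id, time it joined the queue)
    queue = deque((p, 0) for p in range(n_procs))
    total_wait = 0
    for t in range(n_cycles):
        proc, t_in = queue.popleft()   # bus serves one request per cycle
        total_wait += t - t_in
        queue.append((proc, t + 1))    # re-request on the next cycle
    return total_wait / n_cycles

for n in (2, 4, 8, 16):
    print(n, avg_wait(n))
```

After a short transient the wait per access is exactly n-1 cycles, which matches the (1-(1/n)) * n figure above; the O(n^2) part comes from n processors each eating that O(n) delay.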