Comment Re:Umm? (Score 1) 250
As a nit, many algorithms that seem fundamentally linear can, in fact, be parallelized. A classic stack (last-in, first-out) seems strictly sequential since there is a single point of contention (the top of the stack). However, an elimination technique allows entries to be transferred directly between a concurrent producer and consumer without updating the stack at all, thereby supporting parallel exchanges. Similarly, a tree (e.g. red-black) is often used for maintaining sorted order, but concurrent alternatives like skip lists provide similar characteristics. Another low-level example is an LRU cache, where every access mutates the eviction order; it can be made concurrent by using an eventual-consistency model that delays reordering until actually required (e.g. on writes). Since these algorithms are worked out by experts who shake out the bugs beforehand, consumers of the libraries usually just need to use them, in some cases being aware of what can be done safely/atomically.
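The elimination idea for the stack can be sketched with Java's SynchronousQueue acting as the rendezvous slot (the class name EliminationStack and the 1 ms back-off window are illustrative choices, not any particular library's implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

// Sketch of elimination: a pusher and a popper try to hand the value
// over directly, so neither thread has to touch the contended top.
class EliminationStack<E> {
    private final Deque<E> stack = new ArrayDeque<>();
    private final SynchronousQueue<E> exchange = new SynchronousQueue<>();

    public void push(E e) throws InterruptedException {
        // First try to hand the value straight to a waiting popper.
        if (exchange.offer(e, 1, TimeUnit.MILLISECONDS)) {
            return; // eliminated: the stack was never updated
        }
        synchronized (stack) { stack.push(e); }
    }

    public E pop() throws InterruptedException {
        synchronized (stack) {
            if (!stack.isEmpty()) return stack.pop();
        }
        // Stack empty: wait briefly to catch a value from a pusher instead.
        return exchange.poll(1, TimeUnit.MILLISECONDS);
    }
}
```

A real elimination-backoff stack uses a lock-free stack plus an array of such slots; the one-slot version above just shows why a matched push/pop pair never needs to serialize on the top pointer.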
At an application level, while many problems cannot be parallelized, Gustafson's Law provides an answer to Amdahl's dilemma. While the speed-up of a single user request is limited, the number of user requests increases, and those requests can be handled in parallel (task parallelism).
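The task-parallelism point can be sketched with a plain thread pool (RequestFarm and handleRequest are hypothetical names standing in for a server's per-request work): each individual request runs sequentially, but independent requests scale out across the workers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class RequestFarm {
    // Stand-in for the sequential, hard-to-parallelize per-request work.
    static int handleRequest(int payload) {
        return payload * payload;
    }

    // Each request stays serial; throughput comes from running many at once.
    static List<Integer> serveAll(List<Integer> requests, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Future<Integer>> pending = new ArrayList<>();
            for (int r : requests) {
                pending.add(pool.submit(() -> handleRequest(r)));
            }
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : pending) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```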
So there are quite a number of opportunities, even for problems that seem fundamentally linear, that customers/developers can get for free.