S3 bought Number Nine, which was a pioneer in high-end 2D graphics. I bet S3 has a large enough patent portfolio to have some beneficial defensive patents.
As a nit, many algorithms that seem fundamentally linear can, in fact, be parallelized. A classic stack (last-in, first-out) seems strictly serial since there is a single point of contention (the top of the stack). However, an elimination technique allows entries to be transferred directly between the consumer and producer without ever updating the stack, thereby supporting concurrent exchanges. Similarly, a tree is often used to maintain sorted order (e.g. red-black), but concurrent alternatives like skip-lists provide similar characteristics. Another low-level example is an LRU cache, where every access mutates the eviction order; it can be made concurrent by using an eventual-consistency model that delays reorder updates until they are required (e.g. on writes). Since these algorithms are worked out by experts who resolve the bugs beforehand, consumers of the libraries often just need to use them, with some cases requiring awareness of what can be done safely/atomically.
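To make the LRU point concrete, here is a minimal single-threaded sketch of the idea (the class name and structure are my own, not from any particular library): reads only append to an access log, and the authoritative eviction order is rebuilt lazily on writes, which is exactly where eviction decisions actually need it.

```python
from collections import OrderedDict

class LazyLRU:
    """Hypothetical sketch: LRU cache that defers recency updates."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # keys kept in least-recently-used order
        self.access_log = []        # buffered reads, applied lazily

    def get(self, key):
        # A read does not touch the eviction order; it only logs the access.
        value = self.data[key]
        self.access_log.append(key)
        return value

    def _drain(self):
        # Replay buffered accesses to restore the true LRU order.
        for key in self.access_log:
            if key in self.data:
                self.data.move_to_end(key)
        self.access_log.clear()

    def put(self, key, value):
        # Writes are the point where a consistent order is required.
        self._drain()
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        while len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

In a concurrent setting the access log would be a low-contention structure (e.g. per-thread buffers), so reads never fight over a shared recency list; the sketch above only shows the deferral itself.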
At an application level, while many problems cannot be parallelized, Gustafson's Law provides an answer to Amdahl's dilemma. While the speed-up of a single user request is limited, the number of user requests increases, and these can be handled in parallel (task parallelism).
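A minimal sketch of that task parallelism: no individual request gets any faster, but independent requests are served concurrently, so throughput scales with the workload, which is Gustafson's view. The `handle_request` function is a made-up stand-in for real per-request work.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Stand-in for an inherently sequential per-request computation.
    return sum(i * i for i in range(10_000)) + request_id

def serve(requests, workers=4):
    # Each request is a task; the pool runs them in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle_request, requests))
```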
So there are quite a number of opportunities that customers/developers can get for free, even for problems that seem fundamentally linear.
As an outsider, that isn't what I see. AMD has bought most of its core technology rather than designing it from scratch. The K6 came from NexGen, the bus from DEC (Socket A, HyperTransport), the Athlon was a great traditional design (P6/Alpha/PowerPC-like in its ideas), the memory-controller experience came from Alpha hires, their embedded chip is based on Cyrix's, etc. AMD has been quite good at taking proven ideas and implementing them for the mass market with a lot of success. The primary innovations they are given credit for are the on-chip memory controller on x86 (first done in the Transmeta Crusoe), HyperTransport (DEC), and multi-core (IBM POWER).
Intel always seemed to be an innovative company that heavily funds R&D, but one that can have utter flops by not being pragmatic enough to drop a bad design. Even when they fail badly, the ideas are usually quite unique and, I'm sure, educational. The fact that they recover rather than repeatedly making bad calls (e.g. Sun) shows that they are resilient. Having the different design teams probably helps them both recover from a flop and avoid stifling creativity, by allowing groups to go in different directions. As you indicate, though, there are only so many good ideas, and the duplication has to be extremely frustrating.
So I'm not sure that Intel's approach is bad, and they tend to be more innovative than AMD. It's costly, though, and as a consumer I've happily gone with AMD/Cyrix/etc. when Intel pushes a flop chip.
Sorry, it's "The Algorithm Design Manual" by Skiena (2008).
Then you might like "Algorithm Design" (2008). It's superior, imho, but has slightly less coverage with better depth. My personal favorite algorithms book is "The Art of Multiprocessor Programming".