
Comment Re:Lots of moving parts (Score 1) 449

Some things we just can't segregate, such as the name cache. Shared locks only modestly improve performance, but it's still a whole lot better than what you get with an exclusive lock.

What is the challenge with the name cache, specifically? If it's due to being LRU, then there are approaches to mitigate the lock. One is buffering, like this Java cache uses, which batches updates to avoid lock contention (roughly like the sketch below). Another technique is to sample entries at random and evict the probabilistic LRU, like Redis does.
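
To make the buffering idea concrete, here's a toy sketch in Java (the names and structure are my own invention, not any particular library's): reads append themselves to a lock-free queue and return immediately, and the LRU chain is only replayed in batches by whichever thread wins a tryLock, so readers never block on the eviction order.

    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.locks.ReentrantLock;

    // Toy sketch of the buffering idea, not production code.
    final class BufferedLruCache<K, V> {
      private static final class Node<K, V> {
        final K key;
        final V value;
        Node<K, V> prev, next;                // guarded by evictionLock
        Node(K key, V value) { this.key = key; this.value = value; }
      }

      private final int capacity;
      private final Map<K, Node<K, V>> data = new ConcurrentHashMap<>();
      private final Queue<Node<K, V>> readBuffer = new ConcurrentLinkedQueue<>();
      private final ReentrantLock evictionLock = new ReentrantLock();
      private final Node<K, V> sentinel = new Node<>(null, null); // circular list

      BufferedLruCache(int capacity) {
        this.capacity = capacity;
        sentinel.prev = sentinel;
        sentinel.next = sentinel;
      }

      public V get(K key) {
        Node<K, V> node = data.get(key);
        if (node == null) {
          return null;
        }
        readBuffer.add(node);                 // O(1); no lock on the LRU chain
        if (evictionLock.tryLock()) {         // drain only if nobody else is
          try {
            drainReadBuffer();
          } finally {
            evictionLock.unlock();
          }
        }
        return node.value;
      }

      public void put(K key, V value) {
        Node<K, V> node = new Node<>(key, value);
        evictionLock.lock();                  // writes apply their update eagerly
        try {
          drainReadBuffer();                  // keep the order roughly current
          Node<K, V> old = data.put(key, node);
          if (old != null) {
            unlink(old);
          }
          linkAtHead(node);
          if (data.size() > capacity) {
            Node<K, V> victim = sentinel.prev; // least recently used
            unlink(victim);
            data.remove(victim.key);
          }
        } finally {
          evictionLock.unlock();
        }
      }

      private void drainReadBuffer() {        // must hold evictionLock
        Node<K, V> node;
        while ((node = readBuffer.poll()) != null) {
          if (node.next != null) {            // skip entries already evicted
            unlink(node);
            linkAtHead(node);
          }
        }
      }

      private void linkAtHead(Node<K, V> node) {
        node.next = sentinel.next;
        node.prev = sentinel;
        sentinel.next.prev = node;
        sentinel.next = node;
      }

      private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
        node.prev = node.next = null;
      }
    }

A real cache also has to bound the buffer and handle concurrent writers more carefully, but this shows why shared readers stop fighting over the list head: recording an access is a queue append rather than a list mutation.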

Comment Re:Umm? (Score 1) 250

As a nit, many algorithms that seem fundamentally linear can, in fact, be parallelized. A classic stack (last-in, first-out) seems strictly sequential since there is a single point of contention (the top of the stack). However, an elimination technique allows entries to be transferred between the producer and consumer without updating the stack at all, thereby supporting concurrent exchanges (see the sketch below). Similarly, a tree is often used for maintaining sorted order (e.g. red-black), but concurrent alternatives like skip lists provide similar characteristics. Another low-level example is an LRU cache: although every access mutates the eviction order, it can be made concurrent by using an eventual-consistency model that delays the updates until they are actually required (e.g. on writes). Since these algorithms are worked out by experts who resolve the bugs beforehand, consumers of the libraries often just need to use them, in some cases needing to be aware of what can be done safely/atomically.
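
For the stack case, here's a toy sketch in Java using a single java.util.concurrent.Exchanger as the rendezvous (a real elimination-backoff stack, per Hendler, Shavit, and Yerushalmi, uses an array of randomized exchange slots): when the CAS on the top fails under contention, a push and a pop can meet at the exchanger and hand the item over without touching the stack.

    import java.util.concurrent.Exchanger;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import java.util.concurrent.atomic.AtomicReference;

    // Toy elimination stack: a Treiber stack whose contended operations try
    // to pair up at an Exchanger instead of retrying on the top pointer.
    // Items must be non-null; null marks the popper's side of an exchange.
    final class EliminationStack<E> {
      private static final class Node<E> {
        final E item;
        final Node<E> next;
        Node(E item, Node<E> next) { this.item = item; this.next = next; }
      }

      private final AtomicReference<Node<E>> top = new AtomicReference<>();
      private final Exchanger<E> slot = new Exchanger<>();

      public void push(E item) throws InterruptedException {
        while (true) {
          Node<E> head = top.get();
          if (top.compareAndSet(head, new Node<>(item, head))) {
            return;                        // fast path: uncontended CAS
          }
          try {                            // contended: look for a popper
            if (slot.exchange(item, 1, TimeUnit.MILLISECONDS) == null) {
              return;                      // met a pop; item handed over
            }                              // met another push; retry
          } catch (TimeoutException e) { } // nobody showed up; retry
        }
      }

      // Note: spins until an item is available; a toy, not a real API.
      public E pop() throws InterruptedException {
        while (true) {
          Node<E> head = top.get();
          if (head != null && top.compareAndSet(head, head.next)) {
            return head.item;              // fast path: uncontended CAS
          }
          try {                            // contended or empty: look for a push
            E item = slot.exchange(null, 1, TimeUnit.MILLISECONDS);
            if (item != null) {
              return item;                 // took the item straight from a push
            }                              // met another pop; retry
          } catch (TimeoutException e) { } // retry on the stack
        }
      }
    }

The trick is that a pop offers null while a push offers its item, so each side can tell whether it paired with its opposite; pairing with your own kind just means you retry, and a successful pairing never touches the top pointer at all.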

At the application level, while many problems cannot be parallelized, Gustafson's Law provides an answer to Amdahl's dilemma: while the speed-up of a single user request is limited, the number of user requests increases, and those requests can be performed in parallel (task parallelism).
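
As a trivial illustration in Java (handle() here is a hypothetical stand-in for real per-request work): the latency of any one request is bounded by its sequential portion, but a thread pool lets throughput scale with the number of independent requests.

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    // Each request is handled sequentially, but independent requests run in
    // parallel, so throughput scales with workers even though latency doesn't.
    public final class RequestFarm {
      private static String handle(int requestId) throws InterruptedException {
        Thread.sleep(10);                  // stand-in for the sequential work
        return "request " + requestId + " done";
      }

      public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
        List<Callable<String>> requests = IntStream.range(0, 100)
            .mapToObj(i -> (Callable<String>) () -> handle(i))
            .collect(Collectors.toList());
        for (Future<String> result : pool.invokeAll(requests)) {
          System.out.println(result.get());
        }
        pool.shutdown();
      }
    }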

So there are quite a few opportunities, even for problems that seem fundamentally linear, that customers/developers can get for free.

Comment Re:Actually... (Score 3, Interesting) 141

As an outsider, that isn't what I see. AMD has bought most of its core technology rather than designing it from scratch. The K6 was from NexGen, the bus from DEC (Socket A, HyperTransport), the Athlon was a great traditional design (P6/Alpha/PowerPC-like in ideas), the memory controller experience came from Alpha hires, their embedded chip is based on Cyrix's, etc. AMD has been quite good at taking proven ideas and implementing them for the mass market, with a lot of success. The primary innovations they are given credit for are the on-die memory controller on x86 (first done in the Transmeta Crusoe), HyperTransport (DEC), and multi-core (IBM POWER).

Intel has always seemed to be an innovative company that heavily funds R&D, but it can have utter flops by not being pragmatic enough to drop a bad design. Even when they fail badly, the ideas are usually quite unique and, I'm sure, educational. The fact that they recover rather than repeatedly making bad calls (e.g. Sun) shows that they are resilient. Having separate design teams probably helps them both recover from a flop and keep creativity from being stifled, by letting groups go in different directions. As you indicate, though, there are only so many good ideas, and the duplication has to be extremely frustrating.

So I'm not sure that Intel's approach is bad; they do tend to be more innovative than AMD. It's costly, though, and as a consumer I've happily gone with AMD/Cyrix/etc. when Intel pushes a flop chip.
