I am writing my own (multi-threaded) software and recently I had a chance to do a test run on an Intel i7 processor (8 cores, 2.67GHz) to compare it with my old Athlon II X4 (3GHz). Both binaries were compiled with the same version of GCC (4.6.1), both with -O3 optimization. Running 8 threads on the Intel machine was only marginally faster than running 4 threads on the old Athlon. The threads were independent, so none of them sat idle waiting for another to finish.
Where Intel have the lead is in the compiler business. Back in 2003 or so they released their ICC 8.0 for free for Linux users. I was writing only single-threaded software at the time, and simply re-compiling it with ICC made it run about 5 times faster than the version compiled with GCC 2.96. And that was on a 2GHz Athlon XP.
What AMD have done right is the integration of the CPU and GPU, which allowed them to gobble up the console market. However, their bet that all developers would jump on the heterogeneous computing bandwagon did not pan out. But with HSA 1.0 coming up, their lead will be too large, and neither Nvidia nor Intel will have a competitor ready for the next console refresh. All Nvidia will do is keep paying game developers to optimize their engines for GeForce cards while refusing to optimize for Radeon. AMD's resources are so limited that they will be forced to offer a desktop version of their console processor, and maybe an ARM core for good measure.
Exiting the "dense server" area makes perfect sense, as the market is very limited. Spreading work across many small cores is hard, and developers will avoid it. It is the same story as taking advantage of the GPU, which also offers many simple cores.
So no, they are not dead; they are simply adapting to market realities and accepting that they made a mistake when they jumped on the dense-server bandwagon. Unlike Intel, who even now refuse to let go of Itanium.