Various reasons (Score 1)
The biggest reason will be Intel's aversion to QA. Their reputation has been savaged, and the very poor performance benchmarks for its newest processors won't convince anyone Intel has what it takes.
I have no idea whether they could feed the designs of historic processors, from the Pentium onwards, into an AI along with the bugs found in those designs, to see whether there's any pattern to the defects, some recurring weakness in how they explore problems.
But, frankly, at this point, they need to beef up QA fast and they need to spot poor design choices faster.
Others have mentioned poor R&D. Intel's strategy has always been "cast wide and filter", but that means every individual project gets looked at on the cheap, with less manpower and less cash behind it.
Their experiment with on-board memory was interesting, but you can't do that easily if all your real estate is taken up by cores and hyperthreading capability. It looks like they saw an idea without understanding why it worked.
There are ways they could get memory closer, which is what actually gives you the performance gain, without putting it strictly on-die, and that might be the better approach.
I'm seeing lots of dual-CPU servers, but the more cores share the L2/L3 cache, the less of it each process gets to use. A 4-CPU or 8-CPU box should outperform a dual-CPU box where each CPU has double the cores, because the total cache grows with the socket count while the core count stays the same. Back when SMP ruled, you had 16-CPU machines. Improving multi-CPU support would improve servers.
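If you want to see how that sharing shakes out on a real box, here's a minimal sketch (Linux-only, assuming the standard sysfs cache topology under /sys/devices/system/cpu) that prints each cache level visible to CPU 0 and which logical CPUs share it:

/* Print each cache level visible to CPU 0 and which logical CPUs share it.
 * Linux-only sketch; assumes the standard sysfs cache topology files. */
#include <stdio.h>
#include <string.h>

static int read_line(const char *path, char *buf, size_t len)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, (int)len, f)) {
        fclose(f);
        return -1;
    }
    buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
    fclose(f);
    return 0;
}

int main(void)
{
    const char *base = "/sys/devices/system/cpu/cpu0/cache";
    char path[256], level[16], type[32], size[32], shared[256];

    for (int idx = 0; idx < 16; idx++) {
        snprintf(path, sizeof path, "%s/index%d/level", base, idx);
        if (read_line(path, level, sizeof level) != 0)
            break;   /* no more cache indexes on this CPU */

        snprintf(path, sizeof path, "%s/index%d/type", base, idx);
        read_line(path, type, sizeof type);

        snprintf(path, sizeof path, "%s/index%d/size", base, idx);
        read_line(path, size, sizeof size);

        snprintf(path, sizeof path, "%s/index%d/shared_cpu_list", base, idx);
        read_line(path, shared, sizeof shared);

        printf("L%-2s %-12s %-8s shared by CPUs %s\n", level, type, size, shared);
    }
    return 0;
}

On a typical two-socket Xeon, the L3 line will list every core on that socket, which is exactly the sharing I'm talking about.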
Modern buses can support multiple masters; PCI Express supported this as of 2.1, IIRC. There's no obvious reason every core needs to be identical, so long as there's a way to keep instructions intended for one kind of core from being dispatched to another.
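The closest userspace knob for that today is CPU affinity. A minimal sketch, assuming Linux and glibc's sched_setaffinity(), that pins the calling process to cores 0-3 (which, in this thought experiment, are the cores of the right type):

/* Pin the current process to logical CPUs 0-3, as a stand-in for "keep this
 * work on the cores that can actually run it". Assumes Linux with glibc's
 * CPU_* macros and sched_setaffinity(). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 4; cpu++)       /* CPUs 0-3: the "right" cores here */
        CPU_SET(cpu, &mask);

    if (sched_setaffinity(0, sizeof mask, &mask) != 0) {   /* pid 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    puts("pinned to CPUs 0-3");
    /* ...run the integer-only (or core-specific) workload here... */
    return 0;
}

This is roughly what already happens on the hybrid parts with P-cores and E-cores: the cores differ, and the scheduler just has to route work to the right kind.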
If you had integer-only cores (with the FPU real estate spent on more L1 and L2 cache instead), then integer-only work (the bulk of the kernel, for example) would presumably be much, much faster, as there'd be less fetching from slower levels of the memory hierarchy.
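You can get a feel for the size of that effect yourself. A minimal sketch (the working-set sizes and the pointer-chasing pattern are illustrative choices, not a rigorous benchmark) that times dependent integer loads over a set that fits a typical L2 versus one that spills to DRAM:

/* Time dependent integer loads (pointer chasing) over a small vs. a large
 * working set, to show how much integer work slows down once it falls out
 * of cache. The sizes below are illustrative, not tuned to any one chip. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_access(size_t n_elems, size_t steps)
{
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) { perror("malloc"); exit(1); }

    /* Sattolo's algorithm: a single random cycle, so the walk touches every
     * element and the prefetcher can't predict the next address. */
    for (size_t i = 0; i < n_elems; i++)
        next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;      /* j in [0, i-1] */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t s = 0; s < steps; s++)
        p = next[p];                        /* purely integer, load-dependent chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = p;               /* stop the compiler dropping the loop */
    (void)sink;
    free(next);

    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;
}

int main(void)
{
    const size_t steps = 20 * 1000 * 1000;
    /* ~256 KiB: fits in a typical per-core L2. */
    printf("small working set: %5.1f ns/access\n",
           ns_per_access((256u << 10) / sizeof(size_t), steps));
    /* ~64 MiB: blows past L2/L3 into main memory. */
    printf("large working set: %5.1f ns/access\n",
           ns_per_access((64u << 20) / sizeof(size_t), steps));
    return 0;
}

Build with something like gcc -O2 and compare the two numbers; on most machines the second comes out several times, often an order of magnitude, larger than the first, and that gap is what the extra on-die cache would be buying back.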