Part of it is that Apple has a head start, but the bigger reasons are that they have more resources to bring to bear and an easier job to do.
Intel and AMD basically have to build RISC processors inside x86 translators. That puts them at a disadvantage in both complexity management and silicon efficiency/availability, and on top of that, they are starting to get out-spent on R&D. The R&D budgets of Apple + ARM + TSMC all combine to improve Apple's processors, and their goals and technologies are well-aligned. AMD benefits from TSMC's R&D, but only to the extent that it aligns with AMD's goals. AMD and Intel make the bulk of their profits on server processors, but silicon process nodes are increasingly being optimized for efficiency instead of speed, which advantages Apple's mobile needs over AMD's (and Intel's) server needs.
I think it's pretty clear that we're not only nearing the end of native x86 hardware, but probably also nearing the end of Von Neumann architectural dominance. As AI and GPU workloads become more important than general-purpose CPU workloads, processors will probably be reorganized to address the biggest efficiency bottleneck facing AI workloads: the high cost of memory access. Moving data between CPU and memory is both slow and energetically expensive. Until now, this has been well-addressed by adding large, complex caches to CPUs, but the quantity and profile of the data processed by neural network workloads are much different from general-purpose computing. Companies focusing on processors for AI workloads are betting that big architectural changes will increase speed and efficiency by 3x or more.
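To put a rough number on "energetically expensive": here's a back-of-envelope sketch using widely-cited approximate 45 nm energy figures (roughly per Horowitz's ISSCC 2014 talk). The specific picojoule values are assumptions for illustration; exact numbers vary a lot by process node and design, but the order-of-magnitude gap between on-chip and off-chip access is the point.

```python
# Back-of-envelope: energy for one multiply-accumulate (MAC) when its
# two operands come from on-chip cache vs. off-chip DRAM.
# All pJ figures are approximate/illustrative, not measured values.
PJ_FMA = 2 * 0.9        # pretend a 32-bit MAC costs about two FP32 adds
PJ_CACHE_READ = 10.0    # ~pJ for a 32-bit read from a small on-chip cache
PJ_DRAM_READ = 640.0    # ~pJ for a 32-bit read from off-chip DRAM

mac_from_cache = PJ_FMA + 2 * PJ_CACHE_READ  # operands hit in cache
mac_from_dram = PJ_FMA + 2 * PJ_DRAM_READ    # operands fetched from DRAM

print(f"cache-resident MAC: {mac_from_cache:.1f} pJ")
print(f"DRAM-resident MAC:  {mac_from_dram:.1f} pJ")
print(f"ratio: {mac_from_dram / mac_from_cache:.0f}x")
```

Under these assumptions, a DRAM-fed MAC costs tens of times more energy than a cache-fed one, and the arithmetic itself is nearly free by comparison. That's why the architectural bets are on moving compute closer to memory rather than on faster ALUs.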
The next generation of processors will still have RISC cores and probably attached DRAM, but will also have more integrated processing and RAM. This might look like two classes of memory, or changes in packaging, or integrating a lot more logic onto DRAM chips and connecting them as point-to-point networks instead of over synchronized buses.
Let's also face it: most of the brightest people doing work in processor architecture are at start-ups (or Tesla) building new designs that include RISC-V or ARM, but mostly as a kind of management unit that allows the large NN matrix hardware to run alongside traditional fetch-execute-store cores. At some point, the value of x86 compatibility will be low enough that having hardware dedicated to it just won't be worthwhile. x86 binaries already run well enough in translation on ARM on both macOS and Windows, and people running server workloads on Linux are generally happy to just recompile for different targets. The inflection point might not come for a year or two, but I think we're going to see AMD and/or Intel making architectural shifts to address a market that is clearly expanding very quickly, while the old markets become relatively smaller and less profitable.