But if M distros are hiring an average of N engineers to do the backporting, they could collaborate, pooling M*N engineers to proactively hunt down and fix regressions and new defects.
Yes, you can't check every possible path, but a surge in bugfixes on that scale would massively reduce the risks of running newer stable kernels.
The risk game is quite a simple one. The above strategy reduces the risk of economic damage from pre-existing defects. It also multiplies the number of people available to fix newly-reported issues by a factor of M, shrinking the window during which a catastrophic bug impacts businesses.
Provided the mean economic damage to businesses from new defects, over the time those defects persist under the pooled approach, is less than the mean economic damage from hackers and pre-existing kernel defects under the current approach (where defects persist longer because resources are more limited), then it is more economical for businesses to adopt newer kernels from those vendors.
Economists have equations for quantifying risk in terms of money, and we know how many commercial-grade distros there are.
It would therefore be possible to do the calculation and determine which approach yields the better return to businesses.
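As a rough sketch, the comparison above can be expressed as an expected-loss calculation. Every number below is a purely illustrative assumption (defect rates, damage figures, and the distro count M are placeholders, not measured data); the point is only the shape of the comparison:

```python
# Hypothetical expected-loss comparison between the current approach
# (each distro backports independently) and the pooled approach
# (M distros share engineers to fix defects proactively).
# All figures are illustrative placeholders, not real data.

def expected_loss(defects_per_year, mean_damage_per_defect, mean_fix_interval_years):
    """Expected annual damage: number of defects per year, times the mean
    damage each causes per year unfixed, times how long it stays unfixed."""
    return defects_per_year * mean_damage_per_defect * mean_fix_interval_years

M = 10  # assumed number of commercial-grade distros pooling effort

# Current approach: pre-existing defects and security holes linger,
# because each distro's backporting team is small.
current = expected_loss(defects_per_year=50,
                        mean_damage_per_defect=100_000,
                        mean_fix_interval_years=1.0)

# Pooled approach: newer kernels introduce more new defects, but M times
# the engineers shrink the fix interval by roughly a factor of M.
pooled = expected_loss(defects_per_year=80,
                       mean_damage_per_defect=100_000,
                       mean_fix_interval_years=1.0 / M)

print(f"current: ${current:,.0f}/year")
print(f"pooled:  ${pooled:,.0f}/year")
```

Under these made-up numbers the pooled approach wins by a wide margin; with real actuarial inputs the same comparison would show which side of the inequality businesses actually sit on.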
The cost to the distros is obviously the same, since the same manpower is involved. The proposal simply deduplicates effort, shifting engineers from mechanical work (which AI could probably do better anyway) to value-adding work that increases the market value of Linux over the competition.