Irrelevant, because there will be an unknown number of pre-existing bugs that cause downtime, and backporting fixes can itself introduce new regressions.
What is relevant is risk. You measure and quantify the risk for each approach. (We have estimates for defect densities and it should be straightforward to get estimates for percentage downtime. We also have estimates of the economic damage from industrial espionage and industrial sabotage.)
The better approach is the one with the lowest average costs, once all costs are calculated. Since M*N developers are already paid for, their additional cost is zero. Only additional developers would need to be costed in.
Since they would no longer be fixing issues reported by industry that were already fixed upstream, their work would be entirely on novel issues. Provided they fix more defects than the newer kernels add, we would be able to show lower risk and thus lower aggregate cost.
The important questions are whether M*N engineers would be enough to fix bugs faster than they're added and, if not, how many additional engineers (C) would be needed to achieve this. (Since they would now be focusing entirely on new issues, average downtime due to kernel bugs would be reduced, because the distros could respond faster rather than relying so heavily on backporting pre-existing fixes to placate customers.)
M*N is already priced in, so we can ignore it.
Let's call the average wage for kernel developers A. So what we need to determine is whether A*C is strictly less than the reduction in cost to business due to the reduced risk.
Provided a value for C can be found that can achieve the desired reduction in risk, the consequence of new bugs would be immaterial as the economic damage would be less than the economic damage from downtime due to already fixed bugs.
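To make the inequality concrete, here is a minimal sketch of the break-even check described above. All the figures in it are hypothetical placeholders for illustration, not real estimates; the real version would plug in measured wages, downtime percentages, and espionage/sabotage damage estimates.

```python
def extra_engineers_justified(avg_wage_a, num_engineers_c, risk_reduction):
    """True if the added payroll A*C is strictly less than the expected
    reduction in business cost from the lowered risk.

    avg_wage_a      -- A, average annual kernel-developer wage
    num_engineers_c -- C, additional engineers beyond the M*N already paid for
    risk_reduction  -- estimated annual saving from reduced downtime,
                       espionage, and sabotage exposure
    """
    return avg_wage_a * num_engineers_c < risk_reduction

# Made-up numbers: 10 extra engineers at $200k/yr against an
# estimated $5M/yr reduction in aggregate risk cost.
print(extra_engineers_justified(200_000, 10, 5_000_000))  # True: 2M < 5M
```

Note that M*N does not appear in the comparison at all: as argued above, those developers are already priced in, so only the marginal cost A*C is weighed against the marginal benefit.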
Until the numbers have been run, it is sheer guesswork on both our parts as to which would be more economical for businesses. But the defect density of Linux has held fairly steady for a long time, which implies new bugs are added in direct proportion to new code. And we know how much new code gets added versus churn. So it should not be difficult to get a decent estimate of which approach works better, even without a more comprehensive analysis.
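The estimate sketched above can be reduced to two lines of arithmetic: steady defect density times new code gives the bug inflow, and dividing by a per-engineer fix rate gives the C needed to keep pace. The rates below are invented placeholders purely to show the shape of the calculation.

```python
import math

def expected_new_bugs(new_kloc_per_year, defect_density_per_kloc):
    """If defect density is roughly constant, new bugs arrive in
    proportion to new code (bugs/yr = KLOC/yr * defects/KLOC)."""
    return new_kloc_per_year * defect_density_per_kloc

def additional_engineers_needed(bug_inflow, fix_rate_per_engineer, existing):
    """Smallest C such that (existing + C) engineers fix bugs at least
    as fast as they are introduced; zero if the existing staff suffice."""
    required = math.ceil(bug_inflow / fix_rate_per_engineer)
    return max(required - existing, 0)

# Illustrative only: 3,000 KLOC/yr of new code at 0.5 defects/KLOC,
# engineers clearing 50 bugs/yr each, 20 (M*N) already on staff.
inflow = expected_new_bugs(3_000, 0.5)          # 1500 bugs/yr
print(additional_engineers_needed(inflow, 50, 20))  # 10
```

With real inputs for churn, density, and fix rates, the same two functions would tell you whether C is zero (M*N engineers suffice) or how large it must be for the A*C comparison to even arise.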