
Comment Re:There is an alternative approach. (Score 1) 98

It would indeed mean that IBM/Red Hat couldn't restrict the backports, that's true, but it would let them focus on any value-add (which they could keep to themselves). Features they'd developed exclusively, and which are thus not in the main tree, would stay exclusively theirs, and they'd be able to devote more attention to those.

This would be, as you've noted, a massive divergence from IBM's Linux strategy of late. (Back in the day, when they contributed JFS and POWER architecture support, along with a bunch of HPC profiling libraries, they were much less toxic.)

As such, it's highly improbable they'll bite. And, for that reason, Linux LTS reliability will inevitably plunge to Microsoft levels of incompetence.

Comment Re:Duh (Score 1) 98

Irrelevant, because there will be an unknown number of pre-existing bugs that cause downtime, and backporting fixes can also introduce new regressions.

What is relevant is risk. You measure and quantify the risk for each approach. (We have estimates for defect densities and it should be straightforward to get estimates for percentage downtime. We also have estimates of the economic damage from industrial espionage and industrial sabotage.)

The better approach is the one with the lowest average costs, once all costs are calculated. Since M*N developers are already paid for, their additional cost is zero. Only additional developers would need to be costed in.

Since they're no longer fixing issues reported by industry that were already fixed upstream, their work would be entirely on novel issues. Provided they fix more defects than the newer kernels add, we would be able to show lower risk and thus lower aggregate cost.

The important questions are whether M*N engineers would be enough to fix bugs faster than they're added and, if not, how many additional engineers (call it C) would be needed to achieve this. (Since they're now focusing entirely on new issues, average downtime due to kernel bugs will be reduced, because the distros would be able to respond faster and not rely so much on backporting pre-existing fixes to placate customers.)

M*N is already priced in, so we can ignore it.

Let's call the average wage for kernel developers A. So what we need to determine is whether A*C is strictly less than the reduction in cost to business due to the reduced risk.

Provided a value for C can be found that can achieve the desired reduction in risk, the consequence of new bugs would be immaterial as the economic damage would be less than the economic damage from downtime due to already fixed bugs.

Until the numbers have been run, it is sheer guesswork on both our parts as to which would be more economic for businesses. But the defect density of Linux has held fairly steady for a long time, which would imply new bugs are added in direct proportion to new code. And we know how much new code gets added versus churn. So it should not be difficult to get a decent estimate of which approach works better, even without a more comprehensive analysis.
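
To make the shape of that arithmetic concrete, here is a minimal back-of-envelope sketch in Python of the A*C break-even test, with the inflow of new bugs estimated as defect density times churn. Every number below is a hypothetical placeholder, not a measured value; the point is only the structure of the calculation.

# Back-of-envelope sketch of the A*C break-even test.
# All inputs are hypothetical placeholders; plug in real measurements.

defect_density = 0.5          # assumed defects per KLOC (held steady, per the argument above)
annual_churn_kloc = 3000      # assumed new/changed KLOC merged per year
fix_rate_per_engineer = 15    # assumed novel defects one engineer can find and fix per year

new_defects_per_year = defect_density * annual_churn_kloc

M, N = 10, 8                          # distros and average backport engineers per distro
pooled = M * N                        # engineers already paid for (their marginal cost is zero)
shortfall = max(0.0, new_defects_per_year - pooled * fix_rate_per_engineer)
C = shortfall / fix_rate_per_engineer # additional engineers needed, if any

A = 180_000                   # assumed average annual wage for a kernel developer
risk_reduction = 25_000_000   # assumed annual saving to business from reduced downtime/breaches

extra_cost = A * C
print(f"extra engineers C = {C:.1f}, extra cost = {extra_cost:,.0f}")
print("economic" if extra_cost < risk_reduction else "not economic")

Since M*N is already priced in, only A*C ever appears on the cost side of the comparison.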

Comment Re:Stability (Score 1) 98

You are correct, which means a calculation is in order. We have economic models that price risk. Those have been around for a while. We can measure the defect density. And we know the curve that defines diminishing returns on investment, which in this case would be bugfixes and regression fixes.

We can quantify the economic damage done by hackers (which is substantial), and, from the defect density of older stable kernels and the reported downtimes, we know the economic damage of undetected pre-existing defects.

We can therefore determine the number of engineers it would take to fix much newer stable kernels well enough to push the average economic impact below that of the existing approach. This is absolutely quantifiable.

If there are M distros hiring an average of N engineers to backport fixes, then provided the number of engineers calculated above is M*N or less, the distros can collaborate to robustify Linux at no additional cost. Since this cost is built into the current pricing model, it's essentially free stability for the customers.

If the number of engineers needed exceeds this, then provided the number of additional engineers is low enough, the passed-on costs to businesses will be more than covered by the savings those businesses make from the improved stability and security.

If the number of new engineers is too high, this approach cannot be made economic.
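
The three cases above can be written down as a small decision rule. This is just an illustrative sketch; the function name and the example figures are made up for the purpose of showing the logic.

# Sketch of the three-way decision described above.
# required: engineers needed to harden newer stable kernels sufficiently
# M, N:     commercial distros and average backport engineers per distro
# wage:     average annual cost of one extra engineer (A)
# savings:  annual saving to businesses from improved stability and security

def consortium_verdict(required: float, M: int, N: int, wage: float, savings: float) -> str:
    pooled = M * N
    if required <= pooled:
        return "free: the existing M*N backport engineers cover it"
    extra_cost = (required - pooled) * wage
    if extra_cost < savings:
        return "economic: the passed-on cost is covered by business savings"
    return "not economic: too many additional engineers needed"

# Hypothetical numbers for illustration only:
print(consortium_verdict(required=95, M=10, N=8, wage=180_000, savings=25_000_000))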

But as far as I know, these numbers have not been run. I know of no published, peer-reviewed economics paper showing whether collaboration or competition between kernel developers is more cost-effective system-wide.

If there is such a paper, feel free to link to it.

But until that paper exists, it is sheer groundless speculation to claim that either approach is less risky.

Comment Re:Android phones (Score 1) 98

Yes, which is why Android suffers from all kinds of security and stability problems, the very factors that are currently causing Microsoft's market share to decline sharply.

Modelling your business after a failed strategy of the competition doesn't sound like a terribly good way to proceed. We need alternatives.

Comment Re:I missed the point where they explain... (Score 1) 98

But if M distros are hiring an average of N engineers to do the backporting, they could collaborate, pooling M*N engineers to proactively hunt down and fix regressions and new defects.

Yes, you can't check every possible path, but that kind of surge in bugfixes would massively reduce the risks of newer stable kernels.

The risk game is quite a simple one. The above strategy will reduce the risk of economic damage. It also greatly increases the number of people who are available to fix newly-reported issues by a factor of M, reducing the interval over which catastrophic bugs impact businesses.

Provided the mean economic damage to businesses from new defects under the above method, over the duration those defects last, is less than the mean economic damage from hackers and pre-existing kernel defects under the current approach (which persist for longer due to the more limited resources), it is more economical for businesses to adopt newer kernels from those vendors.

Economists have equations for quantifying risk in terms of money, and we know how many commercial-grade distros there are.

It would therefore be possible to do the calculation and determine which approach yielded the best returns to businesses.
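
One way to write that comparison down, purely as an illustration (the damage and exposure figures below are placeholders, not estimates):

# Compare mean annual economic damage under the two approaches.
# Pooled approach: only new defects matter, and they are fixed quickly.
# Current approach: hacker damage plus pre-existing defects that persist for longer.

def expected_damage(defects_per_year, damage_per_defect, mean_exposure_years):
    return defects_per_year * damage_per_defect * mean_exposure_years

pooled = expected_damage(defects_per_year=1500, damage_per_defect=20_000, mean_exposure_years=0.25)
current = (expected_damage(defects_per_year=1200, damage_per_defect=20_000, mean_exposure_years=1.5)
           + 10_000_000)  # assumed annual damage from hackers exploiting unpatched holes

print("adopt newer kernels" if pooled < current else "stay with the current approach")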

The cost to the distros is obviously the same, as the same manpower is involved. All the proposal involves is deduplication of effort and moving from mechanical stuff (which AI probably could actually do better) to value-adding stuff that would increase the market value of Linux over the competition.

Comment Re:Duh (Score 1) 98

You are correct, but are overlooking a possible solution.

You have an average of N software engineers hired by M distributions to backport features. This means that the cost of those N*M software engineers is already built in.

If you hire the same N*M software engineers as a consortium to fix the flaws and regressions in more recent stable kernels, then the software won't break, there won't be new kernel defects, AND you don't get the security holes.

Cooperation upstream would mean less kernel differentiation, sure, but that's not what most enterprises go by anyway.

It solves all the problems you raise (which are all legitimate) whilst eliminating the very real risks that zero-day security holes create.

Comment There is an alternative approach. (Score 1) 98

Instead of hiring lots of developers for each distribution to backport essentially the same set of features to each frozen kernel, get together and collectively hire vastly more high-end dual-role engineers to proactively find and fix the bugs in newer stable kernels, so that there are far fewer new bugs.

This makes the newer kernels safe for enterprise use, whilst eliminating the security risks.

It costs the same amount, but avoids the reputation-scarring effects of security holes and thus also avoids the economic damage done by those holes.

Everyone gains by fixing the faults as far upstream as possible.

Comment Re:Elon Musk doesn't want it open (Score 4, Interesting) 32

Nobody is actually ahead in AI, because they're all solving the wrong problem, as indeed AI researchers have consistently done since the 1960s.

I'm not the least bit worried about the possibility of superintelligence, not until they actually figure out what intelligence is as opposed to what is convenient to solve.

As for Musk, he's busy trying to kill all engineering projects in America.

Comment Re:There's always something... (Score 1) 32

If there's an issue that needs resolving, it's best to acknowledge it. Hiding away, like Microsoft does with their abysmal records on reliability and security, achieves nothing.

If honesty is a problem, then neither IT nor science seems a good profession. Politics and economics might be better.

Comment The data is the code. (Score 4, Interesting) 32

In neural nets, the network software is not the algorithm that is running. The net software is playing the same role as the CPU in a conventional software system. It is merely the platform on which the code is run.

The topology of the network plus the state of that network (the data) corresponds to an algorithm. That is the actual software that is being run. AI cannot be considered open until this is released.

But I flat-out guarantee no AI vendor is going to do that.
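
A toy illustration of the point (nothing to do with any particular vendor's stack): the same few lines of "network software" compute completely different functions depending on which weights are loaded, which is why the topology plus the weights, not the framework, constitute the program. The weight values below were chosen by hand purely for this example.

# The "platform": a fixed two-layer network evaluator, analogous to a CPU.
def run_net(weights, x):
    (w1, b1), (w2, b2) = weights
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b) for row, b in zip(w1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, h)) + b2

# Two different "programs": same evaluator, different weights.
xor_like = ([[1.0, 1.0], [1.0, 1.0]], [0.0, -1.0]), ([1.0, -2.0], 0.0)
and_like = ([[1.0, 1.0], [0.0, 0.0]], [-1.0, 0.0]), ([1.0, 0.0], 0.0)

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(run_net(xor_like, x), 1), round(run_net(and_like, x), 1))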
