Indeed. Good luck arguing in court that someone gave up their right to sue. The legal profession tends to be awfully sceptical of such measures, and none more so than judges. While it might stand up if, for example, all parties agreed to use some reasonable form of binding arbitration instead, it's hard to imagine the big company would get anywhere against the little customer under these conditions.
I suspect you meant that sarcastically, but if system software (meaning OS kernels, network stacks, device drivers, etc.) were written in better languages, our computer systems could be far safer and more robust, quality of life could be better, and the benefit to productivity and the global economy could be substantial.
For the computing industry, it is one of the great tragedies of our time that C and its derivatives have become so entrenched. There is absolutely no reason we can't have a systems programming language that offers the necessary low-level control without the limited programming model, error-prone syntax and weak safety features of C.
Unfortunately, it is momentum and ubiquity that keep most of the industry using C and its brethren, not technical merit. The vast ecosystem surrounding C is hard to beat for scale. There is promising work being done in some places, Rust for example, but I know of no practical alternative that is ready for production use today.
Of course, OpenSSL itself isn't running at the level of an OS kernel, so it doesn't need the same degree of low-level access anyway. But there is a wider point here about much more than just OpenSSL.
Please read what Raymond actually wrote in The Cathedral and the Bazaar. My criticism applies equally to his more formal definition of Linus's Law, and to his extended argument as a whole.
No-one (sensible) claims that any code review process will find absolutely all bugs. But Raymond's article seems to be arguing that having enough developers and testers on a project will inevitably get you very close.
And yet, we are talking about this in a discussion about a severe bug in one of the most widely used OSS projects on the planet that went undiscovered (or at least unreported and unfixed) for years.
It is only by having hundreds or thousands of them that you can hope to catch the ones that would otherwise go unnoticed.
But how many FOSS projects really have diligent review of all their code by anything like that many people? For many projects, getting a change accepted requires only the approval of one or two others. Activities like the current detailed review of TrueCrypt are the exception, not the rule.
If you really want a dramatic improvement in catching these kinds of bugs and you've already got a respectable code review process in place, you'd probably do better by considering complementary strategies instead of pursuing ever diminishing returns from throwing more people into the same informal code review process. Choose safer programming languages that don't admit certain kinds of programmer error in the first place. Employ formal methods to make sure the underlying algorithms are sound. Adopt different testing strategies.
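To illustrate the point about safer languages, here is a minimal sketch in Rust (one of the alternatives mentioned earlier). The scenario is hypothetical and simplified, but it shows the relevant mechanism: where C will happily read past the end of a buffer if the caller lies about a length (the shape of the OpenSSL bug under discussion), Rust's checked slice access makes that mistake unrepresentable without an explicit decision to handle it. The function name and example values are my own invention for illustration.

```rust
// A request claims its payload is `claimed_len` bytes long. In C, copying
// `claimed_len` bytes out of `buf` without a manual bounds check silently
// leaks adjacent memory. In Rust, range-based `get` returns an Option,
// so an over-long claim yields None instead of an over-read.
fn read_payload(buf: &[u8], claimed_len: usize) -> Option<&[u8]> {
    buf.get(..claimed_len)
}

fn main() {
    let buf = [1u8, 2, 3, 4];

    // An honest length works as expected.
    assert_eq!(read_payload(&buf, 4), Some(&buf[..]));

    // A dishonest length is rejected rather than leaking memory.
    assert_eq!(read_payload(&buf, 10_000), None);

    println!("ok");
}
```

The point is not that Rust programmers never write bugs, but that this particular class of error is caught by the language itself rather than depending on a reviewer happening to notice a missing bounds check.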
Sadly, using safer programming languages still means swimming against the flow of mainstream programming tools, while formal methods, or any testing strategy beyond an automated unit test suite, sound like heavyweight design to some people. That upsets all the newbies who think being "agile" and "moving fast and breaking things" are how you make good software when quality really matters.
Improving software quality is in significant part a social problem, but the solution is not requiring more people to be reviewers; it's getting more people to understand that just having more reviewers is not enough.
Raymond's proposition is theoretically sound
No, it isn't. It's nonsense and it always has been.
There is plenty of evidence for the effectiveness of good code reviews, but most of it shows rapidly diminishing returns with the number of reviewers. You get much of the benefit from having even one or two additional people read over something. By the time you've had more than four or five people take a look, the difference in effectiveness from adding more barely even registers, unless one of the additional reviewers has some sort of unique perspective or expertise that makes them not like the others.
Given that almost every major FOSS system software project has had its share of security bugs, there is really very little evidence to support Raymond's claim at all. It's not like it has ever been taken seriously outside the FOSS fan club, but there are a lot of FOSS fans on Slashdot, and so plenty of comments (and positive moderations) reinforce the groupthink as though it's some inherent truth.
Justice is never found in applying the law differently to different groups.
Perhaps. However, there is an inherent inequality here because the law inevitably grants certain additional rights and powers to police officers that are not enjoyed by the common citizen. It is not unreasonable to assign proportionately greater responsibility to them as well.
If their union is so powerful, how come they're subject to routine monitoring in this way at work?
It looks like the negative publicity from a not so great track record is exerting more pressure than anyone's union right now.
That's certainly a plausible alternative, I agree. Any way you cut it, the bottom line is that providing ongoing support for ageing software is very expensive, not to mention actively harming efforts to migrate a customer base to newer and better versions. No business is going to accept that kind of obligation without charging a realistic amount of money for meeting it, one way or another.
And Mark Zuckerberg has a lot more money than me, so I should go start a social network and then I'll surely become a billionaire in a few years.
You can't base credible economic policy and market regulation on carefully selected outliers like that. For every out-of-the-park success story like XP, there is a Vista (or usually many Vistas) where other developers put in time and money on the same scale and failed spectacularly.
If you want to use technology that works on that basis, nothing is stopping you from restricting yourself to Open Source software where you know it will be a viable option.
Microsoft has advertised the length of support that will be available for its major product lines for years in advance of their retirement dates, has already supported some of those products far beyond what any similar software developer in the industry offers, and offers newer products of similar types with ongoing support for many years to come. No-one can seriously claim they thought they were entitled to more than this when they bought Microsoft's products, and the failure of large organisations to plan an effective IT strategy given, again, several years of advance notice of what was going to happen is neither Microsoft's fault nor their responsibility.
to support they could simply release their internal documentation, source code, diagrams etc. to the public
That isn't a simple matter at all if you're still developing new versions of your product based on the same materials. You are proposing that a business whose primary asset is its collective knowledge should be required to give away the most important knowledge it has accumulated, at great cost, up to a certain point, just to absolve it of a hypothetical liability that it was never realistic to assign to that business in the first place.
That would be a fair compromise considering that IT is one of the very few industries that get away with delivering faulty, unstable and insecure products as the accepted norm. If houses or clothes or refrigerators were produced like software...
...then a lot of houses would need expensive repairs after a few years to fix damage caused by subsidence, pests, unanticipated weather conditions, or the neighbours causing damage while doing work on their own property, while cheap clothes would be some of the most frequently returned items in stores because they fall apart after they've hardly been worn due to economising on manufacturing techniques and materials?
People talk a lot about how software is unreliable and breaks all the time, but the reality is that most consumer software is remarkably resilient given the many and varied jobs it needs to do and the cost of making it. I'm writing this on a Windows 7 PC that I've had for several years. I can count on my fingers the total number of times Windows has fallen over, and as far as I know all of them were actually caused by either a hardware failure or a dodgy update to some additional system software like a device driver or security tool, not by Windows itself. Sure, some software isn't up to scratch and the people who make it deserve to be criticised, but I don't think it's fair to claim that software in general is some sort of unusable, bug-ridden mess.
Once software is made, it is trivial to make enough for everyone.
It's that first part that is the kicker, though, isn't it? It's great that digital works can be reproduced and distributed with very low marginal costs, but you still have to cover the sunk costs of the initial development. Those are huge for this kind of software project, and they are typically paid in advance and with no guaranteed level of return. The economic model has to take that into account or it won't work.
No one is suggesting that Microsoft should be compelled to do anything they don't want to do.
Of course they are. They are saying that as well as developing their code to make new products that they then sell (which they presumably want to do) Microsoft should also incur an indefinite, substantial, unfunded obligation to help people competing with those new products and using Microsoft's own code from their earlier products to do it. There is absolutely no business benefit to Microsoft for doing this, and the overheads involved are far higher than could reasonably be justified on the grounds of promoting general competition in the market.
That's a fair point, but I think it's more analogous to requiring Microsoft to disclose details of things like APIs and data formats required for interoperability (as others in this discussion have suggested) than to requiring Microsoft to disclose their source code. I think promoting interoperability and compatibility is generally beneficial, and as such I don't have the same objections to requiring a reasonable level of API/format disclosure, where "reasonable" takes into account both the usefulness of any given disclosure and the burden imposed by requiring it.
But to swing out of the realm of opinion, you compare Windows XP to "OpenSource darlings like firefox" whose long-term support is measured in "months, not years". This is a bad comparison.
Fair enough, though it wasn't really meant as a direct comparison, more an illustration of how much effort is required to support old software for extended periods.
A better comparison would be Ubuntu LTS which includes firefox and whose support is measured in years not months.
It is. In fact, the period is now five years for both desktop and server versions.
Again, just to put that in perspective, Windows 7 (two generations after Windows XP) was released around 4.5 years ago.
I think it would be a great idea to require Microsoft to "open up" even if it was outside of their interests. Hell if Windows 8 could not compete with community supported open source XP, it still means that people get better software
Well, it would be great, in the short term, for everyone except Microsoft. But who is going to build the next software product that is so successful that almost everyone uses it for nearly a decade in that world?
Perhaps if Microsoft hadn't developed a series of OS's that treated security like a two-bit whore, none of this would be a major issue.
That's just silly. Microsoft has done a huge amount to further software security over the years, and objectively their track record isn't bad at all compared to other large software vendors. They had the misfortune to be the biggest target in town for a long time (today that "honour" probably goes to Android) and so had to deal with both a relatively high number of attacks and a lot of bad press when something got through.