
Comment: Re:PS: how do you think it gets on the distro mirr (Score 1) 151

by Anonymous Brave Guy (#46791395) Attached to: Heartbleed Sparks 'Responsible' Disclosure Debate

I think there is a qualitative difference between notifying large end users like Facebook in advance, and notifying people in the distribution system for a general release. It's the former that inherently means the people who aren't large end users with privileged access get left exposed for longer than necessary, and that's what I'm objecting to.

Comment: Re:Wrong math. 2 years of vulnerability. (Score 1) 151

by Anonymous Brave Guy (#46791385) Attached to: Heartbleed Sparks 'Responsible' Disclosure Debate

You're latching onto this specific case, perhaps because you have some connection to it, but I'm talking about the general principle here. In general, it is not unreasonable to assume that if a vulnerability has been found by two parties in rapid succession, there may be a common factor involved. That common factor may mean other parties will find it in the same time frame, and an extra day of exposure may therefore be very significant.

Obviously most serious security bugs don't sit there for years, then have two groups discover them at almost the same time, as seems to have happened in this case, and need half the known Internet to update their systems as a precaution because no-one really knows whether they've been damaged by the vulnerability at any time over the past couple of years.

ROTFL. Yep, large corporate bureaucracies, they ALWAYS do exactly the right thing, in a matter of hours.

If it's that funny to you, why are you defending giving them a day of advance warning? Some of us had a patch rolled out within a couple of hours of the public announcement, and presumably we could have done so a day earlier in the alternative situation. Once again, in this case, one day in two years obviously isn't that significant, as we're all going to have to assume keys were compromised and set up new ones anyway. But if this were something that only got committed three days ago, it would be a different story.

Comment: Re:Not that good (Score 1) 151

by Anonymous Brave Guy (#46791359) Attached to: Heartbleed Sparks 'Responsible' Disclosure Debate

Since "people" cannot be negative, by necessity (dev team) + (other people) >= (dev team)

You're still assuming that the dev teams, or to be more precise the parts of the dev teams who will actively review new code, are the same size. That isn't necessarily true at all, so the "provided everything else is equal" part of your last sentence is the problem here.

Comment: Re:The power of EULAs only goes so far (Score 1) 211

by Anonymous Brave Guy (#46791343) Attached to: Click Like? You May Have Given Up the Right To Sue

My point is there's no "might" about it - as long as the arbitration clause applies to both parties and the arbiter is a neutral one, it's a perfectly legal and enforceable clause...

It's still highly uncertain whether a court would find a contract to exist at all under these conditions.

Even if it does, you can always go to court and argue for your right to be there because the other guy's term about arbitration is unenforceable for whatever reason. The court might disagree and send you back to arbitration, but they won't stop you coming in the door in the first place.

Comment: Re:Not that good (Score 1) 151

by Anonymous Brave Guy (#46788117) Attached to: Heartbleed Sparks 'Responsible' Disclosure Debate

However, no matter how you look at it, the number of people who actually do will always be equal or higher than for closed source software.

Why? I see little evidence that this is happening in general.

Most established OSS projects seem to require no more than one or two reviewers to approve a patch before it goes in, and then there is no guarantee that anyone will ever look at that code again later.

How does that guarantee that more experts will review a given piece of security code than in a proprietary, closed-source, locked-up development organisation that also has mandatory code reviews?

Comment: False sense of security (Score 1) 151

by Anonymous Brave Guy (#46788067) Attached to: Heartbleed Sparks 'Responsible' Disclosure Debate

The whole point of OSS is that I do not need to trust it. I can review it if I please.

But you didn't review it and find the vulnerability, did you?

And apparently, despite the significance and widespread use of this particular piece of OSS, for a long time no-one else did either, or at least no-one who's on our side did.

Your argument is based on theory. The AC's point is based on pragmatism. It's potentially an advantage that OSS can be reviewed by anyone, but a lot of the time that gives a false sense of security. What matters isn't what could happen, it's what actually does happen.

Comment: But what if someone *is* harmed by the delay? (Score 1) 151

by Anonymous Brave Guy (#46788007) Attached to: Heartbleed Sparks 'Responsible' Disclosure Debate

Nobody was harmed by hearing about it on Tuesday rather than on Monday

Isn't that assumption where the whole argument for notifying selected parties in advance breaks down?

If you notify OpenSSL, and they push a patch out in the normal way, then anyone on the appropriate security mailing list has the chance to apply that patch immediately. Realistically, particularly for smaller organisations, it will often be applied when their distro's mirrors pick it up, but that was typically within a couple of hours for Heartbleed, as the security and backporting guys did a great job at basically all of the main distros on this one.

As soon as you start picking and choosing who else to tell first, yes, maybe you protect some large sites, but those large sites are run by large groups of people. For one thing, they probably have full time security staff who will get the notification as soon as it's published, understand its significance, and act on it immediately. For another thing, they probably have good automated deployment systems that will systematically patch all their affected servers reliably and quickly.

(I accept that this doesn't apply to those who have products with embedded networking software, like the Cisco and Juniper cases. But they can still issue patches to close the vulnerability quickly, and the kinds of people running high-end networking hardware that is accessible from outside a firewall are also probably going to apply their patches reasonably quickly.)

On the flip side, as long as you're giving advance warning to those high profile organisations, you're leaving everyone else unprotected. In this case, it appears that at least two different parties identified the vulnerability within a few days of each other, but the vulnerability had been present for much longer. There is no guarantee that others didn't already know about it and weren't already exploiting it. In general, though it may not apply in this specific case, if some common factor prompted the two contemporaneous discoveries, it might well be the case that additional, hostile parties have found it around the same time too.

In other words, you can't possibly know that nobody was harmed by hearing about it a day later. If a hostile party got hold of the vulnerability on the first day, maybe prompted by whatever also caused the benevolent parties to discover it or by some insider information, then they had a whole day to attack everyone who wasn't blessed with the early knowledge, instead of a couple of hours. This is not a good thing.

Comment: Re:The power of EULAs only goes so far (Score 1) 211

by Anonymous Brave Guy (#46787759) Attached to: Click Like? You May Have Given Up the Right To Sue

As I did say in my previous post, but you omitted when quoting it, this might stand up if all parties agreed to the arbitration. Sometimes C2C contracts include these kinds of terms, for example.

However, it's going to be tough in most jurisdictions (obviously not everyone in the world is subject to the US legal system) to convince a judge that such a heavyweight term in a contract of adhesion that one of the parties may not even have realised existed should be enforced. For example, in my country we have the Unfair Terms in Consumer Contracts Regulations 1999. If you like, you can search down that page for the words "Compulsory arbitration clauses are automatically unfair for the purposes of most consumer disputes" and you can look up the law itself to see why.

Of course, all of this presumes that a contract even exists in the first place, which is another obvious avenue of attack against this strategy. For example, contracts generally require some form of consideration in both directions. What is in it for the guy who clicked 'Like' to accept such a draconian restriction in return? And if the original action was simply buying cereal from your local store, then the contract is almost certainly between you and the store, not the cereal company. While legal systems have been known to recognise third party rights under some conditions (again, varying by jurisdiction etc.) you'd probably come back to things like whether such terms were an expected part of the contract of sale, and whether they were unfair/unconscionable. And guess who is going to rule on that...

Comment: Re:The power of EULAs only goes so far (Score 5, Informative) 211

by Anonymous Brave Guy (#46783233) Attached to: Click Like? You May Have Given Up the Right To Sue

Indeed. Good luck arguing in court that someone gave up their right to sue. The legal profession tends to be awfully sceptical of such measures, and none more so than judges. While it might stand up if, for example, all parties agreed to use some reasonable form of binding arbitration instead, it's hard to imagine the big company would get anywhere against the little customer under these conditions.

Comment: Need for better systems programming languages (Score 1) 579

I suspect you meant that sarcastically, but if system software (meaning OS kernels, network stacks, device drivers, etc.) were written in better languages, our computer systems could be far safer and more robust, quality of life could be better, and the benefit to productivity and the global economy could be substantial.

For the computing industry, it is one of the great tragedies of our time that C and its derivatives have become so entrenched. There is absolutely no reason we can't have a systems programming language that offers the necessary low-level control without the limited programming model, error-prone syntax and weak safety features of C.

Unfortunately, it is momentum and ubiquity that keep most of the industry using C and its brethren, not technical merit. The vast ecosystem surrounding C is hard to beat for scale. There is promising work being done in some places, Rust for example, but I know of no practical alternative that is ready for production use today.

Of course, OpenSSL itself isn't running at the level of an OS kernel, so it doesn't need the same degree of low-level access anyway. But there is a wider point here about much more than just OpenSSL.

Comment: Re:Why is Raymond's claim theoretically sound? (Score 1) 579

Please read what Raymond actually wrote in The Cathedral and the Bazaar. My criticism applies equally to his more formal definition of Linus's Law, and to his extended argument as a whole.

No-one (sensible) claims that any code review process will find absolutely all bugs. But Raymond's article seems to be arguing that having enough developers and testers on a project will inevitably get you very close.

And yet, we are talking about this in a discussion about a severe bug in one of the most widely used OSS projects on the planet that went undiscovered (or at least unreported and unfixed) for years.

Comment: Re:Why is Raymond's claim theoretically sound? (Score 0) 579

It is only by having hundred or thousands of them that you can hope to catch those ones that would otherwise go unnoticed.

But how many FOSS projects really have diligent review of all their code by anything like that many people? For many projects, getting a change accepted requires only the approval of one or two others. Activities like the current detailed review of TrueCrypt are the exception, not the rule.

If you really want a dramatic improvement in catching these kinds of bugs and you've already got a respectable code review process in place, you'd probably do better by considering complementary strategies instead of pursuing ever diminishing returns from throwing more people into the same informal code review process. Choose safer programming languages that don't admit certain kinds of programmer error in the first place. Employ formal methods to make sure the underlying algorithms are sound. Adopt different testing strategies.

Sadly, using safer programming languages still means swimming against the flow of mainstream programming tools. Meanwhile, formal methods, or any testing strategy beyond an automated unit test suite, sound like heavyweight design to some people, and upset all the newbies who think being "agile" and "moving fast and breaking things" are how you make good software when quality really matters.

Improving software quality is in significant part a social problem, but the solution is not requiring more people to be reviewers; it's getting more people to understand that just having more reviewers is not enough.

Comment: Why is Raymond's claim theoretically sound? (Score 5, Interesting) 579

Raymond's proposition is theoretically sound

No, it isn't. It's nonsense and it always has been.

There is plenty of evidence for the effectiveness of good code reviews, but most of it shows rapidly diminishing returns as the number of reviewers grows. You get much of the benefit from having even one or two additional people read over something. By the time four or five people have taken a look, the difference from adding more barely registers, unless one of the additional reviewers brings some unique perspective or expertise that the others lack.

Given that almost every major FOSS system software project has had its share of security bugs, there is really very little evidence to support Raymond's claim at all. It's not like it has ever been taken seriously outside the FOSS fan club, but there are a lot of FOSS fans on Slashdot, and so plenty of comments (and positive moderations) reinforce the groupthink as though it's some inherent truth.

Comment: Re:Easy fix (Score 4, Insightful) 322

Justice is never found in applying the law differently to different groups.

Perhaps. However, there is an inherent inequality here because the law inevitably grants certain additional rights and powers to police officers that are not enjoyed by the common citizen. It is not unreasonable to assign proportionately greater responsibility to them as well.
