I think there is a qualitative difference between notifying large end users like Facebook in advance, and notifying people in the distribution system for a general release. It's the former that inherently means the people who aren't large end users with privileged access get left exposed for longer than necessary, and that's what I'm objecting to.
You're latching onto this specific case, perhaps because you have some connection to it, but I'm talking about the general principle here. In general, it is not unreasonable to assume that if a vulnerability has been found by two parties in rapid succession, there may be a common factor involved, which may mean that other parties will also find it in the same time frame, and that an extra day may therefore be very significant.
Obviously most serious security bugs don't sit there for years, then have two groups discover them at almost the same time, as seems to have happened in this case, and need half the known Internet to update their systems as a precaution because no-one really knows whether they've been damaged by the vulnerability at any time over the past couple of years.
ROTFL. Yep, large corporate bureaucracies, they ALWAYS do exactly the right thing, in a matter of hours.
If it's that funny to you, why are you defending giving them a day of advance warning? Some of us did have a patch rolled out within a couple of hours of the public announcement, but presumably we could have rolled it out a day earlier in the alternative situation. Once again, in this case, one day in two years obviously isn't that significant, as we're all going to have to assume keys were compromised and set up new ones anyway. But if this were something that only got committed three days ago, it would be a different story.
Since a count of "people" cannot be negative, by necessity (dev team) + (other people) >= (dev team).
You're still assuming that the dev teams, or to be more precise the parts of the dev teams who will actively review new code, are the same size. That isn't necessarily true at all, so the "provided everything else is equal" part of your last sentence is the problem here.
My point is there's no "might" about it - as long as the arbitration clause applies to both parties and the arbiter is a neutral one, it's a perfectly legal and enforceable clause...
It's still highly uncertain whether a court would find a contract to exist at all under these conditions.
Even if it does, you can always go to court and argue for your right to be there because the other guy's term about arbitration is unenforceable for whatever reason. The court might disagree and send you back to arbitration, but they won't stop you coming in the door in the first place.
What I really don't like about the whole statement behind it is the implied assumption that closed source offered any kind of better protection.
Which statement do you think implied that? I don't see anything about it in this thread.
The ICU is a medical reason. Even a heterosexual can't visit with children there half the time, and if you fail to fill out the correct HIPAA form, the hospital can't even acknowledge that you exist at all.
My Atari 800 home computer is my longest-lasting, hardest-working electronics device. It was built like a tank (the metal shielding alone weighs several pounds).
Other than that, I suppose my alarm clock. I've had it since 1988 and it just keeps going. Nothing fancy - LED display, just a clock with alarm, no radio functionality or anything like that.
However, no matter how you look at it, the number of people who actually do will always be equal to or higher than for closed source software.
Why? I see little evidence that this is happening in general.
Most established OSS projects seem to require no more than one or two reviewers to approve a patch before it goes in, and then there is no guarantee that anyone will ever look at that code again later.
How does that guarantee that more experts will review a given piece of security code than in a proprietary, closed-source, locked-up development organisation that also has mandatory code reviews?
The whole point of OSS is that I do not need to trust it. I can review it if I please.
But you didn't review it and find the vulnerability, did you?
And apparently, despite the significance and widespread use of this particular piece of OSS, for a long time no-one else did either, or at least no-one who's on our side did.
Your argument is based on theory. The AC's point is based on pragmatism. It's potentially an advantage that OSS can be reviewed by anyone, but a lot of the time that gives a false sense of security. What matters isn't what could happen, it's what actually does happen.
Nobody was harmed by hearing about it on Tuesday rather than on Monday.
Isn't that assumption where the whole argument for notifying selected parties in advance breaks down?
If you notify OpenSSL, and they push a patch out in the normal way, then anyone on the appropriate security mailing list has the chance to apply that patch immediately. Realistically, particularly for smaller organisations, it will often be applied when their distro's mirrors pick it up, but that was typically within a couple of hours for Heartbleed, as the security and backporting guys did a great job at basically all of the main distros on this one.
As soon as you start picking and choosing who else to tell first, yes, maybe you protect some large sites, but those large sites are run by large groups of people. For one thing, they probably have full time security staff who will get the notification as soon as it's published, understand its significance, and act on it immediately. For another thing, they probably have good automated deployment systems that will systematically patch all their affected servers reliably and quickly.
(I accept that this doesn't apply to those who have products with embedded networking software, like the Cisco and Juniper cases. But they can still issue patches to close the vulnerability quickly, and the kinds of people running high-end networking hardware that is accessible from outside a firewall are also probably going to apply their patches reasonably quickly.)
On the flip side, as long as you're giving advance warning to those high profile organisations, you're leaving everyone else unprotected. In this case, it appears that at least two different parties identified the vulnerability within a few days of each other, but the vulnerability had been present for much longer. There is no guarantee that others didn't already know about it and weren't already exploiting it. In general, though it may not apply in this specific case, if some common factor prompted the two contemporaneous discoveries, it might well be the case that additional, hostile parties have found it around the same time too.
In other words, you can't possibly know that nobody was harmed by hearing about it a day later. If a hostile party got hold of the vulnerability on the first day, maybe prompted by whatever also caused the benevolent parties to discover it or by some insider information, then they had a whole day to attack everyone who wasn't blessed with the early knowledge, instead of a couple of hours. This is not a good thing.
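As an aside on the "patch immediately" point: here's a minimal sketch of the kind of first-pass triage an admin might script, assuming only the publicly documented affected range for Heartbleed (CVE-2014-0160 affected upstream OpenSSL 1.0.1 through 1.0.1f; 1.0.1g fixed it). The `classify` helper name is made up for illustration, and note that distros backported the fix without bumping the version string, so a package changelog check is still needed after this.

```shell
# Hypothetical helper: classify an upstream OpenSSL version string against
# the Heartbleed-affected range (1.0.1 through 1.0.1f, per CVE-2014-0160).
classify() {
    case "$1" in
        1.0.1|1.0.1[a-f]) echo "potentially vulnerable" ;;
        *)                echo "outside affected upstream range" ;;
    esac
}

# First pass on a live host. This can give a false positive on distros
# that backported the fix while keeping the old version string.
classify "$(openssl version | awk '{print $2}')"
```

Obviously this is only the detection step; the point of the argument above is that with a normal public release, every admin could have run something like this and patched within hours, not just the chosen few.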
As I did say in my previous post, but you omitted when quoting it, this might stand up if all parties agreed to the arbitration. Sometimes C2C contracts include these kinds of terms, for example.
However, it's going to be tough in most jurisdictions (obviously not everyone in the world is subject to the US legal system) to convince a judge that such a heavyweight term in a contract of adhesion that one of the parties may not even have realised existed should be enforced. For example, in my country we have the Unfair Terms in Consumer Contracts Regulations 1999. If you like, you can search down that page for the words "Compulsory arbitration clauses are automatically unfair for the purposes of most consumer disputes" and you can look up the law itself to see why.
Of course, all of this presumes that a contract even exists in the first place, which is another obvious avenue of attack against this strategy. For example, contracts generally require some form of consideration in both directions. What is in it for the guy who clicked 'Like' to accept such a draconian restriction in return? And if the original action was simply buying cereal from your local store, then the contract is almost certainly between you and the store, not the cereal company. While legal systems have been known to recognise third party rights under some conditions (again, varying by jurisdiction etc.) you'd probably come back to things like whether such terms were an expected part of the contract of sale, and whether they were unfair/unconscionable. And guess who is going to rule on that...
Indeed. Good luck arguing in court that someone gave up their right to sue. The legal profession tends to be awfully sceptical of such measures, and none more so than judges. While it might stand up if, for example, all parties agreed to use some reasonable form of binding arbitration instead, it's hard to imagine the big company would get anywhere against the little customer under these conditions.
You can if the power you are given is entirely superficial. Like Obamacare.
You mean 1997.