Comment: Re:Possibly Worse Than That (Score 1) 214

by BronsCon (#46783739) Attached to: Click Like? You May Have Given Up the Right To Sue
Can I sue them over the fact that I can no longer enjoy their products, thereby reducing my quality of life? The reason, of course, is liability: I cannot and will not take on liability for their actions, for any disastrous consequences of failures in their quality control processes, or for side effects caused by their current or future products. Yet that is precisely the choice they're giving me: "Forgo being our customer, or take liability for our actions."

Comment: Re:It doesn't. (Score 1) 580

by BronsCon (#46763179) Attached to: How Does Heartbleed Alter the 'Open Source Is Safer' Discussion?
Absolutely! This isn't something that could have been foreseen, but I've been noticing more of a tendency toward "well, I can't stop everything, so why bother" lately, and I'm beyond not sure I like it; I'm sure I don't. You seem to get this, thank you for giving me hope for humanity. :)

Comment: Re:It doesn't. (Score 1) 580

by BronsCon (#46762971) Attached to: How Does Heartbleed Alter the 'Open Source Is Safer' Discussion?
Or, you know, fuzz the hell out of it until you find something, like I said in my post. No source necessary. At least with open source, I can fuzz it until I find a vulnerability, then find the code that caused the vulnerability and fix it.

I mean, I suppose if I got my hands on the source for IE, I could fix that, as well, but why go through the trouble when I can readily obtain the source for a number of other browsers?
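
For the record, this is roughly what "fuzz away" looks like against an open source target -- a minimal libFuzzer-style sketch, with parse_record() as a hypothetical stand-in for whatever you're actually testing (the entry-point signature is the one libFuzzer really uses):

    /* Minimal libFuzzer-style harness. parse_record() is a
     * hypothetical stand-in for the attack surface under test. */
    #include <stddef.h>
    #include <stdint.h>

    extern int parse_record(const uint8_t *buf, size_t len);

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);  /* crashes and sanitizer reports
                                      surface as findings */
        return 0;                  /* values other than 0 and -1 are
                                      reserved by libFuzzer */
    }

Build it with something like clang -g -fsanitize=fuzzer,address harness.c target.c and let it run. With closed source you're stuck fuzzing the binary from the outside, which is exactly the asymmetry I'm describing.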

Comment: Re:It doesn't. (Score 1) 580

by BronsCon (#46762107) Attached to: How Does Heartbleed Alter the 'Open Source Is Safer' Discussion?
Reading comprehension? You just agreed with me... I ended my comment by pointing out that fuzzing is super-effective. You can ignore the source and just fuzz away with open source, just like you're forced to do with closed. And, as a user, you can fix vulnerabilities in open source software, rather than having to wait for the developer to do so. In fact, as a user, you can fuzz *and* fix your open source application.

That is to say, having the source doesn't make finding vulns easier (or harder, as you imply); it does, however, make fixing them easier.

Comment: It doesn't. (Score 4, Insightful) 580

by BronsCon (#46761181) Attached to: How Does Heartbleed Alter the 'Open Source Is Safer' Discussion?
It's 6 of one, half-dozen of the other.

Anyone can view the source of an open source project, which means anyone can find vulnerabilities in it: hackers wishing to exploit the software, as well as users wishing to audit and fix it. But someone who knows what they're doing has to actually look at the source for that to matter, and that rarely happens.

Hackers must black-box closed source software to find exploits, which makes that more difficult than finding them in open source software; the flip side is that the exploits can only be fixed by the few people who have the source. If the hacker doesn't disclose the exploit and the people with access to the code don't look for it, it goes unpatched forever.

Open source software does provide an advantage to both sides: hackers can find exploits more easily, and users can fix them more easily. With closed source, you're at the mercy of the vendor to fix their code but, at the same time, it's more difficult for a hacker to find a vulnerability without access to the source.

Then, we consider how good fuzzing techniques have gotten and... well, as it becomes easier to find vulnerabilities in closed source software, open source starts to look better.
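
Heartbleed itself is a textbook member of the class fuzzing catches: the heartbeat handler trusted an attacker-supplied length field. A sketch of the bug class (hypothetical names and layout, not OpenSSL's actual code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* rec points at an attacker-supplied heartbeat record of rec_len
     * bytes: a 2-byte claimed payload length, then the payload. */
    uint8_t *heartbeat_reply(const uint8_t *rec, size_t rec_len) {
        if (rec_len < 2)
            return NULL;
        uint16_t payload_len = (uint16_t)((rec[0] << 8) | rec[1]);

        /* The missing check: drop this, and the memcpy below echoes
         * back up to ~64KB of whatever heap memory follows the
         * record -- keys, passwords, session cookies. */
        if ((size_t)payload_len + 2 > rec_len)
            return NULL;

        uint8_t *reply = malloc(payload_len);
        if (reply != NULL)
            memcpy(reply, rec + 2, payload_len);
        return reply;
    }

Fuzz the version without that check under AddressSanitizer and it falls over almost immediately, which is rather the point.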

Comment: Re:de Raadt (Score 1) 289

by bmajik (#46761037) Attached to: OpenBSD Team Cleaning Up OpenSSL

OK, I actually think you, Theo, and I all agree :)

1) We don't think a specific technical change would have _prevented_ the issue.

2) We all agree that better software engineering practices would have found this bug sooner, and maybe even prevented it from ever getting checked in. (E.g., suppose the codebase was using malloc primitives that static analysis tools could "see across", and that the code was analysis-clean. Could this bug have existed?)
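
To make the "see across" idea concrete, a toy sketch of my own (not OpenSSL's code): a thin wrapper that forwards straight to malloc/free is something an analyzer can model, while an opaque freelist is just pointer shuffling to it, so lifetime bugs on recycled blocks go dark.

    #include <stdlib.h>

    /* Analyzable: the tool sees these are just malloc/free and can
     * track the lifetime of every block through them. */
    void *xmalloc(size_t n) { return malloc(n); }
    void  xfree(void *p)    { free(p); }

    /* Opaque: to an analyzer this is pointer shuffling, not
     * allocation, so use-after-"free" of a recycled block is
     * invisible. (Toy: one fixed, pointer-sized-or-larger block.) */
    static void *freelist;

    void *fl_alloc(void) {
        if (freelist) {
            void *p = freelist;
            freelist = *(void **)p;
            return p;
        }
        return malloc(64);
    }

    void fl_free(void *p) {
        *(void **)p = freelist;
        freelist = p;
    }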

Comment: Re:de Raadt (Score 1) 289

by bmajik (#46760367) Attached to: OpenBSD Team Cleaning Up OpenSSL

Who has claimed that using the system allocator, all else being equal, would have prevented heartbleed?

Who has claimed that heartbleed was an allocation bug?

I understand what freelists are and do.

The point here is that rigorous software engineering practices -- including the use of evil allocators or static analyzers that could actually understand they were looking at heap routines -- would have pointed out that the code implicated in heartbleed was unreliable and incorrect.

If you read the link you pointed at: after a modification to OpenSSL that let Coverity understand the custom allocator was really just doing memory allocation, Coverity reported 173 additional "use after free" bugs.

There are bug reports from years ago showing that OpenSSL fails with a system allocator.

Don't you suppose that in the process of fixing such bugs, it is likely that correctness issues like this one would have been caught?
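
For illustration, a toy of the mechanism (my own sketch, not OpenSSL's code): the same use-after-free is silent when a freelist recycles the block, and an immediate AddressSanitizer report when built against the system allocator.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #ifdef USE_SYSTEM_ALLOC
    #  define ALLOC()  malloc(64)
    #  define FREE(p)  free(p)
    #else
    /* Toy freelist: freed blocks come back with their contents intact,
     * except the first pointer-sized bytes. */
    static void *freelist;
    static void *fl_alloc(void) {
        if (freelist) { void *p = freelist; freelist = *(void **)p; return p; }
        return malloc(64);
    }
    static void fl_free(void *p) { *(void **)p = freelist; freelist = p; }
    #  define ALLOC()  fl_alloc()
    #  define FREE(p)  fl_free(p)
    #endif

    int main(void) {
        char *s = ALLOC();
        strcpy(s + 8, "secret");
        FREE(s);
        /* Use after free. Freelist build: prints "secret", and no tool
         * that can't see into the freelist complains. System-allocator
         * build: ASan aborts with a heap-use-after-free report here. */
        printf("%s\n", s + 8);
        return 0;
    }

Compile it both ways -- cc -fsanitize=address uaf.c vs. cc -DUSE_SYSTEM_ALLOC -fsanitize=address uaf.c -- and the difference between the two runs is the whole argument.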

Comment: Re:de Raadt (Score 5, Insightful) 289

by bmajik (#46759527) Attached to: OpenBSD Team Cleaning Up OpenSSL

Actually, it is you who are wrong.

Theo's point from the beginning is that a custom allocator was used here, which removed any beneficial effects of both good platform allocators AND "evil" allocator tools.

His response cited a specific instance of the poor software engineering practices behind OpenSSL.

Furthermore, at some point, OpenSSL became behaviorally dependent on its own allocator -- that is, when you tried to use a system allocator, it broke, because the system allocator wasn't handing back the unmodified memory contents you had just freed.

This dependency was known and documented. And not fixed.

IMO, using a custom allocator is a bit like doing your own crypto. "Normal people" shouldn't do it.

If you look at what OpenSSL is:

1) crypto software
2) that is on by default
3) that listens to the public internet
4) that accepts data under the control of attackers

...you should already be squarely in the land of "doing every possible software engineering best practice". This is software that needs to be written differently from "normal" software: held to a higher standard, and correct for correctness' sake.

I would say that taking a hard dependency on my own custom allocator, without investigating _why_ the platform allocator can no longer be used to give correct behavior, is a _worst practice_. And it's especially damning given how critical and how predisposed to exploitability something like OpenSSL is.

Yet that is what the OpenSSL team did. And they knew it. And they didn't care. And it caught up with them.

The point of Theo's remarks is not to say "using a system allocator would have prevented bad code from being exploitable". The point is "having an engineering culture that ran tests using a system allocator and a debugging allocator would have prevented this bad code from staying around as long as it did".

Let people swap the "fast" allocator back in at runtime, if you must. But make damn sure the code is correct enough to pass on "correctness checking" allocators.
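
Concretely, that can be as simple as routing every allocation through swappable hooks, in the spirit of OpenSSL's real CRYPTO_set_mem_functions (the names below are my own sketch, not the actual API):

    #include <stdlib.h>

    /* Default to the system allocator, so tests, valgrind, ASan, and
     * "evil" debugging allocators see every allocation... */
    static void *(*alloc_fn)(size_t) = malloc;
    static void  (*free_fn)(void *)  = free;

    /* ...and let production swap the "fast" freelist in at runtime,
     * after the code passes on the checking allocators. */
    void set_mem_functions(void *(*a)(size_t), void (*f)(void *)) {
        alloc_fn = a;
        free_fn  = f;
    }

    void *lib_malloc(size_t n) { return alloc_fn(n); }
    void  lib_free(void *p)    { free_fn(p); }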
