There's a difference?
sysadmin, firewall admin - let's not pick nits here. The point is that there are mitigating measures, and if signing off on something that prevents your company secrets leaking out to the Internet without you even noticing takes more than 24 hours, then your incident response procedures are broken and you can hire me for a workshop to improve them dramatically.
Yeah, there was absolutely nothing anyone could do. Oh wait, except for this brutally complex and technically challenging thing right from the official vulnerability announcement:
This issue can be addressed by recompiling OpenSSL with the -DOPENSSL_NO_HEARTBEATS flag. Software that uses OpenSSL, such as Apache or Nginx would need to be restarted for the changes to take effect.
That was definitely not a feasible option for anyone on the planet...
You are right on those.
Except for the "nothing can be done" part. That's not your judgement call to make. There is always at least one option - pulling the power plug - and it might well be a feasible temporary solution for some of the people affected.
It's human nature to occasionally (or always) speed and break minor traffic laws.
Indeed. I just had an argument with a local neighborhood group. They've gone and posted the speed limit at 10kph, but they don't want people to actually drive 10kph - they even came out and admitted that. They got the idea that you set the limit 10-15kph below what you actually want people to do, so they set it at 10kph to get people to drive 15 to 25 instead.
The problem, though, is that with the limit set at 10kph and the expectation that we drive 15-25, we are legally doing 50% to 100% (and beyond) over the posted speed limit, which as you can imagine is not merely 'speeding' but 'excessive speeding' and 'reckless driving' per the letter of the law. Sure, the cops are probably never going to bother with a speed trap to nail me going a measly 22kph, but an automated GPS insurance monitoring system... will probably record that I habitually do double the speed limit... and assess my premiums accordingly.
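For what it's worth, the arithmetic checks out; a throwaway helper (my own, purely illustrative) shows how far over a posted limit those speeds land:

```python
def percent_over(speed_kph: float, limit_kph: float) -> float:
    """How far over the posted limit a given speed is, as a percentage."""
    return (speed_kph - limit_kph) * 100 / limit_kph

# Against a 10 kph limit:
#   15 kph -> 50% over
#   20 kph -> 100% over (double the limit)
#   22 kph -> 120% over, i.e. more than double
```

So even the "measly" 22kph really is beyond double the posted limit, exactly as the insurance monitor would record it.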
You're not really going to humiliate yourself by continuing this, are you?
Is it in either the Kerbal Space Program or Elite: Dangerous?
If I can't launch it or blow it up, how can I know if it really exists?
Ok, the envelope game. You can rework it to say the second envelope contains the next vulnerability in the queue of vulnerabilities. An empty queue is just as valid as a non-empty one, so if there are no further flaws then the envelope is empty. That way, all states are handled identically. What you REALLY want to do, though, is add a third envelope - also the next item in the queue, but from QA. You do NOT know which envelope contains the most valuable prize, but unless two bugs are found simultaneously (in which case you have bigger problems than game theory), you absolutely know two of the envelopes contain nothing remotely as valuable as the third. If no bugs are known at the time, or no more exist - essentially the same thing, as you can't prove completeness and correctness at the same time - then the thousand dollars is the valuable one.
Monty Hall knows what is in two of the envelopes, but not what is in the third. Assuming simultaneous bug finds can be ignored, he can guess. Whichever envelope you choose, he will pick the least valuable envelope and show you that it is empty. Should you stick with your original choice or switch envelopes?
Clearly, this outcome will differ from the scenario in the original field manual. Unless you understand why it is different in outcome, you cannot evaluate a bounty program.
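For comparison, the classic version of the game (the host always knows where the prize is and always opens a losing envelope you didn't pick) is easy to simulate. This sketch is mine and assumes exactly those classic rules, not the variant above where the host is ignorant of one envelope:

```python
import random

def play(switch: bool, trials: int = 100_000) -> float:
    """Simulate the classic Monty Hall game; return the win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)    # envelope holding the valuable prize
        choice = random.randrange(3)   # player's initial pick
        # Host opens an envelope that is neither the prize nor the pick
        opened = next(i for i in range(3) if i != prize and i != choice)
        if switch:
            choice = next(i for i in range(3) if i != choice and i != opened)
        wins += (choice == prize)
    return wins / trials

# Switching wins about 2/3 of the time; sticking wins about 1/3.
```

Run it and you'll see why the bounty-program variant, where the host can only guess about the third envelope, has to produce different numbers.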
Now, onto the automotive software example. Let us say that locating bugs takes constant time for the same effort. Sending the software architect on a one-way trip to Siberia is definitely step one. Proper encapsulation and modularization is utterly fundamental. Constant time means the First Law of Coding has been broken - a worse misdeed than breaking the First Law of Time and the First Law of Robotics on a first date. You simply can't produce enough similar bugs any other way.
It also means the architect broke the Second Law of Coding - ringfence vulnerable code and validate all inputs to it. By specifically isolating dangerous code in this way - a widely used method - you make misbehaviour essentially impossible. The dodgy code may still be there, but it can't be fed data outside the range for which it is safe.
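As a sketch of what that ringfencing looks like in practice - all the names and the 0-300 range here are invented for illustration, not taken from any real automotive codebase:

```python
def _unsafe_parse(buf: bytes) -> int:
    # Stand-in for legacy "dodgy" code that is only safe
    # for short, ASCII-digit input.
    return int(buf)

def parse_speed(buf: bytes) -> int:
    """Ringfence: every input is validated before the dangerous code sees it."""
    if not 1 <= len(buf) <= 3:
        raise ValueError("length out of range")
    if not buf.isdigit():
        raise ValueError("non-digit input")
    value = _unsafe_parse(buf)     # only ever reached with safe input
    if not 0 <= value <= 300:
        raise ValueError("value out of range")
    return value
```

The vulnerable routine may still be buggy, but nothing outside its safe input range can ever reach it, which is the whole point of the Second Law.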
Finally, it means the programmers failed to read the CERT Secure Coding guidelines, failed to test (unit and integrated!) correctly, likely didn't bother with static checkers, failed to enable compiler warning flags and basically failed to think. Thoughtlessness qualifies them for the Pitcairn Islands. One way.
With the Pitcairns now overrun by unemployed automotive software engineers, society there will collapse and Thunderdome v1.0a will be built! With a patchset to be released, fixing bugs in harnesses and weapons, in coming months.
They didn't, apparently, as they were eager to get their hands on him.
What revisionist history are you imagining?
The *only* reason he's in Russia is *we* trapped him there.
I bought my Focus 2001 keyboard some time in the early '90s because it was cheaper than the IBM but still had clicky keys and a solid feel. I'm using it right now with an AT->ps2 adapter plugged in to a ps2->USB adapter.
While you are technically correct, the reality is that the most serious security vulnerabilities are almost all directly related to buffer overruns (on read or write), allowing an attacker to read or write arbitrary memory. Everything else is a second-class citizen by comparison.
In my fairly long experience, there are ten vulnerabilities introduced at the design stage for every vulnerability caused by bad coding. Buffer overflows might be one of the more common coding errors, but certainly not the main cause of vulnerabilities.
Bill Burr says it best
Okay, I'm obviously missing some important details not being a security expert. Clear a couple things up for me.
1. Do security researchers spend their efforts actively searching for one particular bug using one particular method, or do they try a lot of different things and expect to find a lot of different bugs of varying levels of importance?
2. Do companies looking at their own code for bugs only concern themselves with bugs that would be worth selling on the black market, or is every bug a concern for them?
3. Bit of an opinion question, how much would you consider spending to find a bug to sell for $100k considering the potential failure of the endeavor?
4. Do you think bug bounties are the primary motivation for white hats to research bugs, and if not what effect do they have?
I don't think Mr. Haselton is qualified to answer these.
1: A little of both. I can only speak for myself, but I tend to look at a particular piece of hardware or software, and poke it until I find something interesting. Now interesting doesn't have to be a vulnerability, but it engages the brain. Could there be an exploit in here? And if not, could there be an exploit in other products that use a fairly similar design for something?
I may start looking at product A, and find X interesting, but end up finding a defect Y in product B.
2: Both. You sell not only a product but a perception that you care about your customers. Besides, most companies have people in decision-making roles who wouldn't be able to make an educated judgment on what type of bug it was, and underlings whose opinion is tainted because they have a real need to cover their own ass. And the companies certainly won't take the word of a hacker as to what the impact is, so they'll usually err on the side of caution, i.e. treat it seriously.
Note that treating it seriously might mean it takes quite a long time to fix, because taking the code seriously also means extensive testing to ensure fixes don't break anything else. A company with a very fast turnaround on security fixes is one that I wouldn't trust much - it's a prime candidate for looking for more problems.
3: You start with a premise that the hunt is to get a reward. I believe that's almost always a false premise.
4: No, I think the primary motivation is curiosity. Unless that's your primary driver, you will likely not be good at it.
A bounty might make a hacker go to the company after they've discovered the bug, instead of just sitting on it.
Which I think is what mostly happens. You know about a security flaw, but don't want to go to the company given the high risk of being sued in best shoot-the-messenger style. And you don't want to turn blackhat either - neither for criminals nor for governments. But I repeat myself. And if you're not a kid looking for notoriety, chances are you won't tell anyone.
I am quite convinced there are thousands of unreported vulnerabilities. Bounties might help with that.
IIUC, his lawyers requested that certain materials not be produced, and in doing so quoted a section of the state law which exempted a particular category of material from being required to be produced. If you don't like the phrasing, talk to the people who wrote the law. His lawyers were just doing their job, and making it easy for the judge.
I don't think they count as science... until they make predictions that match the later observed results. Then they do.
Unfortunately, as you pointed out, actually recreating the simulation can be absurdly difficult. And if it's not reproducible, then it's not science.
That said, when I worked at a transportation study commission, we used models all the time. We never deceived ourselves that they were correct, but they were a lot better than just guessing. Policies were built based around their 20-year projections. Often we'd have several very different 20-year projections based on different assumptions about what would be done in between. (Would this transit project be successful? Would that bridge be built? What effect would building the other highway have on journey-to-work times?) The results were never accurate. They were subject to political manipulation...but so was what projects would be built. It was a lot better than just guessing, but it sure was a lot short of science.
I think of this frequently when I read about the models, and the problems that people have with accepting their projections. Usually the problems aren't based in plausibility, but rather in what beliefs make them comfortable. And in those cases I tend to believe the models. But I sure don't think of them as "sound science".
OTOH: Do you trust the "Four Color Theorem"? It's a mathematical proof that any map can be colored with four colors such that no two patches sharing a border have the same color (patches that touch only at a single point don't count as adjacent). The proof is so complex that no human can follow it. Do you trust it? Would you trust it if a lot of money was riding on the result?
Even math is less than certain. A complex proof is only as trustworthy as the product of the trustworthiness of every step in it, and both people and computers make mistakes. There are plenty of optical illusions proving that people will frequently and dependably make the same mistake. So you can't fully trust math. But just try to find something more trustworthy. You need to learn to live with less than certainty, because certainty is always an illusion.