Comment Re:WTF? (Score 1) 188
Sorry, in many large organizations, the sysadmins are not allowed (or able) to change firewall configurations either. And sign-offs, even in emergencies like these, may take a few days.
In many large organizations, you have segregation of duties. This boils down to the sysadmins not being allowed to patch and recompile code. They are allowed to install a vendor patch though. And yes, segregation of duties is a good idea.
Indeed. There is, however, a certain type of wannabe "hacker" that needs to turn things into power-plays. These will disclose immediately and inflate their egos that way, no matter what damage it does.
I am not talking about giving the manufacturer a lot of time. But if the bug is already exploited in the wild, chances are it has been for a while, so a few more days matter little. However, quite often nothing can be done before a patch is available and then too early public disclosure does a lot more harm than good.
Apparently you have not heard about companies that collapse if they are offline for a day or so. But then, with the level of stupidity your answer displays, you would not have...
And to amplify it in the meantime. Well done.
Not really. Disabling the vulnerable feature took changing the sources (or build defines) manually and rebuilding OpenSSL, something most sysadmins cannot do, or cannot do fast.
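For context, a sketch of what that workaround looked like, assuming an unpacked OpenSSL 1.0.1 source tree and the `OPENSSL_NO_HEARTBEATS` compile-time define (the widely circulated way to compile the heartbeat extension out at build time; there was no runtime switch):

```shell
# Sketch only: rebuild OpenSSL with the TLS heartbeat extension compiled out.
# Assumes you are in an unpacked OpenSSL 1.0.1 source directory.
./config -DOPENSSL_NO_HEARTBEATS shared
make depend && make
make test
sudo make install
# Every service linked against the library must then be restarted
# (or the whole box rebooted) to pick up the new library.
```

Nothing here is a one-click fix, and it risks clobbering the distribution's packaged OpenSSL, which is why waiting for a vendor patch was the only realistic option for most shops.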
I think the main problem with the flavor of responsible disclosure some part of the security community is raging against is that this flavor allows the developers to say how long they need, and that has been abused. But giving them zero time is just malicious.
Sorry, but that really is nonsense. All that immediate disclosure can cause is panic. It does not help at all. It increases attacks in the short term, because most people are not able to do anything without a patch.
Sure, you can argue for giving the manufacturer only a very short time, like a few days (that will not make much difference to the ones already using the weakness; most weaknesses exist for quite a while before the white-hats discover them, and analysis also takes some time), and some companies have abused responsible disclosure by delaying fixes for months on end, so I am all for that. The point is that the manufacturer must not be the one to set the time they get to fix the issue. But giving them zero time is just intentionally destructive.
I don't.
That is BS. You are mixing two things to make your non-existent point: people who want to disclose and people who want to sell. The second group are always black-hats, even if some of them pretend otherwise.
All to make a quick buck. Despicable and repulsive.
Indeed. But there is a _standard_ solution. Doing it in various ways is far worse than picking the one accepted bad solution.
Mindless propaganda and, as it happens, untrue. See for example http://developers.slashdot.org...
But I guess proponents of closed source will always use any lie that is handy, just to propagate their ideology.
The only possible way is to disclose to the responsible manufacturer (OpenSSL) and nobody else first, then, after a delay given to the manufacturer to fix the issue, disclose to everybody. Nothing else works. All disclosures to others have a high risk of leaking. (The one to the manufacturer also has a risk of leaking, but that cannot be avoided.)
The other thing is that as soon as a patch is out, the problem needs to be disclosed immediately by the manufacturer to everybody (just saying "fixed critical security bug" is fine), as the black-hats watch patches and will start exploiting very soon after.
All this is well known. Why is this even being discussed? Are people so terminally stupid that they need to tell some "buddies"? Anybody giving out advance warnings to anyone besides the manufacturer does not deserve to be in the security industry, as they either do not get it at all or do not care about security in the first place.
If they had, say, placed the emergency generators on the hills right next to it, nothing bad would have happened. Or if they had spent the extra $100,000 that hydrogen valves would have cost, the buildings would not have exploded.
The problem is not that nuclear cannot be made safe. The problem is that the people doing nuclear cannot make it safe. And as these are also the people doing waste storage, this will remain a serious issue for the next, say, 1 million years or so. The combination of greed and stupidity found in nuclear planners is absolutely staggering.
Real Programs don't use shared text. Otherwise, how can they use functions for scratch space after they are finished calling them?