On The Costs of Full Security Disclosure 269
sasha328 writes "I found a reference on the LWN.NET site to this email, which was sent to the SecurityFocus mailing list. It asks a very valid question about how much you can disclose before malicious viruses become possible."
Don't release Security Updates! (Score:3, Insightful)
So, since releasing a security patch is equivalent to giving the blackhats full disclosure, no software should ever be patched again. Instead it should be understood that anytime anyone finds a security hole, they need to be quiet forever.
Security by obscurity is never a good thing (Score:2, Redundant)
Just because the exploit only hits MS systems doesn't mean that ONLY MS and "blackhats" should know the details. The more people that know the details of HOW these exploits are possible, the better - as these people will not only put more pressure on MS to actually FIX the problem, but they will also be exposed to the reasons WHY the MS product was vulnerable in the first place.
Some of them might even suggest ways of improving the situation. But that's in a perfect world, and this world is far from perfect.
Just telling people "There's an exploit in IIS that allows malicious intruders to use your system(s) to infect others, install a backdoor, and potentially use your system(s) for other purposes" isn't enough. I know as a system administrator, I'd want to know which port the backdoor listens on, so I could block it at the firewall. I'd want to know how the exploit was executed, so I could potentially filter out the infection requests. I'd want to know exactly WHAT was making my system insecure, and where, so that in the absence of an official fix, I could work up my own fixes to secure my own system(s) against known intrusions.
Responsible Disclosure (Score:2)
So exactly how did this "pressure" help?
The only "pressure" was on admins who learned that they should read these security bulletins and actually apply patches.
Furthermore, I don't think you understand what Full Disclosure really means, versus Responsible Disclosure.
Nobody is saying that they won't release info telling you what piece is broken, what port the information is coming in, or some sort of tag identifying the issue. (For instance the query string clearly showing up in everybody's web logs)
They're also going to tell you that it's the index ISAPI filter, and you know you are vulnerable because you have the
What they don't need to do is give out a detailed description of how you would write Assembler to take advantage of the hole.
And lastly, what exactly do you think "Security through Obscurity" actually means?
Re:Responsible Disclosure (Score:2)
So exactly how did this "pressure" help?
In this specific instance not at all - as there was already a patch to fix the problem - but what if this had been an unpatched vulnerability? *That* is where the pressure *can* help.
I was arguing a generic point, while you're dealing with a single specific instance =)
Furthermore, I don't think you understand what Full Disclosure really means, versus Responsible Disclosure.
Full Disclosure, IMHO, is making all the details surrounding your findings available to the public (in this case, "the public" would probably only consist of sysadmins smart enough to read security bulletins, as Joe Consumer probably wouldn't know the right sites to visit, but the point is that if he did, he could read about it).
Now, this "Responsible Disclosure" *seems* to mean not telling "the public" ANYTHING about the exploit - even the fact that it exists - and only telling the manufacturer about it. Now, admittedly, in a proprietary piece of software, the only fixes *internal* to that program could possibly come from the manufacturer, but when dealing with networking issues, there are other steps that can be taken by competant sysadmins. Under the "Responsible Disclosure" policy, these sysadmins wouldn't be able to protect their systems until the manufacturer decided to release a patch. Often, by that time it's far too late.
Of course, I could be wrong. =)
Nobody is saying that they won't release info telling you what piece is broken, what port the information is coming in, or some sort of tag identifying the issue. (For instance the query string clearly showing up in everybody's web logs)
They're also going to tell you that it's the index ISAPI filter, and you know you are vulnerable because you have the
Good. That's the sort of information we need in order to take action to protect ourselves. Where the problem lies. What the earmarks of it are. What issues it raises for your network's security. How to combat it.
What they don't need to do is give out a detailed description of how you would write Assembler to take advantage of the hole.
I consider this to be the proof of the theorem, myself. In order to prove the vulnerability exists, you need to show how to exploit it -- and if an exploit exists, dissecting its workings can provide a solution. It's much like how finding an antivenom for a new species of venomous snake generally starts with a sample of the venom itself.
Know thy Enemy.
And lastly, what exactly do you think "Security through Obscurity" actually means?
Basically "If we don't publicize the details of a possible exploit, noone will exploit it" -- collary to not disclosing a found vulnerability to the public, instead keeping it a secret, told only to the manufacturer.
The simple act of keeping the workings of an exploit a secret won't stop "blackhats" from using it. People seem to forget that "they" have an information network too -- one that includes code, examples, and no restrictions on who has access to them. In order to keep the damage minimal, ours needs to be as good, if not better. Throwing up walls and saying "You can't know about this" doesn't help anyone -- sure, it may prevent a few kiddies from finding what they need to make an exploit on "whitehat" security sites -- but it won't prevent them from picking it up at easily accessible "underground" sites - and most of all, it won't stop them from using or deploying worms or viruses - they're going to do it anyway.
Keeping the details of how an exploit works a secret only hurts those who would use that information to protect themselves.
Re:Responsible Disclosure (Score:2)
That does not mean no details will be provided to the public. Certainly the vendor will release a patch and a bulletin telling the public what is affected, how to tell if they are impacted, and how to fix or patch. This conversation may very well contain exploit code. But part of the key is that the conversation won't just be limited to the bug finder and the vendor, but actually shared with a variety of peers in the security community. Peer review will result in better understanding of the issue and more intelligent press releases to the public.
It's just the really specific details that are left out of the public disclosure. But if you are a super smart mega hacker, you're going to be able to figure it out from the limited disclosure anyway. But why make it easy for the script kiddies?
It's not "security through obscurity" at all.
Re:Security by obscurity is never a good thing (Score:2)
You're taking a Linux/FreeBSD mentality and trying to impose it on Microsoft administration. It doesn't work that way.
I administer both types of machines, and the philosophy behind even the administration is totally different. The Microsoft approach does NOT require knowing the source, or having a full exploit in your hands, to be a practical administrator. It's all about ease-of-use, allowing the admin to function as an end-user of sorts.
Would I ask for port information and other bits about the exploit? Sure I would. But most MS administrators would not.
the lesser details (Score:2)
I think that if people are willing to accept the limited amount of info about the inner workings of commercial software that companies like Microsoft provide, they should be happy with nothing more than an explanation of "here is the latest update, apply it". Even ISC.org is getting to a point where they will limit the info they give out immediately on fixes for BIND because the admins couldn't get the fixes in place in time before the hackers got word and started breaking into numerous sites. IIS administration would devolve into applying the patch of the week...oh, never mind it already has.
If you don't demand more from their software, you deserve less. In all honesty, there are heaps of people who compile and install Apache and barely know what the code is really up to, making them no better than script kiddies. However, if you so choose to see what the inner workings of Apache are, it's no trade secret. The difference between the blind Microsoft installers and the blind Apache installers is that the Apache developers swarmed to fix the trouble when they got hit, where Microsoft wonders how many people noticed the bug to see if it's worth their royal while to fix it. True, the patch for this problem had been out a while, but many complained that it did not do the trick for them, and more is surely on the way.
the "Full Cost" argument is flawed (Score:3, Insightful)
It was not until exploits started being written, and even more so after there were some major epidemics, that Unix admins and programmers tightened up on security. (sendmail? ftpd? bind? sheesh!)
Did Microsoft learn anything from these previous incidents? Yes, but very little. The greater cost of cleaning up Microsoft software reflects the widespread use of it by relatively clueless lusers. But this widespread use means that "total cost" gets spread across a bunch more people. Was the disruption disproportionate to the laxity that Windows sysadmins and Microsoft programmers had shown immediately prior to this incident? No, it was a bargain. And trust me, the cost of Code Red cleanup is a bigger bargain if it means that people will wake up to the importance of security in the future.
Right now, in the wake of the dot-com collapse, an outbreak of Code Red means we can't get our email for a day or something. But when we are truly dependent on the net in the future, it is imperative that we are ready, and history shows us that we would not be without a few swift kicks in the pants like this. As I mentioned in another post, I don't think Richard Smith is sophisticated enough to be a leader on issues like this.
Is this the same Richard Smith? (Score:1)
Security through intimidation (Score:2, Interesting)
NTCrash does more than make random calls; it stores what it's doing before it tries it, and after the reboot, avoids doing that again. So after a while, and many crashes, you accumulate a log of new vulnerabilities.
There are later variations on that theme which find more subtle holes. Rather than just making random calls, it's more useful to permute valid calls slightly. That's been tried successfully.
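To make the idea concrete, here's a rough Python sketch of that "log it before you try it" approach. It's purely illustrative, not NTCrash itself; the call being fuzzed and the log file name are made-up placeholders.

# Minimal sketch of NTCrash-style crash logging: record each candidate *before*
# trying it, so a restart after a crash skips it, and the surviving log doubles
# as a list of inputs that brought the target down.
# 'call_target' and the log file name are hypothetical placeholders.
import os
import random

LOG = "tried.log"

def load_tried(path=LOG):
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        return {line.strip() for line in f}

def call_target(args):
    """Stand-in for the real system/API call being fuzzed."""
    pass  # replace with the actual call under test

def fuzz(rounds=1000):
    tried = load_tried()
    with open(LOG, "a") as log:
        for _ in range(rounds):
            # Permute a valid-looking call slightly rather than using pure noise.
            args = (random.randint(0, 2**32 - 1), random.choice([0, 1, -1, 2**31]))
            key = repr(args)
            if key in tried:
                continue
            log.write(key + "\n")
            log.flush()          # make sure the entry survives a hard crash
            tried.add(key)
            call_target(args)    # if this takes the machine down, the log remembers

if __name__ == "__main__":
    fuzz()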
The classic paper on this subject is The Tao of Windows Buffer Overflow [cultdeadcow.com], from the Cult of the Dead Cow.
Considering that all this was known five years ago, there's no excuse for Microsoft products having any buffer overflow vulnerabilities. This falls between "gross negligence" and "reckless endangerment". Where's the plaintiff's bar when you need them?
This is absurd (Score:5, Insightful)
This guy is woefully misinformed, and completely stupid.
Anyone who is subscribed to Bugtraq knows that eEye has already responded to him, and the bottom line is that Code Red is not based, even in small part, on the eEye security bulletin.
This proves that the guy is completely wrong: because Code Red wasn't based on the eEye bulletin, the "black hats" must already have known about the vulnerability.
Like scientists, security professionals rely on an existing body of work. If the only people who had access to this body were the vendors, it would slow down the white hats, making the entire situation worse, not better.
Please do not feed this troll.
Re:This is absurd (Score:2, Interesting)
Re:This is absurd (Score:2, Insightful)
Crackers and Kiddiez have more than just full disclosure in their community. Often they get entire rootkits!!!! Whitehats get advisories that are often late, vague, and incomplete.
I'd say that gives the black hats a distinct advantage. They are nimble and unencumbered by the demands of PHBs, laws, morals, and silly dress codes.
Full Disclosure is the one thing we *could* have that they already have, but it's usually under attack from the well-intentioned and misguided elitists who feel that "the unwashed masses" can't benefit from Full Disclosure. Then again, the road to hell is paved with good intentions.
Re:This is absurd (Score:2)
And don't even try to make connections between CmdrTaco and "wind". :)
What? (Score:4, Insightful)
Re:What? (Score:2)
http://www.theregister.co.uk/content/archive/19
Now of course it's a good question as to why the dll is present when Index Server isn't installed, but that's really secondary to the other issues involved.
Code Red Shouldn't Worry You (Score:3, Informative)
How does this relate to the topic, exactly? Well, full disclosure makes it easier to find, prevent, or fix the holes that might be hiding. If application foo has a hole and a few people know about it, it's easier for one of them to quietly exploit it, possibly for years. And the quiet exploits are much, much more dangerous than the ones that are quickly discovered.
I expect that companies will start lobbying Congress to make security disclosure illegal. Those will be the companies you want to avoid if you want your company's network to be secure.
Re:Code Red Shouldn't Worry You (Score:2)
...compared to total non-disclosure. However, by considering only those two possibilities you fall prey to the fallacy of the excluded middle. Partial disclosure - e.g. of risks and countermeasures but not of mechanics, or preferentially to trustworthy security professionals - can provide the same benefits vis a vis total non-disclosure without total disclosure's problem of teaching the bad guys how to exploit a vulnerability.
my opinion.... (Score:2, Insightful)
1. If it was free software I would make a patch for it, then mail the patch and a full description of the problem to the makers of the product and several other security minded websites so that people would know that they had a problem and have the patch to fix it.
2. If it was closed-source proprietary software I would give the company a 5-day head start (even if it was MS) by telling them privately first; then, if they hadn't posted a patch yet (which IMO would be irresponsible or inept of them), I would post full details of the problem (and if it was MS, maybe even code to exploit it) to force the company to move its ass and distribute a patch.
Would I keep it a secret? NO. Why not? Simply because if I did, anyone else who came across it could use it to their own advantage and hurt others. Why would I not simply inform the creator of proprietary software of the flaw and not publicly post it afterwards? Because they can't be trusted, period. I have a feeling that they would simply ignore the problem until the average person became aware of it, simply because they wouldn't want to waste the money correcting something that isn't well known. They would wait for the next BIG update to fix it, which would leave the people ignorant of the problem vulnerable. By giving them 5 days I give them a chance to do the right thing with no pressure; then I release it to twist their arm into doing the right thing. If they can't figure out how to fix it... tough shit, they should've made it open if they can't handle their own software. If they release a patch before 5 days, will I still release info and (possibly) code to exploit it? YES!!! Why? Because it is a sad fact that people will not install the patch until it is well known that they need it (take Code Red for example). By releasing an exploit, sure, I open the people who haven't patched yet up to abuse, but I also make the problem more known and force lazy sysadmins to get off their asses and get patched; in the end everyone is patched (except the stupid) and less damage has been done than would have been done otherwise.
My philosophy is "If I can find it, others can find it," and I feel that letting the few people who find it run rampant with it is irresponsible and will cause more damage than bringing attention to the problem, which may possibly let a few script kiddies do a small amount of damage, rather than letting an expert use an unknown method to safely exploit others for years.
Bottom Line? Security through Obscurity is ridiculous and does not work.
How eeye handles vulnerabilities. (Score:2)
eEye does give Microsoft advance notice before releasing details, but the minimal advance notice they give isn't sufficient for Microsoft to get moving on a fix, much less for thousands of admins to patch hundreds of thousands of servers.
But who is ultimately at fault here? eEye for releasing the information, or the black hat for writing the worm, or Microsoft for releasing buggy code in the first place?
Give them a Head Start (Score:2, Insightful)
Re:Give them a Head Start (Score:2)
Yeah, but how much time? These many arguments that "full disclosure pushes Microsoft along in releasing the fix" have no grounded basis in reality. Besides, couldn't it be possible to rile up the media hype necessary WITHOUT giving information as to how the exploit occurs? The only thing the average user needs to know is "it's a security hole, it's bad".
Re:Give them a Head Start (Score:2)
The shortest turnaround I've seen is about 12 hours for someone else to figure out a hole that didn't have all the details published initially.
These many arguments that "full disclosure pushes Microsoft along in releasing the fix" have no grounded basis in reality.
Sure they do. That is how it happened. Microsoft used to be one of the companies that would hide bugs, slipstream fixes, try to hide details, etc.. this was about 3-4 years ago. They are much better now. Guess why.
Besides, couldn't it be possible to rile up the media hype necessary WITHOUT giving information as to how the exploit occurs?
Nope. The reporter wouldn't do a story with no details. There is no story without details. And the bug has to be sexy enough, too.
The only thing the average user needs to know is "it's a security hole, it's bad".
No, they also need to know if it affects them, and what does "Bad" mean. If they don't believe that it is really bad, average users won't bother.
More info (Score:5, Informative)
Re:More info (Score:3, Interesting)
In that thread, Richard Smith asks:
By limited disclosure. Yes, Virginia, there is something between sweeping something under the carpet and laying out all the gory details for everyone (including other would-be virus/worm writers) to see. If a security-product vendor had information that would help their colleagues create barriers, signatures, etc., they could share that information with those colleagues - without having to share it with the entire world. They could release enough information publicly to allow one "skilled in the art" to create countermeasures, without providing a step-by-step recipe that even the relatively unskilled could use to create new exploits. There's no need to reveal *everything* to *everyone*.
So why don't vendors do this? Do they not have faith in their colleagues' discretion? Hmmm. Do they not have faith that their colleagues can develop countermeasures based on partial information faster than black hats can create new exploits? Hmmm again. The *real* reason why they prefer full disclosure is discussed in one of my other posts to this thread.
Smith goes on to say:
BZZZT! Wrong. The security-conscious people can get that benefit without full disclosure, with less risk to the security-naive rabble. Of course, it's in security companies' interests that security-naive people should get hurt, making them less security-naive and more likely to buy products or services from companies such as the one of which Smith is CTO. I sure am glad that I'm not in a business where making sure people get hurt is part of the business plan.
Re:More info (Score:2)
Which means someone has to keep a list of colleagues and everyone with a vulnerability has to make sure to send to everyone on that list. So either some central authority decides who gets on the list or not, or else anyone can add themselves to the list and get added with little or no verification. The first will lead to the more small-time colleagues being excluded, while the latter will be more or less identical to what already exists.
So, how does that solve the problem?
They could release enough information publicly to allow one "skilled in the art" to create countermeasures, without providing a step-by-step recipe that even the relatively unskilled could use to create new exploits.
Some of the virus/worm authors are quite skilled in the "art". And most script kiddies wouldn't know what to do even with a step-by-step recipe, they rely on others to make point and shoot kits.
It would cut down somewhat on attacks, but could also slow the response of those trying to fix the problem.
Of course, it's in security companies' interests that security-naive people should get hurt, making them less security-naive and more likely to buy products or services from companies such as the one of which Smith is CTO. I sure am glad that I'm not in a business where making sure people get hurt is part of the business plan.
Wow, straw man!
Do you really advocate dumbing down everyone because of all the clueless W2K and RedHat users who never install any security updates? Because they won't install security updates, no one gets to know about the vulnerability well enough to determine if they are affected. Which could lead to social engineering attacks where a malicious individual releases a limited-disclosure bulletin that says to take measures that will actually increase vulnerability, and no one can verify if it works or not (similar attacks have already been attempted, this isn't completely hypothetical).
And, of course, in your entire post you mention nowhere that good practice is to alert the vendor before releasing to the public whenever possible. Instead, you imply that "full disclosure" doesn't give the vendor any chance to close the security holes.
Re:More info (Score:2)
For someone who just accused me of constructing strawmen, you were pretty quick to whip out one of your own. Ask the Gnutella or FreeNet folks whether distribution of information requires a central directory. Ask the PGP folks whether trust requires a central authority. More decentralized means of distribution can (and do) work rather well for security information.
Wow, strawman #2 already. No, I do not advocate that at all. In fact, my point in one of my posts to this thread is that those ignorant W2K/RedHat users won't apply the patches anyway, even with full disclosure.
Perhaps because, just five minutes prior to the post you saw, I had congratulated someone else in another post for reiterating that very point. I don't like to repeat myself, and generally only do so when someone I'm talking to seems particularly thick.
Yes, folks, we have a straw-man hat trick! No, there was no such implication in my post; you made that up.
Re:More info (Score:2)
Can you see me quaking? No, didn't think so.
Seriously, though, if I pissed you off maybe I made you think as well, and that's a good thing even if we happen to disagree.
Yes, those particular things are. I was offering them as counterexamples to the assumption that having a list implies a centralized authority, not necessarily as examples of "how things should be" with respect to security information. Your own proposal seems headed in the right direction, except that you seem to lack confidence in the "web of trust" idea. I think it scales better than you seem to think it does.
Here's a question: how open is the black-hat community? Do they just share any information with anyone? Are they "free" in that sense? No, not very. In general, to get information you have to pay for it in the form of other information of approximately equal value - what the P2P folks are calling an "economic" model of trust. Somehow, though, everyone arguing with me seems to think this not-very-free community is doing an excellent job of disseminating information so that those who can use it have it. The white hats could do no worse than the black hats by adopting the same less-than-full-disclosure model; given their superior resources and the lack of certain other limitations, they should even be able to do better.
me preacher, you choir... (Score:2, Redundant)
seriously.. maybe a stepped grace-period would be an idea?
step 1: Bug is found, creator is notified
step 2: 2 weeks later: if the bug is fixed, go to step 3; if not, disclose the existence of a bug, but not many details yet
step 3: full disclosure
just shooting from the hip here...
//rdj
Re:me preacher, you choir... (Score:2)
I have read several emails from people who promise full disclosure shortly, but who are giving the vendor a chance to review their code because they acknowledged the problem.
Re:me preacher, you choir... (Score:2)
true. However, except for the colour of their hats I think there is no significant difference between white and black hat hackers. If a white hat found a hole, there's a decent chance a black hat found it too, or will find it, even without disclosure. In that case partial disclosure may alert people that there may be a problem, and some extra caution/logchecking/tcpdumping might be in order...
//rdj
This is just a warm up, boys and girls... (Score:2)
If the system is known to have a problem with buffer overflows, why not test it yourself before someone else exploits the hole? Why not test ALL of the software this way?
This, "the most expensive computer virus in the history of the Internet" is a mere wake up call. Someone, somewhere, is going to learn from this, and other sources, and do something nastier and far more damaging. It will be more subtle, harder to detect, and will slowly take over all versions of windows, or it might be a blinding flash, splitting up the work to take over everything, hooking in multiple places, distributing its attack methods to make it harder to get a list of ALL of it's methods.
Things are still very insecure, we're all going to get hacked, it's just a question of when, how we respond, and what we learn, in the end.
I hope everyone has a nice, complete, MD5 hash/Binary compare checked backup of their files.
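For what it's worth, here's a minimal Python sketch of that kind of integrity record - hash a tree now, keep the output somewhere safe, diff it after an incident. The paths and the choice of MD5 here are just illustrative, not a recommendation of any particular tool.

# Minimal sketch: build an MD5 manifest of a directory tree so a later run can
# be compared against it after a suspected compromise. In practice the manifest
# should be stored offline or read-only.
import hashlib
import os
import sys

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def manifest(root):
    out = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                out[full] = md5_of(full)
            except OSError:
                pass  # unreadable file; skip it
    return out

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, digest in sorted(manifest(root).items()):
        print(digest + "  " + path)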
--Mike--
Re:This is just a warm up, boys and girls... (Score:2)
Re:This is just a warm up, boys and girls... (Score:2)
Well, Intel Pentium processors do support tagging memory as either code or data, with many permutations designed to allow for a properly secure OS to be built on top of it. So, I can't blame Intel on this one.
The OS should know better than to allow code to be self modifying, and it should abort anything that attempts to do so.
--Mike--
We must have full disclosure! (Score:2)
When the vulnerability applies to Windows:
Well Put, But. (Score:2, Informative)
Maybe eEye tried to tell them but they didn't listen?
Re:Well Put, But. (Score:2)
They also accept general concerns/suggestions/complaints through email (XP, for example, has an email address for concerns and suggestions).
Security@microsoft.com (Score:2)
Oh, and it was also a Netscape problem, and they ignored me as well.
Re:Security@microsoft.com (Score:2)
Re:Well Put, But. (Score:2)
Unfortunately, this method of drumming up business is standard practice in the security industry. It's in their own financial interest to keep security problems in people's minds. At the very least, this means that they will hype any threat to the highest possible level, just like the Y2K consultants did. Too often, actually contributing to the creation of new exploits is also part of the business model. Don't believe me? How do you think these folks know so much about how exploits work? Do you suppose that just maybe it's because they either spent time working the other side of the street, or that they're in contact with people who do? Might there be just the slimmest possibility that they still engage in such activity themselves, when they have both the skills and the (financial) motive to do so? OF COURSE THEY DO! There are probably a few true white hats out there, but the majority - the vast majority of those who make money off it - are dark grey at best.
Don't blame Eeye (Score:2)
Full disclosure is definitely a good thing. Not only does it pressure software vendors to release a patch, it also helps other programmers to understand good programming practices.
And *this* helps every application along the security road.
Today, more and more software is designed with security in mind. 10 years ago, nobody was careful about this.
How could programmers know how to code secure software without reviewing full disclosures for other software?
Yes, there are "secure programming" mailing lists, but they'd be abstract and far from reality without any knowledge of real cases.
My problem with this. (Score:4, Insightful)
My only problem with this is that Microsoft lacks the motivation to fix the hole if no one else knows what it is.
Sure, people can point out that "some hole" exists. But unless the details are made public, Microsoft (or whoever) isn't motivated to fix it, and no one can check up after them to see if it has been fixed. We have to take their word for it. That would make me very nervous.
I wouldn't be against a "grace period" where the vendor is told of the hole so that a patch can be ready when it is announced, but it would need a time limit on it to keep it from being delayed forever.
--Ty
paranoia, dcma... (Score:1)
Microsoft has enough lawyers that seem to have plenty of free time on their hands, that my bet would be that they'd try to shut you down, and prevent you from making the promised disclosure.
These tactics scare the bejesus outta me, as far as their implications for security information distribution between professionals.
The crackers and virus folks have, or will have, the information sooner than the admins. Anything that further delays getting me information about potential threats to my network, I regard as irresponsible.
Re:paranoia, dcma... (Score:2)
Here's the real question; do you, as an administrator, want to be able to stop break-ins, or do you want to just hide in the herd, take your chances and hope that you aren't victimized? Your answer will tell whether you would rather have an excuse or a solution. Having solutions depends on disclosure; having excuses depends on lack of information.
Re:My problem with this. (Score:2)
Do you have proof for this claim? Or are you just talking out of your ass?
That "grace period" is how it's done today. Microsoft released their security bulletin on the same day as eEye released their disclosure information. Because they had been working together on the issue for quite some time beforehand.
Re:My problem with this. (Score:2)
I don't think I mentioned eEye's behavior at all. My response was to the e-mail referenced in the posting. I didn't mention how anyone does it now, only how I think it should be handled. I don't really have any knowledge of "how it's done today." Perhaps you can let us know what you are basing that statement on so the rest of us will know as well.
--Ty
Re:My problem with this. (Score:4, Insightful)
I don't generally defend MS, but what you said is simply not true. If you compare MS to other companies, I think they have a decent response time and take ALL issues quite seriously. Yes, their product sucks, but as far as I know (and I read Bugtraq religiously), they are usually not too far behind.
The first one who replies to my post by mentioning Code Red doesn't know what he is talking about. MS released a patch for Code Red weeks before the worm spread like wildfire. Get your story straight.
Every now and then there is this discussion on Bugtraq about full disclosure. It started last week with someone mentioning the cost of the worm and its variants, how eEye could have done better, and so on.
Let me make myself clear: Full disclosure is mandatory! Without it, we are all screwed.
Please flame accordingly.
Re:My problem with this. (Score:2, Informative)
Re:My problem with this. (Score:2, Informative)
Acknowledgments
Microsoft thanks the following people for working with us to protect customers:
John Waters of Deloitte and Touche for reporting the MIME type denial of service vulnerability.
The NSFocus Security Team (http://www.nsfocus.com) for reporting the SSI privilege elevation vulnerability.
Oded Horovitz of Entercept(TM) Security Technologies (http://www.entercept.com) for reporting the system file listing privilege elevation vulnerability.
Re:My problem with this. (Score:2, Insightful)
Full disclosure is a good thing - in theory. I'm all for releasing the details of an exploit, but in a partial, then full manner. I'm not sure if this is what happens in all cases, but many white-hats operate in the following manner:
Unfortunately the one thing that this system cannot do is convince lazy or inept sysadmins/users to patch their systems.
I can't comprehend the mentality that afflicts these people.
Watch for the thousands of phone calls that the bank would get....
Compare with:
Barely half the people out there bother.
Yes, it might be possible to say that it's the vendor's fault that the software isn't secure - after all, all software should be perfect (yep, I know Microsoft are really taking the piss on this one - closed source and poor security - and you have to pay for the honour of being a security hazard). It's also possible to say that it's the cracker's or virus/worm writer's fault for attacking systems. It's especially easy to say that it's the sysadmin (if they deserve the title) who can't patch a system up.
However, it's more of a combination of all three, and probably more reasons than I can think of. Everyone needs to pull their fingers out of their behind, and do their own part.
The problems in the system need to be ironed out before it hits the shelves; good coding, code audits and sensible defaults, to name but a few things the vendor needs to do - not just for security, but for stability and maintainability. The admins/users need to learn their own systems at least enough not to screw everyone over if something does go wrong, and to prevent things going wrong in the first place. The virus writers and crackers?
Hell, I can't think of everything.
Re:My problem with this. (Score:4, Informative)
It depends on the OS, the severity, the size of the fix, and how easy it is to block in another way.
For an open source OS, with a simple fix, where you can look at it and be reasonably sure the patch is secure, go for it if the bug is serious.
For a closed-source OS, or a really complex patch, don't apply it until you've seen reports from people who do (give it a month or two) unless it's a huge bug and you can't block it with another method.
For example, consider port 139 overflows. Don't just patch Windows; firewall port 139 from the outside world.
Another example, Code Red... Use a filtering proxy/firewall to dump any port-80 traffic that requests "default.ida"
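To illustrate the kind of interim filter that buys you, here's a tiny Python sketch of a proxy-side check that drops anything asking for the .ida extension. The marker strings and the standalone demo are made up for illustration; they aren't the exact signature any particular proxy or IDS product uses.

# Minimal sketch: a crude request screen of the kind a filtering proxy might
# apply -- drop anything that asks for the .ida ISAPI extension. Illustration
# of the idea only, not a hardened filter.
SUSPECT_MARKERS = (b"default.ida", b".ida?")

def should_drop(request_line):
    """Return True if the HTTP request line looks like a Code Red probe."""
    return any(marker in request_line.lower() for marker in SUSPECT_MARKERS)

if __name__ == "__main__":
    probe = b"GET /default.ida?NNNNNNNN%u9090%u6858 HTTP/1.0"
    normal = b"GET /index.html HTTP/1.0"
    print(should_drop(probe))   # True  -> reject at the proxy
    print(should_drop(normal))  # False -> pass through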
Keep in mind that patches aren't tested very well, simply because of the urgency of releasing them. I wouldn't trust an alpha-kernel on my servers, why would I try a webserver with an alpha patch?
This is especially important if you're working with a Microsoft system. They've got a lot of history of releasing buggy service packs that can't be properly rolled back, etc.
This is why full disclosure is *essential*. Competent admins can implement their own fixes while they wait for something official (and tested) to be developed.
Imagine if Code Red were described only as a buffer overflow... It wouldn't be possible to protect yourself from it.
Richard Smith, author of that letter (Score:2)
So, anyway, Richard Smith, this supposed privacy guru, it turns out happily uses this very same toll system! Despite its obvious privacy problems, he can't be bothered to wait a few seconds a day to pay cash. Not only that, but he shows up in public forums letting us all know how he feels.
Now, on a security list (he is CTO after all, woo woo!) he is now praising Microsoft's security policies...?
Ya know, Stallman can be incredibly annoying. But, when it comes to a public figure like this, his "purity" is somewhat reassuring. I think Richard Smith is probably a nice and smart guy, and he's entitled to his opinion... but CTO of the Privacy Foundation is also how he's "entitled," and if you ask me, he loses credibility all the time. So what? Well, he diminishes the causes of privacy and security as he sinks.
Full disclosure is the solution not the problem... (Score:5, Insightful)
Vendors all around view a vulnerability that has been publicly exposed as a much higher priority than those that have not been exposed. Over and over again, history has shown that a vendor will try to cover up a vulnerability if it is not exposed, to avoid bad publicity. (No, this is not specific to Microsoft; all vendors hate bad publicity.) If an exploit is publicly available for a particular vulnerability, it also changes the way the vendor advertises the patch, thus increasing the number of people who know about it and install it.
Full disclosure provides many useful functions, including giving admins the ability to test for vulnerabilities in their own systems. It gives them the ability to verify that the system has been properly secured after a work-around has been implemented.
Partial disclosure, which is often suggested, is no different than full disclosure, except that it may give admins a false sense of security. With partial disclosure, the existence of a bug is disclosed to the public - but the details are not. Sadly enough, once the cat is out of the bag, it's only a day or two before someone else can figure out the exploit. Once the vendor releases a patch, it is trivial to do binary diffs on the provided updates and figure out the details of creating the exploit. In fact, tools that help to automate this already exist today.
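As a rough illustration of how little "partial" buys you once a patch ships, here's a naive byte-level diff in Python - the kind of first pass people use to see what a vendor actually changed. Real patch-diffing tools work at the disassembly level, and the file names below are hypothetical placeholders.

# Minimal sketch: naive byte-level diff between an unpatched and a patched
# binary, yielding the offsets and bytes that changed. File names are
# hypothetical placeholders.
def changed_regions(old_path, new_path):
    with open(old_path, "rb") as f:
        old = f.read()
    with open(new_path, "rb") as f:
        new = f.read()
    i, n = 0, min(len(old), len(new))
    while i < n:
        if old[i] != new[i]:
            start = i
            while i < n and old[i] != new[i]:
                i += 1
            yield start, old[start:i], new[start:i]
        else:
            i += 1

if __name__ == "__main__":
    for offset, before, after in changed_regions("idq_old.dll", "idq_patched.dll"):
        print("offset %#x: %s -> %s" % (offset, before.hex(), after.hex()))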
The sad thing about Code Red is this: patches have been available for quite a while now. Yet systems are still getting hit. The widespread effect of Code Red is the ONLY thing that will get the admins who never patch their systems to potentially pay attention to what's going on.
Full Disclosure is not the problem. If one person has found the vulnerabilities, there are generally more who have found them and are actively exploiting it already. To think otherwise is to seriously underestimate the cracker community.
--
Twivel
Sometimes you need to bring out the sledghammer... (Score:2, Insightful)
Re:Sometimes you need to bring out the sledghammer (Score:3, Insightful)
Unrepentant bullshit. Microsoft is very good at getting fixes out. Some bug-hunters expect a 2 hour turnaround time on their reports before "forcing MS to fix this by going public". Eeye even says that MS was quick in putting out a patch to fix the hole. The vast majority of bug hunters that actually take the time to work WITH Microsoft say that MS is quick in getting patches developed and in the hands of administrators, where they aren't applied (but that's a different story). Where's the sledgehammer? Can you support your claim with any evidence of any kind, or is this merely Yet Another Case of Uninformed Microsoft Bashing?
Re:Sometimes you need to bring out the sledghammer (Score:3, Insightful)
Re:Sometimes you need to bring out the sledghammer (Score:2)
I've always been firmly against virus-watchers who release full exploits to the general public. It simply isn't necessary. The same result (warning Microsoft) could have been achieved without causing such a hyped panic.
It's akin to not only delivering a news story on a serial thief who's robbed many homes, but giving the general public (and, in the process, other criminals) full and agonizing details of how he broke in. No one can seriously deny that the only people who *need* that information, for the safety of others, are the police.
Re:Sometimes you need to bring out the sledghammer (Score:3, Insightful)
If the only action taken is to warn Microsoft, someone will discover the problem elsewhere, eventually. In the meantime, Microsoft is unlikely to take the complaint seriously - after all, the damage is only "theoretical", right? Now, it's a fact that there are a lot of inattentive, lazy sysadmins out there, many of whom are running IIS, and that's why they haven't all applied the patch yet - but at least with this in the news, it's harder for them to avoid it. How many would bother to apply the patch if there weren't any obvious benefit to doing so? Many might choose not to disturb a working installation.
Personally, I think that the only software that can ever hope to be secure in the real world is built like a tank. Use a language or library that makes it impossible to have buffer overflows; assign permissions to everything and never give out more than you need to; etc. But in an environment where exploits are only theoretical and only announced to the entity responsible for fixing them, you have to admit that companies like Microsoft will be very slow to fix them.
Lazy Sysadmins, eh? (Score:1)
Re:Lazy Sysadmins, eh? (Score:1)
To be fair, do you blame the Open Source community for writing buggy software?
Re:Sometimes you need to bring out the sledghammer (Score:2)
I've heard this argument an awful lot, and no one in the open source community (who seems to want to use this argument) has ever been able to bring any factual instances to light. Has there ever really been a Microsoft vulnerability reported to MS where the company replied "That damage is only theoretical. We don't feel obliged to fix it." I mean, a real world story.
And keep in mind, MS probably receives dozens of fake security exploits a day by open source/hacking zealots (and I include myself wholeheartedly in that group). You can only expend so much money on determining what exploits are "real".
Re:Sometimes you need to bring out the sledghammer (Score:1)
I admit I don't personally have such a story. However, you have to admit that this result is unlikely - why would Microsoft announce that they're not going to fix the problem? Much more likely is just silently ignoring it.
Doesn't this go against the other point you were making, though? In a way, the writing of exploits prevents Microsoft's having to spend so much money on determining which ones are "real" - if there's an exploit publicly available, it's definitely real.
Re:Sometimes you need to bring out the sledghammer (Score:2)
If you read BugTraq, it seems like half the vulnerability postings say "I notified $VENDOR four weeks ago, and they failed to respond/said it's a feature/said it's not worth fixing."
One vendor revved their firmware three times while ignoring a huge vulnerability that had been reported to them. Finally the researcher posted it to BugTraq. And this is not exceptional - I remember it only because I read it recently.
If you look at Microsoft's vulnerability announcements, you can see the evolution. They used to put a little PR spin on the vulnerability, claiming it would take 'special software' to exploit it, or otherwise implying that an exploit was unlikely. They seem to be reducing these attempts at damage control, since they realize that everyone realizes that if it can be done, it can be automated.
I mention these little PR driblets because they demonstrate that the advisories are issued under duress - Microsoft feels that it would be better not to issue an advisory. And they acknowledge the duress honestly - part of adapting to the full disclosure world - by giving credit to the researcher who reported the vulnerability. That's how they reward people for telling them first.
Re:Sometimes you need to bring out the sledghammer (Score:2, Insightful)
Sure - Melissa et al. For years people have been bitching about the security of executing email attachments in Microsoft Outlook, and the amount of control over the machine that executed attachments have (no sandbox). The initial response from Microsoft was "that's a feature, not a bug", and even though there have been some steps in the right direction, the problem still exists and still breaks out into an epidemic from time-to-time.
I suppose that you could argue that clicking on attachments is more social engineering, but on the other hand Microsoft products are specifically sold as "easy to use". If it's so easy to use, why do users still fall prey to simplistic social attacks?
I'll admit that full disclosure hasn't even helped in this case, though - Microsoft and the whole world has known about the problem for a long time and it's still with us.
My heart bleeds for them - "we have so many security exploits that we can't even take the time to figure out which of them are real". Hint: do it right the first time, and then hire yourself some crackers to do this poking and prodding for holes that until now has been reserved mostly for the black hat community. I don't have much pity for Microsoft's inability to fix their security problems - they dug themselves into that hole by valuing time-to-market, embracing-and-extending, and pseudo-usability more highly than security and actual functionality.
Re:Sometimes you need to bring out the sledghammer (Score:2)
You've completely missed the point. The problem is not that "there are too many security exploits" but that many are simply tomfoolery brought about by the community. Do you have any idea how many security exploits MS receives that have no basis in reality, but are only there to get their goat? And it's not like the security people should have to pay for the mistakes of the company (monopolistic practices, etc.)
Re:Sometimes you need to bring out the sledghammer (Score:3, Interesting)
Re:Sometimes you need to bring out the sledghammer (Score:2)
These agencies report thousands of bugs. Before Code Red struck, how were they to know that this particular one would turn out to be big?
Re:Sometimes you need to bring out the sledghammer (Score:2)
They did not release any code.
Re:Sometimes you need to bring out the sledghammer (Score:3, Insightful)
If all I had to go on was "something's wrong with the door," what the hell am I supposed to do about it?
Sure, other criminals might know about it now. But I've already fixed the problem, or I have a contingency shotgun in place, etc.
Re:Sometimes you need to bring out the sledghammer (Score:2)
I'm your sledgehammer! (Score:2)
Hint: June 18, 2001
Now go check to see what day Microsoft released their disclosure?
Hint: June 18, 2001
What was this sledgehammer?
NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO (Score:2)
Too bad we can't mod the stories down. (Score:2, Informative)
Slashdot is trolling us--successfully, it looks like. Too bad we can't mod the story itself down!
This debate is hashed through time and time again, and solved--time and time again. Anyone who argues against full disclosure has never been a system admin or been deep enough into someone else's exploit-ridden code to feel the pain.
Exploit disclosures are like the work-saving package collections of *BSD. Someone else has done the work for you. For those who are in the know, this means we don't have to fire up our own copy of IDA or grep spaghetti source code to figure out what the heck is going on.
Why should I worry about the lower forms of system admin life and hold their ignorant hands when my more important systems and the systems of my clients are at stake? If you can't stand the heat, bloody well get out of the kitchen and leave the work for someone who knows what they're doing.
Past history can tell (Score:5, Informative)
That is based on the assumption that Microsoft would take immediate action for the benefit of the society.
Ok, take a look at this [theregister.co.uk]:
The update, which amounts to a point release for both IIS 4 and IIS 5, also addresses five previously undisclosed vulnerabilities with IIS, which could result in either denial of service or privilege elevation.
Five undisclosed vulnerabilities! Smart crackers might have enjoyed exploiting them for months!
When would Microsoft have disclosed fixes for those vulnerabilities? The next Service Pack? Does it mean they wouldn't have been fixed if not for this Code Red incident?
How could you rely on a group of people whose actions are unaccountable?
Ok, you can mod me troll now if you don't like it when I speak ill of Microsoft.
Re:Past history can tell (Score:2)
Is it not also possible that they discovered these vulnerabilities in their own testing, and decided to fix them before an exploit appeared? Would you fault them for that? I know how the herd likes to jump to the most negative possible conclusions regarding everything MS does, but simple honesty requires that we at least mention and consider alternative explanations.
Re:Past history can tell (Score:2)
It's possible, but we all see that they prefer to hide them till the next service pack release. If not for this Code Red incident we wouldn't know. They have a 'hot-fix' (in-between service packs) system, but we don't see any hot-fix for these vulnerabilities.
That gives crackers plenty of time to rob the general public blind. That's what we are worrying about. I've heard somewhere that Russian mafias are constantly cracking US companies' webservers; they probably wouldn't publicize the vulnerability like eEye did if they discovered it in the first place.
We do not jump to the most negative conclusions for nothing.
When I got the release off of Bugtraq... (Score:2)
Boss: Okay. You do that.
Me: Clickity Clickity Click...
~ ONE MONTH LATER ~
Boss: Oh god! The world's going to end! Our profits are going to dry up. Code Red is trying to rape my daughters...
Me: Relax, buddy! We've got it taken care of. Remember we disabled indexing on our IIS box? I also installed the patch from M$ just as soon as it came out.
Boss: Oh... Okay.
~ ONE WEEK LATER ~
Boss: Oh god! The world's going to end! Our profits are going to dry up. Code Red II tried to assault me behind the bar last night.
Me: Relax, buddy! Remember, we disabled indexing on our IIS box and installed the patch. Remember?
Boss: We did that?
The lesson here is that eEye was perfectly responsible in releasing the information, and did so with the knowledge of Microsoft, so that M$ could release a patch in time to fix a hole that could have halted governments. As bad as it was, it could have been worse.
on Responsible Disclosure (Score:2, Insightful)
This is the process that is used in the Virus community today, and it's been working well.
One of the points Russ made was that eEye could have discussed this issue on the mailing list before issuing a press release. In addition to feedback having been given by Microsoft, there would have been peer review. Additional information would have resulted, clarifications on impact and so forth.
Then a final press release could have gone out, giving eEye full credit for finding the issue, but providing a wealth of useful information to the end-user/admin type folks.
Russ also raised a point about eEye's motivation. Why do they insist on not only full disclosure, but also releasing exploit code? Again he raises a good point, and I think it's quite clear.
eEye is in business to sell a product which supposedly protects you against these types of attacks before they happen. So it is in their best interest that an attack is quickly released and spreads rapidly, thus generating mass hysteria. Only with such hysteria can they generate traffic to their site and obtain orders for their product.
Re:on Responsible Disclosure (Score:2)
This question sort of assumes that if eEye doesn't release the exploit code, the code won't be written. On the contrary - exploit code is often referred to as proof-of-concept code and is intended to show that the vulnerability indeed exists. Otherwise, vendors have a habit of stonewalling: "that vulnerability is entirely theoretical."
eEye's response (Score:2, Informative)
http://lwn.net/2001/0816/a/eeye.php3
clearly points out that they provided *NOTHING* to the virus writer, and in fact the virus writer used another virus as a template. The criticism in this case is quite unfounded.
Bruce Schneier on full disclosure (Score:2, Informative)
What's the world coming to when everyone's favorite security guru starts blaming the messenger, too?
Re:Bruce Schneier on full disclosure (Score:1)
You can argue that eEye did the right thing by publicizing this vulnerability, but I personally am getting a little tired of them adding weapons to hackers' arsenals.
I could equally argue that security is a choice; would you use or endorse a product that has had a bad rep with regards to security and the necessary actions required to correct security flaws?
Considering that the vulnerability that Code Red exploits could have been fixed months before, I as a system administrator myself have to ask the following...
If you were to survey those affected by Code Red, you would probably get a lot of "NO" responses.
These are probably the same individuals or companies that don't take security seriously. Security is a lot more than just applying patches whenever vulnerabilities are found. It should influence how your network is designed, how the services are partitioned on that network, and what information is allowed to travel in and out of your network, etc.
<plug for egress filtering>
If people actually took the time to look at their network architectures, they probably would be hard pressed to find a valid reason as to why their web servers would need to initiate a connection to another web server outside of their network.
</plug for egress filtering>
I therefore submit that full disclosure of the IIS vulnerability had no real influence as to the spread of Code Red. This problem could have been avoided by applying a fix, but some parties made a choice to do nothing. The spread of Code Red is a realized consequence of that choice.
Old News (Score:3, Informative)
Second, eEye already pointed out that the author of the email actually knew nothing about the issue, as the exploit had been used months before they posted their description.
Here's part of a response to that email, from an employee of eEye:
Let's get the facts straight. CodeRed is based off of another worm that was written for a .htr ISAPI buffer overflow. CodeRed is an almost identical copy of the .htr worm. A worm which was released back in April. A worm which exploited an UNPUBLISHED vulnerability within IIS which was silently patched by Microsoft without notification to anyone. Therefore IDS vendors never had a signature and the .htr worm went unnoticed. Too bad a security company had not found the flaw; then there would have been details, signatures made, and IDS systems would have detected the first instance of CodeRed back in April.
Okay, so the guy who wrote the letter blaming eEye was a fool, who spouted off without possession of any facts in the case. But, it looks like he has a lot of company on slashdot. Maybe, they ought to rename the site 'slashdork.'
mp
The problem is... (Score:2)
This is why "security through obscurity" is a paradox inside an absurdity. From the moment a piece of software is available, ALL information contained therein is ALSO available. And that includes information on buffer overflows, other exploits, etc.
Since releasing the information doesn't actually add any information to what is already out there (it merely changes it into a more "readable" form), it makes no sense to question the release of such information.
It is perfectly true that, on the release of information, those who are less willing to examine the code can also find the exploits, but if they haven't the initiative to do the work themselves, they really shouldn't be considered the problem.
The problem remains those who DO do the work themselves, who DO find, identify, and utilize exploits that are NEVER published. Those people are the real danger.
A skript-kiddie downloading a million credit card numbers is an irritant. They just cause a million people to have to put stops out, and get fresh cards. No big deal. Cards get lost or compromised, then stopped, all the time. The problem will be spotted quickly, and the most severe damage will be to a few people's egos.
Place a black-hat in the same situation. The exploit is unknown, unpublished and goes unfixed. Minor "glitches" go unreported and unnoticed. The few complaints are passed off as stupid customers or bank errors. By the time the enormity of what has happened is detected - assuming it ever is - the perp might well be half-way to -owning- a South American country. Chances are, though, that it'll never be detected.
From what I have heard, US banks lose something like $150,000,000,000 a year to computer crackers. That's a LOT. Let's be honest, here. Given that kind of write-off, can they really lose any more money by being open?
Regular companies and corporations are so big and so complex, that your average black-hat could walk in with an elephant and walk out with the payroll mainframe, without anyone noticing for several weeks.
Attempting to keep intruders out by not telling them things is like trying to keep out a burglar by not having a front gate. What good is that going to do? If the burglar can see a door or a window, then the lack of a gate obscures exactly nothing.
You can achieve -some- security through having guards, monitors, etc. The military works this way, and it's generally pretty effective. IMHO, though, working on the holes is even better. And you can't do that, if you're busy pretending that there aren't any there to work on.
UCITA's effect on bug-reporting (Score:3, Interesting)
fluffy bunny (Score:2)
Remember SourceForge getting cracked a couple of months back? Apparently, the guy who did it spoke to securityfocus.com [securityfocus.com] about the attack. In this article [securityfocus.com] he says:
"i hack, dot slash or whatever you might want to call it, i do not write my own exploits, i use other people's stuff, and no im not anti-open source, i am however anti-sec. i support the anti-disclosure movement among the computer and network security communities,"
Furthermore, the cracker said he works as a contractor in the field of security, and perhaps it is the ease of cracking so many sites using nothing but published exploits that makes him support the "anti-disclosure movement."
Although I am personally not against full or partial disclosure per se, I do think the anti-disclosure movement has a valid point. There does seem to have been a huge increase in cracking activity recently, and although the script-kiddie phenomenon is at least partially due to the rise of the internet and the home computer (i.e. more kids with cheap PCs in their bedrooms), I do think that the current fashion for open disclosure means that security holes spread into the black-hat community faster than most sysadmins apply patches.
On the other hand, if we go back to anti-disclosure, it will be like the pre-'90s days: the white hats will know one set of holes, and the black hats will know a different (far more limited) set of security holes. This scenario obviously poses a whole set of different problems.
Re:fluffy bunny (Score:1)
Car analogy (or not) (Score:2)
Wouldn't cars be safer if there were no public reporting of defects, and safety information went only to the car makers so they could fix the problem?
I realize this analogy has a flaw: car defects are passive, whereas disclosure hands attackers key information.
So how about information about the security of your bank. For the moment, let's leave IT out of it. As a consumer, I'd like to know if a given bank is sloppy about their security, because I can vote with my money, and where I put it. A key part of a free market is an informed consumer, and withholding information from the consumer is tampering with the marketplace. (gag terms in software licenses, anyone?)
I can certainly agree with giving the software developer advance notice, but the key word is 'advance', not exclusive or secret.
To take it back to cars, isn't a gag order part of just about any safety-related lawsuit? Is critical information being withheld that would help me decide on a safer car to put my family in?
Sex Education (Score:5, Insightful)
Let's say that eEye just released an advisory that said there was an overflow in the processing of default.ida. No more information than that. It would not take a skilled hacker very long to find the buffer overflow, and then exploit it. It doesn't matter if eEye says little or everything. The skilled hackers are going to find the hole themselves.
I believe the current system works as well as it can and should. When somebody finds a bug, they usually report it to the company, let them play with it for a week or so to work up a patch or workaround, and then it is posted publicly on Bugtraq or some such list. Then somebody might write a proof-of-concept or an exploit, but will leave one part intentionally broken here or there so that only people who can read the code and understand what is going on can fix it and use it. Bugtraq keeps companies accountable for their products' security. Without it, there would definitely not be the same diligence at most companies in responding to security concerns.
If you are going to teach sex, don't just say 'save it for marriage'; teach about condoms so people aren't caught with their pants down.
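For what it's worth, the "textbook" bug class being discussed looks roughly like the sketch below. This is only an illustration, not the actual IIS .ida handler (whose source isn't public): a fixed-size stack buffer filled from attacker-controlled input with no length check, next to a bounded version. The function names are made up for the example.

/* Illustrative sketch only -- not the actual IIS .ida code.
 * It shows the generic pattern behind this class of bug: copying
 * attacker-controlled input into a fixed-size stack buffer
 * without checking its length first. */
#include <stdio.h>
#include <string.h>

/* Vulnerable pattern: strcpy() trusts the input about length. */
void handle_request_unsafe(const char *query)
{
    char buf[64];
    strcpy(buf, query);          /* overflows if query > 63 bytes */
    printf("handling: %s\n", buf);
}

/* Safer pattern: reject anything that doesn't fit. */
int handle_request_safe(const char *query)
{
    char buf[64];
    if (strlen(query) >= sizeof(buf)) {
        return -1;               /* refuse oversized input outright */
    }
    strcpy(buf, query);          /* now provably within bounds */
    printf("handling: %s\n", buf);
    return 0;
}

int main(void)
{
    handle_request_safe("default.ida?example");   /* fits: handled */
    return 0;
}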
Re:Sex Education (Score:1)
Why do people who preach "save it for marriage" assume that married people have no need for information about birth control?
Yes, this was offtopic to the main thread, but not offtopic to the post I was replying to.
Re:Sex Education (Score:2)
It seems a little ridiculous that in 2001 we are still seeing buffer overflows in newly released software. I mean, has M$ been hiding in a cave, or what? I suppose it is unreasonable for me to expect that all programmers know how to avoid what is now a textbook security risk. The real point is that the bug should never have made it outside the castle walls in Redmond.
What Mr. RMS really ought to be bitching about is M$'s failure to streamline patching of systems running their products. All that windowsupdate.microsoft.com does is automate the installation of those Qxxxxxx.exe "patch" programs. Why don't they update that site more regularly? Can't they afford to take 5-10 of their 20,000+ guys to take care of this? Anyone offended by the loss of service due to Code Red et al. might want to consider a class-action lawsuit against Microsoft for negligence.
Re:Sex Education (Score:1)
Re:Sex Education (Score:3, Insightful)
Re:If you don't embarass Microsoft they won't fix (Score:2, Insightful)
Simple example: how many people use time functions in C without realizing they are dealing with very limited functions that croak in about 35-plus years? Use one of those functions to calculate a date beyond that range and who knows what you get back. That bogus data then gets pushed into another function which does no checking, because only valid data should come back from a built-in function...
In many ways we get tripped up using code/languages which were designed under past limitations (8, 16, and 32 bit). The libraries need some cleaning/correcting, and code in general needs some serious error checking. Error checking along the lines of, "what happens when procedure X receives the impossible?"
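As a concrete, purely illustrative sketch of that kind of defensive check, the fragment below shows how a date pushed past the range of a signed 32-bit time_t can be caught before the bogus value propagates into the rest of the program. The function name schedule_renewal is hypothetical, invented just for this example.

/* Illustrative sketch only: a 32-bit signed time_t runs out in
 * January 2038; this shows one way a caller can validate a computed
 * date instead of trusting the arithmetic blindly. */
#include <stdio.h>
#include <time.h>
#include <limits.h>

/* Hypothetical caller: compute an expiry date N years in the future. */
static int schedule_renewal(time_t now, int years, time_t *out)
{
    /* Do the arithmetic in a wider type, then check whether the
     * result still fits in a 32-bit time_t on this platform. */
    long long expiry = (long long)now + (long long)years * 365LL * 24 * 3600;
    if (sizeof(time_t) == 4 && expiry > (long long)INT_MAX) {
        return -1;              /* refuse to hand back bogus data */
    }
    *out = (time_t)expiry;
    return 0;
}

int main(void)
{
    time_t now = time(NULL), expiry;
    if (schedule_renewal(now, 40, &expiry) != 0) {
        fprintf(stderr, "date out of range for 32-bit time_t\n");
        return 1;
    }
    printf("renewal due: %s", ctime(&expiry));
    return 0;
}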
Re:There's no such word as "virii" (Score:1)
Also, "affective" actually did leave some ambiguity since you used "effective" correctly in the same post and since using "virii" can evoke some emotions (as would be implied by "affective").
OMG (Score:2)
Post one: Here's a little lesson for you in a form that may be easier to remember. (Sung to the tune of Mary had a little lamb).
To use a parenthetical, include it before the ending punctuation. Don't make it a new sentence.
Post two: You see the purpose of words is to communicate an idea.
The "you see" part is a separative, and should therefore be separated from the rest of the sentence by a comma.
words have no inate meaning
Ahem.
Post three: Shaekspeare
Who? The spelling of the bard's name isn't that hard to check.
Post four: Yes, you are correct, that is what I meant.
While creating three sentences with one period may be economical, it's not correct.
Come on, folks, let's pull it together a little better next time.
Virg
I tried 'partial disclosure', it failed! (Score:2)
I found a serious design flaw and major security vulnerabilities in a vendor's systems. I attempted to notify the company, and got no response. I posted 'Partial Disclosure' to a security mailing list with just an outline of the problem and notes on where they had weak security, but I did not post the details needed to exploit it.
The company did not respond.
Three months later, another person independently found the same issues, confirmed with me that these were the same holes I had described in my vague message, then he posted 'Full Disclosure' to the same mailing list.
This time the vendor responded, and took action to notify users and fix the problem, nearly six months after I first notified them.
Re:Compromise (Score:2)
That might have been me. Or it might not. In any case, it's a belief I subscribe to, and one which I have voiced here on at least one occasion.
I think what a lot of people are missing with all the talk about "pressure" is that there are basically two types of sysadmins: those that are security-conscious and those that are not. Those that are not are unlikely to apply a patch even when an exploit is known and circulating, unless and until it actually hits them. Look at all the people who failed to apply the IIS patch even after CRv1 went around, and got hit by CRv2. No amount of pressure, on or from the OS vendor, is going to affect that. By contrast, those who are security conscious would take protective measures even without full disclosure just to be safe. If there's no patch available from the vendor, maybe someone else can come up with a stopgap. For example, eEye could have described an adequate set of protective measures for CR without providing the recipe for CRv2; anyone who thinks their reason for not taking that approach is anything but venal must be hopelessly naive. If nobody has a patch or stopgap, the only people who benefit from full disclosure are the bad guys.
Note that in no case of those above does "pressure" make a difference. Also note that in no case does full disclosure help the good guys. The people who work at security companies know each other. They can share the details among themselves discreetly if they choose to, so that all of the firewall/IDS/etc. people can keep up in the arms race. The only difference between that and full disclosure is that full disclosure also makes it more likely that new exploits will appear before the vendors all catch up.
Do you think it's just coincidence that it's good for eEye's business for new versions of CR to appear when they're prepared for it? If so, consider the text of this link from their front page:
Re:Compromise (Score:2)
If all the security experts on BugTraq retreated into the old boys' network and denied me the benefit of their knowledge, I think that new experts would spring up to fill their place. The old experts would have ceased to be important. Actually, the main benefit I get from BugTraq is a feel for security in the software I'm writing. Frequently some tiny technical detail from a BugTraq message will come back to me when we're designing a piece of code, and I'll realize we're creating a subtle hole. Anyhow, your desire to squelch the actual exploit seems to swim against the current of the internet. If the guy who finds the bug stops short of a full exploit, someone else will naturally oblige by filling the gap.
Re:Compromise (Score:2)
OK, you're talking about a situation where you *knew it didn't apply*. That's a little different than *not knowing it does apply*. Maybe you'd treat the two scenarios the same way, but many wouldn't.
Also, I'd like to point out that I'm all for providing as much information as possible about the conditions that make one vulnerable, and about countermeasures. What I oppose is description not only of conditions and countermeasures, not only of the vulnerability's basic nature ("unchecked buffer length in the xxx-checking part of the yyy-module"), but of exactly how an exploit works or could work. If I, as a user, knew that the vulnerability involved such-and-such a buffer overflow, that it only mattered if I used feature X, and that I could protect myself by doing Y, then a detailed account of the exploit itself would provide me no additional value. It would provide additional value to security professionals as a test case or data point, which is why they should get it via limited disclosure, but it's not useful to me. The only people who derive significant value from the broadcasting of information about exploit mechanics - as opposed to limited disclosure or broadcasting of risk/countermeasure information - are the exploit programmers.
I already went over that ground in another sub-thread. Check my user page if you're really interested.
I'm well aware that my attitude regarding disclosure runs counter to the herd/leech mentality.
...and if the information had been disseminated efficiently among members of the "defense" before giving it to the "offense", the exploit would be obsolete before it was created. Let's say that I find a vulnerability. I can do one of two things: share it quietly with the people building defenses first, or broadcast it to everyone at once.
Clearly it's a race. I know that, even if I try to keep the information among the good guys, a leak will nonetheless occur sooner or later. However, I can try to make sure that as many good guys as possible are already working on countermeasures before the bad guys get the information and start working on exploits. That edge, even if it's only a day or two, might make millions of dollars' worth of difference in the damage (if any) the exploit does when it appears. Are you saying that we should sacrifice those people, just so you can have the warm fuzzy feeling of having additional information that in fact provides you with no additional protection? I'm sure you don't like to think of it that way, but it seems to me that your arguments - and those of others - favoring full disclosure are based more in a personal desire to feel like an expert than in overall social/economic utility.