The Internet

On The Costs of Full Security Disclosure

sasha328 writes "I found a reference on the LWN.net site to this email, which was sent to the SecurityFocus mailing list. It asks a very valid question: how much can you disclose before malicious viruses become possible?"
  • by friscolr ( 124774 ) on Thursday August 16, 2001 @10:59AM (#2109740) Homepage
    Releasing Security updates is tantamount to full disclosure - any blackhat with a bit of knowhow and enough time will be able to reverse-engineer the bug (no DMCA regs, please, we're talking about blackhats here).

    So, since releasing a security patch is equivalent to giving the blackhats full disclosure, no software should ever be patched again. Instead it should be understood that anytime anyone finds a security hole, they need to be quiet forever.

  • I'm all in favor of full, detailed exposure of exploits - how they're done, why they're possible, and possible steps to fix them.

    Just because the exploit only hits MS systems doesn't mean that ONLY MS and "blackhats" should know the details. The more people that know the details of HOW these exploits are possible, the better - as these people will not only put more pressure on MS to actually FIX the problem, but they will also be exposed to the reasons WHY the MS product was vulnerable in the first place.

    Some of them might even suggest ways of improving the situation. But that's in a perfect world, and this world is far from perfect.

    Just telling people "There's an exploit in IIS that allows malicious intruders to use your system(s) to infect others, install a backdoor, and potentially use your system(s) for other purposes" isn't enough. I know as a system administrator, I'd want to know what port the backdoor was put into, so I could secure it at the firewall. I'd want to know how the exploit was executed, so I could potentially filter out the infection requests. I'd want to know exactly WHAT was making my system insecure, and where, so that in the absence of an official fix, I could work my own fixes, to secure my own system(s) against known intrusions.
    • Microsoft already fixed the problem when eEye released their press release.

      So exactly how did this "pressure" help?

      The only "pressure" was on admins who learned that they should read these security bulletins and actually apply patches.

      Furthermore, I don't think you understand what Full Disclosure really means, versus Responsible Disclosure.

      Nobody is saying that they won't release info telling you what piece is broken, what port the information is coming in on, or some sort of tag identifying the issue (for instance, the query string clearly showing up in everybody's web logs).

      They're also going to tell you that it's the index ISAPI filter, and you know you are vulnerable because you have the .ida and .idq mappings on your web site, etc.
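
      As a rough illustration of that kind of check, here's a sketch of probing a box from the outside for the .ida mapping. The hostname is made up and the response heuristic is approximate, so treat it as an illustration rather than a reliable scanner:

      import http.client

      def has_ida_mapping(host):
          conn = http.client.HTTPConnection(host, 80, timeout=10)
          conn.request("GET", "/nonexistent.ida")
          resp = conn.getresponse()
          body = resp.read().decode("latin-1", "replace")
          conn.close()
          # A bare 404 usually means the mapping is gone; anything else, or an
          # indexing-service style error message, suggests the ISAPI filter
          # picked the request up. Heuristic only.
          return resp.status != 404 or "IDQ" in body or "Query" in body

      print(has_ida_mapping("intranet.example.com"))  # hypothetical host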

      What they don't need to do is give out a detailed description of how you would write Assembler to take advantage of the hole.

      And lastly, what exactly do you think "Security through Obscurity" actually means?
      • Microsoft already fixed the problem when eEye released their press release.

        So exactly how did this "pressure" help?


        In this specific instance not at all - as there was already a patch to fix the problem - but what if this had been an unpatched vulnerability? *That* is where the pressure *can* help.

        I was arguing a generic point, while you're dealing with a single specific instance =)

        Furthermore, I don't think you understand what Full Disclosure really means, versus Responsible Disclosure.

        Full Disclosure, IMHO, is making all the details surrounding your findings available to the public (in this case, "the public" would probably only consist of sysadmins smart enough to read security bulletins, as Joe Consumer probably wouldn't know the right sites to visit, but the point is that if he did, he could read about it).

        Now, this "Responsible Disclosure" *seems* to mean not telling "the public" ANYTHING about the exploit - even the fact that it exists - and only telling the manufacturer about it. Now, admittedly, in a proprietary piece of software, the only fixes *internal* to that program could possibly come from the manufacturer, but when dealing with networking issues, there are other steps that can be taken by competant sysadmins. Under the "Responsible Disclosure" policy, these sysadmins wouldn't be able to protect their systems until the manufacturer decided to release a patch. Often, by that time it's far too late.

        Of course, I could be wrong. =)

        Nobody is saying that they won't release info telling you what piece is broken, what port the information is coming in, or some sort of tag identifying the issue. (For instance the query string clearly showing up in everybody's web logs)

        They're also going to tell you that it's the index ISAPI filter, and you know you are vulnerable because you have the .ida and .idq mappings on your web site, etc.


        Good. That's the sort of information we need in order to take action to protect ourselves. Where the problem lies. What the earmarks of it are. What issues it raises for your network's security. How to combat it.

        What they don't need to do is give out a detailed description of how you would write Assembler to take advantage of the hole.

        I consider this to be the proof of the theorem, myself. In order to prove the vulnerability exists, you need to show how to exploit it -- and if an exploit exists, dissecting its workings can provide a solution. Sort of like how the process of finding an antivenom for a new strain of venomous snake generally starts with a sample of the venom itself.

        Know thy Enemy. ;P

        And lastly, what exactly do you think "Security through Obscurity" actually means?

        Basically "If we don't publicize the details of a possible exploit, noone will exploit it" -- collary to not disclosing a found vulnerability to the public, instead keeping it a secret, told only to the manufacturer.

        The simple act of keeping the workings of an exploit a secret won't stop "blackhats" from using it. People seem to forget that "they" have an information network too -- one that includes code, examples, and no restrictions on who has access to them. In order to keep the damage minimal, ours needs to be as good, if not better. Throwing up walls and saying "You can't know about this" doesn't help anyone -- sure, it may prevent a few kiddies from finding what they need to make an exploit on "whitehat" security sites -- but it won't prevent them from picking it up at easily accessible "underground" sites - and most of all, it won't stop them from using or deploying worms or viruses - they're going to do it anyway.

        Keeping the details of how an exploit works a secret only hurts those who would use that information to protect themselves.
        • No. Responsible Disclosure means the full details are only shared amongst the vendors and security experts.

          That does not mean no details will be provided to the public. Certainly the vendor will release a patch and a bulletin telling the public what is affected, how to tell if they are impacted, and how to fix or patch. This conversation may very well contain exploit code. But part of the key is that the conversation won't just be limited to the bug finder and the vendor, but actually shared with a variety of peers in the security community. Peer review will result in better understanding of the issue and more intelligent press releases to the public.

          It's just the really specific details that are left out of the public disclosure. And if you are a super smart mega hacker, you're going to be able to figure it out from the limited disclosure anyway. But why make it easy for the script kiddies?

          It's not "security through obscurity" at all.

    • "I know as a system administrator, I'd want to know what port the backdoor was put into, so I could secure it at the firewall. I'd want to know how the exploit was executed, so I could potentially filter out the infection requests. I'd want to know exactly WHAT was making my system insecure, and where, so that in the absence of an official fix, I could work my own fixes, to secure my own system(s) against known intrusions."

      You're using a Linux/FreeBSD mentality and trying to imbue it onto Microsoft administration. It doesn't work that way.

      I administer both types of machines, and the philosophy behind even the administration is totally different. The Microsoft approach does NOT require knowing the source, or having a full exploit in your hands, to be a practical administrator. It's all about ease-of-use, allowing the admin to function as an end-user of sorts.

      Would I ask for port information and other bits about the exploit? Sure I would. But most MS administrators would not.

  • I think that if people are willing to accept the limited amount of info about the inner workings of commercial software that companies like Microsoft provide, they should be happy with nothing more than an explanation of "here is the latest update, apply it". Even ISC.org is getting to a point where they will limit the info they give out immediately on fixes for BIND because the admins couldn't get the fixes in place in time before the hackers got word and started breaking into numerous sites. IIS administration would devolve into applying the patch of the week...oh, never mind it already has.

    If you don't demand more from their software, you deserve less. In all honesty, there are heaps of people who compile and install Apache and barely know what the code is really up to, making them no better than script kiddies. However, if you so choose to see what the inner workings of Apache are, it's no trade secret. The difference between the blind Microsoft installers and the blind Apache installers is that the Apache developers swarmed to fix the trouble when they got hit, where Microsoft wonders how many people noticed the bug to see if it's worth their royal while to fix it. True, the patch for this problem had been out a while, but many complained that it did not do the trick for them, and more is surely on the way.

  • by G Neric ( 176742 ) on Thursday August 16, 2001 @10:51AM (#2115309)
    The letter makes an argument based on the supposed "full cost" of the full disclosure of security holes. Unfortunately, this argument is hopelessly flawed from an economics perspective because he did not consider the cost that diminished awareness puts on the future.

    It was not until exploits started being written, and even later, after there were some major epidemics, that Unix admins and programmers tightened up on security. (sendmail? ftpd? bind? sheesh!)

    Did Microsoft learn anything from these previous incidents? Yes, but very little. The greater cost of cleaning up Microsoft software reflects the widespread use of it by relatively clueless lusers. But this widespread use means that "total cost" gets spread across a bunch more people. Was the disruption disproportionate to the laxity that Windows sysadmins and Microsoft programmers had shown immediately prior to this incident? No, it was a bargain. And trust me, the cost of Code Red cleanup is a bigger bargain if it means that people will wake up to the importance of security in the future.

    Right now in the wake of the dot-com collapse, an outbreak of Code Red means we can't get our email for a day or something. But when we are truly dependent on the net in the future it is imperative that we are ready and history shows us that we would not be without a few swift kicks in the pants like this. As I mentioned in another post, I don't think Richard Smith is sophisticated enough to be a leader on issues like this.

  • ...as the guy who got his name all over the news because he claimed to have helped the authorities track down the guy who wrote Melissa? His big discovery was that you can open the Word document in a hex editor and see the document GUID.
  • by Anonymous Coward
    You can find new security holes in NT automatically. Microsoft has tried to hush this up. The famous NTcrash [fx2.co.uk] program is an illustration of this. Microsoft leaned hard on the originators of that program, who were non-Microsoft NT internals experts, to suppress it. That program, which makes random system calls, demonstrates that NT 4 security was inferior to NT 3.51 security, and that NT4 had bad code borrowed from Windows 95 in the kernel. Microsoft didn't like that. It's very hard to find a copy of that program on the web. Watch that link disappear.

    NTCrash does more than make random calls; it stores what it's doing before it tries it, and after the reboot, avoids doing that again. So after a while, and many crashes, you accumulate a log of new vulnerabilities.

    There are later variations on that theme which find more subtle holes. Rather than just making random calls, it's more useful to permute valid calls slightly. That's been tried successfully.
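
    In pseudo-Python, the log-before-you-leap loop looks roughly like this; the system call itself is just a placeholder stub here, since NTcrash made real (and crash-prone) NT calls:

    import os
    import random

    LOG = "tried_calls.log"   # survives the crash/reboot cycle

    def already_tried():
        # calls logged on previous runs; the last entry is the one that crashed us
        if not os.path.exists(LOG):
            return set()
        with open(LOG) as f:
            return set(line.strip() for line in f)

    def make_system_call(call_id, args):
        pass   # stand-in for the raw, possibly crash-inducing NT call

    tried = already_tried()
    for _ in range(1000):
        call_id = random.randrange(256)
        args = tuple(random.randrange(2**32) for _ in range(4))
        key = "%d:%r" % (call_id, args)
        if key in tried:
            continue                   # already attempted on an earlier run
        with open(LOG, "a") as f:
            f.write(key + "\n")        # record it *before* making the call
        make_system_call(call_id, args)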

    The classic paper on this subject is The Tao of Windows Buffer Overflow [cultdeadcow.com], from the Cult of the Dead Cow.

    Considering that all this was known five years ago, there's no excuse for Microsoft products having any buffer overflow vulnerabilities. This falls between "gross negligence" and "reckless endangerment". Where's the plaintiff's bar when you need them?

  • This is absurd (Score:5, Insightful)

    by schon ( 31600 ) on Thursday August 16, 2001 @10:26AM (#2118878)
    Why is slashdot giving wind to this troll?

    This guy is woefully misinformed, and completely stupid.

    Anyone who is subscribed to Bugtraq knows that eEye has already responded to him, and the bottom line is that Code Red is not based, even in small part, on the eEye security bulletin.

    This proves that the guy is completely wrong: because Code Red wasn't based on the eEye bulletin, that means the "black hats" already knew about the vulnerability.

    Like scientists, security professionals rely on an existing body of work. If the only people who had access to this body were the vendors, it would slow down the white hats, making the entire situation worse, not better.

    Please do not feed this troll.
    • Re:This is absurd (Score:2, Interesting)

      by Remote ( 140616 )
      My hunch is that, given the fall in the overall level of opinions posted on Slashdot over the last few months, Hemos is diligently bringing up such absurd questions so as to educate people who, although misinformed, can spot good advice when exposed to it. :)
    • Re:This is absurd (Score:2, Insightful)

      by numberVI ( 94451 )
      True. Most arguments against Full Disclosure blithely ignore the fact that zero-day sploits exist and get passed around the "underground", sometimes MONTHS before you get an advisory from CERT. Also, you hear about it on BUGTRAQ days or weeks before CERT responds.

      Crackers and Kiddiez have more than just full disclosure in their community. Often they get entire rootkits!!!! Whitehats get advisories that are often late, vague, and incomplete.

      I'd say that gives the black hats a distinct advantage. They are nimble and unencumbered by the demands of PHBs, laws, morals, and silly dress codes.

      Full Disclosure is the one thing we *could* have that they already have, but it's usually under attack from the well-intentioned and misguided elitists who feel that "the unwashed masses" can't benefit from Full Disclosure. Then again, the road to hell is paved with good intentions.

    • "Why is slashdot giving wind to this troll?" CmdrTaco runs the site, and he does the same thing all the time. :) It gets the conversation going in a way that is unattainable when a bunch of nerds (and no layman) provide an alternate argument.

      And don't even try to make connections between CmdrTaco and "wind". :)

  • What? (Score:4, Insightful)

    by quartz ( 64169 ) <shadowman@mylaptop.com> on Thursday August 16, 2001 @10:23AM (#2120770) Homepage
    I thought the patch was already available when Code Red started spreading. Sorry, but while delaying the full disclosure can slow down the virus writers a little, it's not going to make lazy sysadmins apply patches to their servers any faster.
  • by Greyfox ( 87712 ) on Thursday August 16, 2001 @03:13PM (#2125348) Homepage Journal
    My concern is not Code Red or the Morris worm or any other piece of malware that's been discovered. My concern is the ones that haven't. The designers of Code Red didn't make any effort to make sure their worm was stealthy. Had they done so, they could have had their code quietly running pretty much everywhere without anyone ever noticing. Just from a corporate espionage standpoint, that's a scary thought. Never mind all the other nasty stuff you could do with full control of computers everywhere. And Code Red wasn't stopped long by firewalls, at least where I work. All it takes is for one user to get compromised and then dial in to the internal network without rebooting. At least one obviously did.

    How does this relate to the topic, exactly? Well full disclosure makes it easier to find, prevent or fix the ones who might be hiding. If application foo has a hole and a few people know about it, it's easier for one of them to quietly exploit it, possibly for years. And the quiet exploits are much, much more dangerous than the ones that are quickly discovered.

    I expect that companies will start lobbying Congress to make security disclosure illegal. Those will be the companies you want to avoid if you want your company's network to be secure.

    • full disclosure makes it easier to find, prevent or fix the ones who might be hiding

      ...compared to total non-disclosure. However, by considering only those two possibilities you fall prey to the fallacy of the excluded middle. Partial disclosure - e.g. of risks and countermeasures but not of mechanics, or preferentially to trustworthy security professionals - can provide the same benefits vis a vis total non-disclosure without total disclosure's problem of teaching the bad guys how to exploit a vulnerability.

  • Personally if I found a security bug I'd do this......

    1. If it was free software I would make a patch for it, then mail the patch and a full description of the problem to the makers of the product and several other security minded websites so that people would know that they had a problem and have the patch to fix it.

    2. If it was closed source proprietary software I would give the company a 5 day head start (even if it was MS) by telling them privately first, then if they hadn't posted a patch yet (which IMO would be irresponsible or inept of them) I would post full details of the problem (and if it was MS maybe even code to exploit it) to force the company to move its ass and distribute a patch.

    Would I keep it a secret? NO. Why not? Simply because if I did, anyone else who came across it could use it to their own advantage and hurt others. Why would I not simply inform the creator of proprietary software of the flaw and not publicly post it afterwards? Because they can't be trusted. (not period) I have a feeling that they would simply ignore the problem until the average person became aware of it, simply because they wouldn't want to waste the money correcting something that isn't well known. They would wait for the next BIG update to fix it, which would leave the people ignorant of the problem vulnerable.

    By giving them 5 days I give them a chance to do the right thing with no pressure, then I release it to twist their arm into doing the right thing. If they can't figure out how to fix it... tough shit, they should've made it open if they can't handle their own software. If they release a patch before 5 days, will I still release info and (possibly) code to exploit it? YES!!! Why? Because it is a sad fact that people will not install the patch until it is well known that they need it (take Code Red for example). By releasing an exploit, sure, I open the people who haven't patched yet up to abuse, but I also make the problem more known and force lazy sysadmins to get off their asses and get patched. In the end everyone is patched (except the stupid) and less damage has been done than would have been otherwise.

    My philosophy is "If I can find it, others can find it" and I feel that letting the few people that find it run rampant with it is irresponsible and will cause more damage than bringing attention to the problem, which may possibly let a few script kiddies do a small amount of damage, rather than let an expert use an unknown method to safely exploit others for years.

    Bottom Line? Security through Obscurity is ridiculous and does not work.
    • This isn't the first Microsoft vulnerability that eEye has documented, nor the first time they have come under fire for their handling of the release of the advisory and sample exploit code.

      eEye does give Microsoft advance notice before releasing details, but the minimal advance notice they give isn't sufficient for Microsoft to get moving on a fix, much less for thousands of admins to patch hundreds of thousands of servers.

      But who is ultimately at fault here? eEye for releasing the information, or the black hat for writing the worm, or Microsoft for releasing buggy code in the first place?

  • Microsoft deserves a head start, but that's it. If holes are kept to MS and the people that find them, it's only a matter of time before a Black Hat figures it out. Releasing the details puts pressure on the companies to fix the damn thing. Without that pressure, MS would feel little to no pressure to fix what they screwed up in the first place. You wouldn't be doing anyone but MS a favor by withholding this info. The lesson to be learned is that if you don't want to lose money from downtime and reboots, don't use Microsoft on your servers. Certainly don't depend on it.
    • "If holes are kept to MS and the people that find them, it's only a matter of time before a Black Hat figures it out."

      Yeah, but how much time? These many arguments that "full disclosure pushes Microsoft along in releasing the fix" have no grounded basis in reality. Besides couldn't it be possible to rile up the media hype necessary WITHOUT giving information as to how the exploit occurs? The only thing the average user needs to know is "it's a security hole, it's bad".

      • Yeah, but how much time?

        The shortest turnaround I've seen is about 12 hours for someone else to figure out a hole that didn't have all the details published initially.

        These many arguments that "full disclosure pushes Microsoft along in releasing the fix" have no grounded basis in reality.

        Sure they do. That is how it happened. Microsoft used to be one of the companies that would hide bugs, slipstream fixes, try to hide details, etc.. this was about 3-4 years ago. They are much better now. Guess why.

        Besides couldn't it be possible to rile up the media hype necessary WITHOUT giving information as to how the exploit occurs?

        Nope. The reporter wouldn't do a story with no details. There is no story without details. And the bug has to be sexy enough, too.

        The only thing the average user needs to know is "it's a security hole, it's bad".

        No, they also need to know if it affects them, and what does "Bad" mean. If they don't believe that it is really bad, average users won't bother.
  • More info (Score:5, Informative)

    by GeorgeH ( 5469 ) on Thursday August 16, 2001 @10:27AM (#2130266) Homepage Journal
    There was an interesting post [politechbot.com] about this on the Politech [politechbot.com] list, which includes a response from Elias Levy [securityfocus.com] (the guy who runs BUGTRAQ).
    • Re:More info (Score:3, Interesting)

      by Salamander ( 33735 )

      In that thread, Richard Smith asks:

      How should third-parties develop countermeasures?
      ...
      How should authors of vulnerability scanners and intrusion detection systems obtain information to produce new signatures?
      ...
      etc.

      By limited disclosure. Yes, Virginia, there is something between sweeping something under the carpet and laying out all the gory details for everyone (including other would-be virus/worm writers) to see. If a security-product vendor had information that would help their colleagues create barriers, signatures, etc., they could share that information with those colleagues - without having to share it with the entire world. They could release enough information publicly to allow one "skilled in the art" to create countermeasures, without providing a step-by-step recipe that even the relatively unskilled could use to create new exploits. There's no need to reveal *everything* to *everyone*.

      So why don't vendors do this? Do they not have faith in their colleagues' discretion? Hmmm. Do they not have faith that their colleagues can develop countermeasures based on partial information faster than black hats can create new exploits? Hmmm again. The *real* reason why they prefer full disclosure is discussed in one of my other posts to this thread.

      Smith goes on to say:

      What it boils down to is this: disclosure of detailed vulnerability information benefits security conscious people, while, in the short term, hurts people that do not keep up with security

      BZZZT! Wrong. The security-conscious people can get that benefit without full disclosure, with less risk to the security-naive rabble. Of course, it's in security companies' interests that security-naive people should get hurt, making them less security-naive and more likely to buy products or services from companies such as the one of which Smith is CTO. I sure am glad that I'm not in a business where making sure people get hurt is part of the business plan.

      • If a security-product vendor had information that would help their colleagues create barriers, signatures, etc., they could share that information with those colleagues - without having to share it with the entire world.

        Which means someone has to keep a list of colleagues and everyone with a vulnerability has to make sure to send to everyone on that list. So either some central authority decides who gets on the list or not, or else anyone can add themselves to the list and get added with little or no verification. The first will lead to the more small-time colleagues being excluded, while the latter will be more or less identical to what already exists.

        So, how does that solve the problem?

        They could release enough information publicly to allow one "skilled in the art" to create countermeasures, without providing a step-by-step recipe that even the relatively unskilled could use to create new exploits.

        Some of the virus/worm authors are quite skilled in the "art". And most script kiddies wouldn't know what to do even with a step-by-step recipe, they rely on others to make point and shoot kits.

        It would cut down somewhat on attacks, but could also slow the response of those trying to fix the problem.

        Of course, it's in security companies' interests that security-naive people should get hurt, making them less security-naive and more likely to buy products or services from companies such as the one of which Smith is CTO. I sure am glad that I'm not in a business where making sure people get hurt is part of the business plan.

        Wow, straw man!

        Do you really advocate dumbing down everyone because of all the clueless W2K and RedHat users who never install any security updates? Because they won't install security updates, no one gets to know about the vulnerability well enough to determine if they are affected. Which could lead to social engineering attacks where a malicious individual releases a limited-disclosure bulletin that says to take measures that will actually increase vulnerability, and no one can verify if it works or not (similar attacks have already been attempted, this isn't completely hypothetical).

        And, of course, in your entire post you mention nowhere that good practice is to alert the vendor before releasing to the public whenever possible. Instead, you imply that "full disclosure" doesn't give the vendor any chance to close the security holes.

        • Which means someone has to keep a list of colleagues and everyone with a vulnerability has to make sure to send to everyone on that list.

          For someone who just accused me of constructing strawmen, you were pretty quick to whip out one of your own. Ask the Gnutella or FreeNet folks whether distribution of information requires a central directory. Ask the PGP folks whether trust requires a central authority. More decentralized means of distribution can (and do) work rather well for security information.

          Do you really advocate dumbing down everyone because of all the clueless W2K and RedHat users who never install any security updates?

          Wow, strawman #2 already. No, I do not advocate that at all. In fact, my point in one of my posts to this thread is that those ignorant W2K/RedHat users won't apply the patches anyway, even with full disclosure.

          And, of course, in your entire post you mention nowhere that good practice is to alert the vendor before releasing to the public whenever possible.

          Perhaps because, just five minutes prior to the post you saw, I had congratulated someone else in another post for reiterating that very point. I don't like to repeat myself, and generally only do so when someone I'm talking to seems particularly thick.

          Instead, you imply that "full disclosure" doesn't give the vendor any chance to close the security holes.

          Yes, folks, we have a straw-man hat trick! No, there was no such implication in my post; you made that up.

  • security through obscurity is a fallacy, but it can delay the inevitable..

    seriously.. maybe a stepped grace-period would be an idea?

    step 1: bug is found, creator is notified
    step 2: two weeks later: if the bug is fixed, go to step 3; if not, disclose the existence of the bug, but not many details yet
    step 3: full disclosure
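
    in toy code, that schedule would look something like this (the second two-week interval is my own arbitrary number; the steps above only pin down the first one):

    from datetime import date, timedelta

    def disclosure_plan(found_on, fixed_on=None):
        plan = {"notify vendor": found_on}                 # step 1
        checkpoint = found_on + timedelta(weeks=2)         # step 2
        if fixed_on is not None and fixed_on <= checkpoint:
            plan["full disclosure"] = checkpoint           # step 3: bug already fixed
        else:
            plan["announce existence only"] = checkpoint
            plan["full disclosure"] = checkpoint + timedelta(weeks=2)  # second interval is arbitrary
        return plan

    print(disclosure_plan(date(2001, 8, 16)))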

    just shooting off the hip here...

    //rdj
    • That is pretty much the algorithm many people in the industry currently follow. If they discover something new, they inform the vendor and wait impatiently. If the vendor doesn't respond, or responds with something lame about not considering the vulnerability exploitable, the vulnerability is reported to the security community.

      I have read several emails from people who promise full disclosure shortly, but who are giving the vendor a chance to review their code because they acknowledged the problem.

  • What OS in its right mind allows code in the Stack Segment to be executed? If it's stack, it's obviously not a valid instruction, and should have been trapped.

    If the system is known to have a problem with buffer overflows, why not test it yourself before someone else exploits the hole? Why not test ALL of the software this way?

    This, "the most expensive computer virus in the history of the Internet" is a mere wake up call. Someone, somewhere, is going to learn from this, and other sources, and do something nastier and far more damaging. It will be more subtle, harder to detect, and will slowly take over all versions of windows, or it might be a blinding flash, splitting up the work to take over everything, hooking in multiple places, distributing its attack methods to make it harder to get a list of ALL of it's methods.

    Things are still very insecure, we're all going to get hacked, it's just a question of when, how we respond, and what we learn, in the end.

    I hope everyone has a nice, complete, MD5 hash/Binary compare checked backup of their files.
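
    For what it's worth, something as simple as this sketch will do for recording such a manifest to compare against later (paths are placeholders, and a real version should read large files in chunks):

    import hashlib, os

    def manifest(root):
        sums = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    sums[path] = hashlib.md5(f.read()).hexdigest()
        return sums

    saved = manifest("/srv/www")        # at backup time (hypothetical path)
    # ...later, before trusting a restore or a running system...
    current = manifest("/srv/www")
    for path in saved:
        if current.get(path) != saved[path]:
            print("no longer matches:", path)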

    --Mike--

    • What OS in its right mind allows code in the Stack Segment to be executed? If it's stack, it's obviously not a valid instruction, and should have been trapped.
      Executable stack can be used for trampolines and thunking. True, it *is* something of a kluge, but it is legitimate.
  • Full disclosure is the only answer.

    When the vulnerability applies to Windows:
    • publish the vulnerability along with at least one example exploit
    • write a paper letter to the vendor telling them of the website which describes the vulnerability. (Include sufficient postage on letter)
    When the vulnerability applies to Linux, you still have full disclosure.
    • Tell the developers
    • They fix the bug
    • Full source code for the fix is released into the public -- which constitutes full disclosure.
    • Comments in the source explain the vulnerability
    Seems simple enough.
  • Well Put, But. (Score:2, Informative)

    by ers81239 ( 94163 )
    The letter is very well written and I agree with the author. My only criticism is that Microsoft does not have a good system for accepting such advisories. It's always been my experience that you have to pay Microsoft [microsoft.com] to listen to your complaints/concerns about their software.

    Maybe eEye tried to tell them but they didn't listen?
    • You're a little off in your "facts". Despite also having an email address for security warnings, Microsoft accepts full bug reports for free. You just have to mention it in your phone call, and they patch you through immediately to the appropriate party.

      They also accept general concerns/suggestions/complaints through email (XP, for example, has an email address for concerns and suggestions).

    • It worked for me when I had a notification for them. Of course, they only sent me back a snide comment saying that it wasn't their problem. After the story broke on wired.com and NYTimes, they finally responded. "We're such fucking idiots" is what the first guy said to me on the phone, wondering why some of his underlings hadn't given more of an effort.

      Oh, and it was also a Netscape problem, and they ignored me as well.

  • Blame Microsoft for having released software with huge security flaws.
    Full disclosure is definitely a good thing. Not only does it pressure software vendors to release a patch, it also helps other programmers understand good programming practices.
    And *this* helps every application down the security road.
    Today, more and more software is designed with security in mind. 10 years ago, nobody was careful about this.
    How could programmers know how to code secure software without reviewing full disclosures of other software?
    Yes, there are "secure programming" mailing lists, but they'd be abstract and far from reality without any knowledge of real cases.

  • by tycage ( 96002 ) <tycage@gmail.com> on Thursday August 16, 2001 @10:16AM (#2132805) Homepage

    My only problem with this is that Microsoft lacks the motivation to fix the hole if no one else knows what it is.

    Sure, people can point out that "some hole" exists. But unless the details are made public, Microsoft (or whoever) isn't motivated to fix it, and no one can check up after them to see if it has been fixed. We have to take their word for it. That would make me very nervous.

    I wouldn't be against a "grace period" where the vendor is told of the hole so that a patch can be ready when it is announced, but it would need a time limit on it to keep it from being delayed forever.

    --Ty

    • I'm a believer in a grace period. However, my only hesitation is this: say you give a big vendor, perhaps Microsoft, notice that there is a serious hole in one of their products, and that you will give them a fixed amount of time to do something until you post full disclosure to Bugtraq.

      Microsoft has enough lawyers that seem to have plenty of free time on their hands, that my bet would be that they'd try to shut you down, and prevent you from making the promised disclosure.

      These tactics scare the bejesus outta me, as far as their implications for security information distribution between professionals.

      The crackers and virus writers have, or will have, the information sooner than the admins. Anything that further delays getting me information about potential threats to my network, I regard as irresponsible.
    • So apparently you believe when eEye released their press release discussing this problem, there was no fix available from Microsoft?

      Do you have proof for this claim? Or are you just talking out of your ass?

      That "grace period" is how it's done today. Microsoft released their security bulletin on the same day as eEye released their disclosure information. Because they had been working together on the issue for quite some time beforehand.

        I don't think I mentioned eEye's behavior at all. My response was to the e-mail referenced in the posting. I didn't mention how anyone does it now, only how I think it should be handled. I don't really have any knowledge of "how it's done today." Perhaps you can let us know what you are basing that statement on so the rest of us will know as well.

        --Ty

    • by dudle ( 93939 ) on Thursday August 16, 2001 @10:59AM (#2135868) Homepage
      My only problem with this is that Microsoft lacks the motivation to fix the hole if no one else knows what it is.

      I don't generally defend MS but what you said is simply not true. If you compare MS to other companies, I think they have a decent response time and take ALL issues quite seriously. Yes their product sucks but as far as I know (and I read Bugtraq religiously), they are usually not too far behind.

      The first one who replies to my post by mentioning Code Red doesn't know what he is talking about. MS released a patch for the hole Code Red exploited weeks before the worm spread like wildfire. Get your story straight.

      Every now and then there is this discussion on Bugtraq about full disclosure. It started last week with someone mentioning the cost of the worm and its variants, and how eEye could have done better, and so on.

      Let me make myself clear: Full disclosure is mandatory! Without it, we are all screwed.

      Please flame accordingly.

    • Agreed. Today Microsoft released a cumulative fix [microsoft.com] for IIS 4 and IIS 5. It fixes five previously unknown bugs, as well as some known ones, including the Code Red hole. Anyone want to bet that programmers at Microsoft are the *only* people who knew about these holes? Perhaps they were tipped off by some White Hat hacker, but more likely they discovered the hole from an intrusion attempt.
      • well.. they acknowledge people who found the bugs.

        Acknowledgments
        Microsoft thanks the following people for working with us to protect customers:

        John Waters of Deloitte and Touche for reporting the MIME type denial of service vulnerability.
        The NSFocus Security Team (http://www.nsfocus.com) for reporting the SSI privilege elevation vulnerability.
        Oded Horovitz of Entercept(TM) Security Technologies (http://www.entercept.com) for reporting the system file listing privilege elevation vulnerability.
    • Full disclosure is a good thing - in theory. I'm all for releasing the details of an exploit, but in a partial, then full manner. I'm not sure if this is what happens in all cases, but many white-hats operate in the following manner:

      • Make the vendor aware of the full extent of the problem, to give them the information required to develop and test a patch.
      • After a grace period, the full exploit should be made public, partially to force/persuade the vendor to ensure that the patch is ready in due course, and also to educate the wider community.

      Unfortunately the one thing that this system cannot do is convince lazy or inept sysadmins/users to patch their systems.

      I can't comprehend the mentality that afflicts these people.

      "we've discovered an easy way for the entire world to break into your bank account and take all your money - but if you call your friendly bank now, they'll give you an gold star to stick on your credit card that will stop all this."

      Watch for the thousands of phone calls that the bank would get....

      Compare with:

      "Hi it's $FAVOURITE_SOFTWARE_VENDOR - your server will be cracked unless you install this patch, potentially costing you your customers trust, their custom and your money as your servers fold, not to mention the untold wrath of other sysadmins whos networks your cracked systems have been attacking..."

      Barely half the people out there bother.

      Yes, it might be possible to say that it's the vendor's fault that the software isn't secure - after all, all software should be perfect (yep, I know Microsoft are really taking the piss on this one - closed source and poor security - and you have to pay for the honour of being a security hazard). It's also possible to say that it's the cracker or virus/worm writer's fault for attacking systems. It's especially easy to say that it's the sysadmin (if they deserve the title) that can't patch a system up.

      However, it's more of a combination of all three, and probably more reasons than I can think of. Everyone needs to pull their fingers out of their behind, and do their own part.

      The problems in the system need to be ironed out before it hits the shelves: good coding, code audits and sensible defaults, to name but a few things the vendor needs to do - not just for security, but stability and maintainability. The admins/users need to learn their own systems at least enough to not screw everyone over if something does go wrong, and to prevent things going wrong in the first place. The virus writers and crackers?

      Hell, I can't think of everything.

      • by WNight ( 23683 ) on Thursday August 16, 2001 @06:49PM (#2152426) Homepage
        Actually, patching your server is one of the worst things you can do, if you aren't careful.

        It depends on the OS, the severity, the size of the fix, and how easy it is to block in another way.

        For an open source OS, with a simple fix, where you can look at it and be reasonably sure the patch is secure, go for it if the bug is serious.

        For a closed-source OS, or a really complex patch, don't apply it until you've seen reports from people who do (give it a month or two) unless it's a huge bug and you can't block it with another method.

        For example, take port 139 overflows. Don't just patch Windows; firewall port 139 from the outside world.

        Another example, Code Red... Use a filtering proxy/firewall to dump any port-80 traffic that requests "default.ida"
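
        Something along these lines would do it - just a sketch, mind you: the backend address is made up, it only inspects the first chunk of each request, and in practice you'd run it on port 80 or redirect traffic to it:

        import socket
        import threading

        LISTEN_PORT = 8080          # in practice, port 80 or a firewall redirect
        BACKEND = ("10.0.0.2", 80)  # the real web server (hypothetical address)

        def handle(client):
            request = client.recv(8192)        # first chunk of the HTTP request
            if b"default.ida" in request:      # Code Red's calling card
                client.close()                 # drop it on the floor
                return
            backend = socket.create_connection(BACKEND)
            backend.sendall(request)           # pass the request through
            while True:
                data = backend.recv(8192)
                if not data:
                    break
                client.sendall(data)           # relay the response
            backend.close()
            client.close()

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("", LISTEN_PORT))
        server.listen(5)
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,)).start()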

        Keep in mind that patches aren't tested very well, simply because of the urgency of releasing them. I wouldn't trust an alpha-kernel on my servers, why would I try a webserver with an alpha patch?

        This is especially important if you're working with a Microsoft system. They've got a long history of releasing buggy service packs that can't be properly rolled back, etc.

        This is why full disclosure is *essential*. Competent admins can implement their own fixes while they wait for something official (and tested) to be developed.

        Imagine if Code Red had been described only as a buffer overflow... It wouldn't be possible to protect yourself from it.
  • The guy who signed that letter, Richard M. Smith, CTO of the Privacy Foundation, was on the radio in Boston yesterday (you can probably find it in the archives of Here and Now at http://wbur.org/ - I'm not linking because I don't live in Boston and I don't want my feed slashdotted :). The news story was about a court ruling in Massachusetts where the automatic toll "scanner" information was turned over to the court for some Big Brother-like law enforcement purposes. Courts approved the info transfer even though the law creating the toll system explicitly said it would be "private".

    So, anyway, Richard Smith, this supposed privacy guru, it turns out happily uses this very same toll system! Despite its obvious privacy problems, he can't be bothered to wait a few seconds a day to pay cash. Not only that, but he shows up in public forums letting us all know how he feels.

    Now, on a security list (he is CTO after all, woo woo!) he is praising Microsoft's security policies...?

    Ya know, Stallman can be incredibly annoying. But, when it comes to a public figure like this, his "purity" is somewhat reassuring. I think Richard Smith is probably a nice and smart guy, and he's entitled to his opinion... but CTO of the Privacy Foundation is also how he's "entitled", and if you ask me, he loses credibility all the time. So what? Well, he diminishes the causes of privacy and security as he sinks.

  • by twivel ( 89696 ) on Thursday August 16, 2001 @10:35AM (#2134722)
    First of all... Full disclosure did not facilitate the creation of this worm. It was based off of a previous worm that was available long before the details of the exploit used by Code Red were made available. This particular worm did not use the research from eEye's exploit; that's obvious from the way this worm exploits the vulnerability.

    Vendors all around view a vulnerability that has been publicly exposed as a much higher priority than those that have not been exposed. Over and over again, history has shown that a vendor will try to cover up a vulnerability if it is not exposed, to avoid bad publicity. (No, this is not specific to Microsoft; all vendors hate bad publicity.) If an exploit is publicly available for a particular vulnerability, it also changes the method in which the vendor advertises the patch, thus increasing the number of people who know about it and install the patch.

    Full disclosure provides many useful functions, including the ability for admins to test for vulnerabilities in their own systems. It gives them the ability to verify that a system has been properly secured after a workaround has been implemented.

    Partial disclosure, which is often suggested, is no different than full disclosure, except it may give the admins a false sense of security. With partial disclosure, the existence of a bug is disclosed to the public - but the details are not. Sadly enough, once the cat is out of the bag, it's only a day or two before someone else can figure out the exploit. Once the vendor releases a patch, it is trivial to do binary diffs on the provided updates and figure out the details of creating the exploit. In fact, tools that help automate this already exist today.
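
    The crudest form of such a diff is just a byte-by-byte comparison of the old and new binaries; the file names below are placeholders, and real tools go on to map the changed offsets back to functions:

    def binary_diff(old_path, new_path, limit=20):
        with open(old_path, "rb") as f:
            old = f.read()
        with open(new_path, "rb") as f:
            new = f.read()
        shown = 0
        for offset in range(min(len(old), len(new))):
            if old[offset] != new[offset]:
                print("offset 0x%08x: %02x -> %02x" % (offset, old[offset], new[offset]))
                shown += 1
                if shown >= limit:
                    print("...")
                    break
        if len(old) != len(new):
            print("sizes differ: %d vs %d bytes" % (len(old), len(new)))

    binary_diff("idq_old.dll", "idq_patched.dll")   # placeholder file names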

    The sad thing about Code Red is this: patches have been available for quite a while now, yet systems are still getting hit. The widespread effect of Code Red is the ONLY thing that will get the admins who never patch their systems to potentially pay attention to what's going on.

    Full Disclosure is not the problem. If one person has found the vulnerabilities, there are generally more who have found them and are actively exploiting it already. To think otherwise is to seriously underestimate the cracker community.

    --
    Twivel

  • ...instead of a tap on the shoulder. Some companies need some "convincing" to make the necessary changes in a timely manner. Microsoft is definitely one of those companies.
    • ...instead of a tap on the shoulder. Some companies need some "convincing" to make the necessary changes in a timely manner. Microsoft is definitely one of those companies.

      Unrepentant bullshit. Microsoft is very good at getting fixes out. Some bug-hunters expect a 2 hour turnaround time on their reports before "forcing MS to fix this by going public". Eeye even says that MS was quick in putting out a patch to fix the hole. The vast majority of bug hunters that actually take the time to work WITH Microsoft say that MS is quick in getting patches developed and in the hands of administrators, where they aren't applied (but that's a different story). Where's the sledgehammer? Can you support your claim with any evidence of any kind, or is this merely Yet Another Case of Uninformed Microsoft Bashing?
    • A sledgehammer isn't necessary if the vulnerability is relatively difficult to find. There WAS no time limit, until eEye decided to impose one by releasing the code.

      I've always been firmly against virus-watchers who release full exploits to the general public. It simply isn't necessary. The same result (warning Microsoft) could have been achieved without causing such a hyped panic.

      It's akin to not only delivering a news story on a serial thief who's robbed many homes, but giving the general public (and in the process, other criminals) full and agonizing details on how he broke in. No one can argue that anyone other than the police *needs* that information for the safety of others.

      • I've always been a firm believer against virus-watchers who release full exploits to the general public. It simply isn't necessary. The same results (warning Microsoft) could have been done without causing such a hyped panic.

        If the only action taken is to warn Microsoft, someone will discover the problem elsewhere, eventually. In the meantime, Microsoft is unlikely to take the complaint seriously - after all, the damage is only "theoretical", right? Now, it's a fact that there are a lot of inattentive, lazy sysadmins out there, many of whom are running IIS, and that's why they haven't all applied the patch yet - but at least with this in the news, it's harder for them to avoid it. How many would bother to apply the patch if there weren't any obvious benefit to doing so? Many might choose not to disturb a working installation.

        Personally, I think that the only software that can ever hope to be secure in the real world is built like a tank. Use a language or library that makes it impossible to have buffer overflows; assign permissions to everything and never give out more than you need to; etc. But in an environment where exploits are only theoretical and only announced to the entity responsible for fixing them, you have to admit that companies like Microsoft will be very slow to fix them.

        • The problem is not "lazy sysadmins". I'll bet most of these machines didn't even have one. Most sysadmins do nothing more than think about the patches they'd like to install to prevent having to clean up messes later. And another impediment to good administration is managerial environments that don't encourage pro-active administration, i.e. they don't trust the admins to do their own job. Where this kind of trust comes from, I don't know. The lazy admin is a myth. Admins aren't paid to be lazy, and the lazy one's wouldn't last. Breakins occur to sites whose admins hands are tied, or to sites that in effect have no admin ("gee whiz I'd like a webserver and this microsoft one seems easy...ooh neato it's up!"). Blaming admins for breakin's is like blaming car problems on auto mechanics. "Yes all those lazy mechanics out there didn't wan't to tune up their cars...they'd rather play quake". Absurd. Most mechanical car failures are probably due to owner neglect, not mechanic neglect (other than what is generated by not seeing one). Dentists don't cause cavities. Admins don't cause breakins. Moreover security isn't typically a function that is even *expected* of most admin jobs I've seen. I've practically begged every outfit I've worked for to develop a security policy, check passwords, etc. and it's a small minority that cares- "there are other irons in the fire". Security is a hard to quantify benefit. If you keep the alligators away, who is to know how bad they are? If the alligators are discrete as well, all the more a problem. -florkle
        • "Microsoft is unlikely to take the complaint seriously - after all, the damage is only "theoretical", right?"

          I've heard this argument an awful lot, and no one in the open source community (who seems to want to use this argument) has ever been able to bring any factual instances to light. Has there ever really been a Microsoft vulnerability reported to MS where the company replied "That damage is only theoretical. We don't feel obliged to fix it." I mean, a real world story.

          And keep in mind, MS probably receives dozens of fake security exploits a day by open source/hacking zealots (and I include myself wholeheartedly in that group). You can only expend so much money on determining what exploits are "real".

          • I've heard this argument an awful lot, and no one in the open source community (who seems to want to use this argument) has ever been able to bring any factual instances to light. Has there ever really been a Microsoft vulnerability reported to MS where the company replied "That damage is only theoretical. We don't feel obliged to fix it." I mean, a real world story.

            I admit I don't personally have such a story. However, you have to admit that this result is unlikely - why would Microsoft announce that they're not going to fix the problem? Much more likely is just silently ignoring it.

            And keep in mind, MS probably receives dozens of fake security exploits a day by open source/hacking zealots (and I include myself wholeheartedly in that group). You can only expend so much money on determining what exploits are "real".

            Doesn't this go against the other point you were making, though? In a way, the writing of exploits prevents Microsoft's having to spend so much money on determining which ones are "real" - if there's an exploit publicly available, it's definitely real.

          • Microsoft doesn't seem to do that anymore, because they've adapted to the reality of full disclosure. However, many software vendors still do sweep problems under the rug - in fact this seems to be the 'default setting' for a software vendor.
            If you read BugTraq, it seems like half the vulnerability postings say "I notified $VENDOR four weeks ago, and they failed to respond/said it's a feature/said it's not worth fixing."
            One vendor revved their firmware three times while ignoring a huge vulnerability that had been reported to them. Finally the researcher posted it to BugTraq. And this is not exceptional - I remember it only because I read it recently.
            If you look at Microsoft's vulnerability announcements, you can see the evolution. They used to put a little PR spin on the vulnerability, claiming it would take 'special software' to exploit it, or otherwise implying that an exploit was unlikely. They seem to be reducing these attempts at damage control, since they realize that everyone realizes that if it can be done, it can be automated.
            I mention these little PR driblets because they demonstrate that the advisories are issued under duress - Microsoft feels that it would be better not to issue an advisory. And they acknowledge the duress honestly - part of adapting to the full disclosure world - by giving credit to the researcher who reported the vulnerability. That's how they reward people for telling them first.
          • I've heard this argument an awful lot, and no one in the open source community (who seems to want to use this argument) has ever been able to bring any factual instances to light. Has there ever really been a Microsoft vulnerability reported to MS where the company replied "That damage is only theoretical. We don't feel obliged to fix it." I mean, a real world story.

            Sure - Melissa et al. For years people have been bitching about the security of executing email attachments in Microsoft Outlook, and the amount of control over the machine that executed attachments have (no sandbox). The initial response from Microsoft was "that's a feature, not a bug", and even though there have been some steps in the right direction, the problem still exists and still breaks out into an epidemic from time to time.

            I suppose that you could argue that clicking on attachments is more social engineering, but on the other hand Microsoft products are specifically sold as "easy to use". If it's so easy to use, why do users still fall prey to simplistic social attacks?

            I'll admit that full disclosure hasn't even helped in this case, though - Microsoft and the whole world has known about the problem for a long time and it's still with us.

            And keep in mind, MS probably receives dozens of fake security exploits a day by open source/hacking zealots (and I include myself wholeheartedly in that group). You can only expend so much money on determining what exploits are "real".

            My heart bleeds for them - "we have so many security exploits that we can't even take the time to figure out which of them are real". Hint: do it right the first time, and then hire yourself some crackers to do this poking and prodding for holes that until now has been reserved mostly for the black hat community. I don't have much pity for Microsoft's inability to fix their security problems - they dug themselves into that hole by valuing time-to-market, embracing-and-extending, and pseudo-usability more highly than security and actual functionality.

            • "My heart bleeds for them - "we have so many security exploits that we can't even take the time to figure out which of them are real"."

              You've completely missed the point. The problem is not that "there are too many security exploits" but that many are simply tomfoolery brought about by the community. Do you have any idea how many reported security exploits MS receives that have no basis in reality, and are only there to get their goat? And it's not as if the security people should have to pay for the mistakes of the rest of the company (monopolistic practices, etc.).

      • Well, in this case I agree that maybe a total disclosure from eEye was a bit of a conflict of interest. Here is an analogy to illuminate my point. Suppose that ADT (or some other home/commercial security company) had a flaw in their security systems which actually made it easier to break into a home or business. It would be one thing for a news reporter doing an informational story to warn people that a flaw exists; it is quite another for a third-party company that sells add-ons for the security company's systems to give potential wrongdoers step-by-step instructions on how to exploit the flaw. This is basically what eEye is doing: if you notice, on the same URL where one can find the free Code Red detection tool there is a link to their SecureIIS web product. Basically they have released damaging information to stimulate sales of a product they produce... not exactly ethical.
      • There WAS no time limit, until eEye decided to instill one by releasing the code.

        They did not release any code.
      • I'm sorry, but I would really prefer to have the information that a certain lock installed on my front door can be picked just by staring at it really hard, so I can go out and fix it or replace it.
        If all I had to go on was "something's wrong with the door", what the hell am I supposed to do about it?
        Sure, other criminals might know about it now. But I've already fixed the problem, or I have a contingency shotgun in place, etc.
      • Can you go check to see what day eEye released their disclosure?

        Hint: June 18, 2001

    Now go check to see what day Microsoft released their disclosure.

        Hint: June 18, 2001

        What was this sledgehammer?
  • The only way that companies can ultimately be made responsible is through full disclosure. The only way you can know if they actually fixed or patched anything is through full disclosure. If you don't trust your vendor enough, either you can stop using them or you can check them against what they say. Even if the real number is 20 million, that is nothing compared to MS or any other company not fixing the problem or doing a poor job of it. Remember that software companies are judgment-proof, unlike other types of companies, so you can't even add the risk cost of litigation into the equation like you could with pharmaceuticals or automobiles.
  • This is such blatant flamebait that I can't believe it's still making the rounds. The note was posted to Bugtraq and has become the poster child for the anti-full-disclosure crowd.

    Slashdot is trolling us--successfully, it looks like. Too bad we can't mod the story itself down!

    This debate gets hashed out time and time again, and settled--time and time again. Anyone who argues against full disclosure has never been a system admin or been deep enough into someone else's exploit-ridden code to feel the pain.

    Exploit disclosures are like the work-saving package collections of *BSD: someone else has done the work for you. For those who are in the know, this means we don't have to fire up our own copy of IDA or grep spaghetti source code to figure out what the heck is going on.

    Why should I worry about the lower forms of system admin life and hold their ignorant hands when my more important systems and the systems of my clients are at stake? If you can't stand the heat, bloody well get out of the kitchen and leave the work to someone who knows what they're doing.
  • by jsse ( 254124 ) on Thursday August 16, 2001 @10:29AM (#2137788) Homepage Journal
    Wouldn't it have been much better for eEye to give the details of the buffer overflow only to Microsoft?

    That is based on the assumption that Microsoft would take immediate action for the benefit of society.

    Ok, take a look at this [theregister.co.uk]:
    The update, which amounts to a point release for both IIS 4 and IIS 5, also addresses five previously undisclosed vulnerabilities with IIS, which could result in either denial of service or privilege elevation.

    Five undisclosed vulnerabilities! Smart crackers might have enjoyed exploiting them for months!

    When would Microsoft have disclosed fixes for those vulnerabilities? In the next Service Pack? Does that mean they wouldn't have been fixed at all if not for this Code Red incident?

    How could you rely on a group of people whose actions are unaccountable?

    Ok, you can mod me troll now if you don't like that I speak ill of Microsoft.
    • Five undisclosed vulnerabilities! Smart crackers might have enjoyed exploiting them for months!

      Is it not also possible that they discovered these vulnerabilities in their own testing, and decided to fix them before an exploit appeared? Would you fault them for that? I know how the herd likes to jump to the most negative possible conclusions regarding everything MS does, but simple honesty requires that we at least mention and consider alternative explanations.

      • Is it not also possible that they discovered these vulnerabilities in their own testing, and decided to fix them before an exploit appeared?

        It's possible, but we can all see that they prefer to sit on them until the next Service Pack. If not for this Code Red incident we wouldn't know. They have a hotfix system (for fixes between Service Packs), but we don't see any hotfix for these vulnerabilities.

        That gives crackers plenty of time to rob the general public blind. That's what we are worried about. I've heard somewhere that Russian mafias are constantly cracking US companies' web servers; they probably wouldn't have publicized the vulnerability like eEye did if they had discovered it first.

        We do not jump to the most negative conclusions for nothing.
  • Me: Hmm... This is pretty serious shiznit! Maybe I should disable indexing on my company's IIS server...

    Boss: Okay. You do that.

    Me: Clickity Clickity Click...

    ~ ONE MONTH LATER ~

    Boss: Oh god! The world's going to end! Our profits are going to dry up. Code Red is trying to rape my daughters...

    Me: Relax, buddy! We've got it taken care of. Remember we disabled indexing on our IIS box? I also installed the patch from M$ just as soon as it came out.

    Boss: Oh... Okay.

    ~ ONE WEEK LATER ~

    Boss: Oh god! The world's going to end! Our profits are going to dry up. Code Red II tried to assault me behind the bar last night.

    Me: Relax, buddy! Remember, we disabled indexing on our IIS box and installed the patch. Remember?

    Boss: We did that?

    The lesson here is that eEye was perfectly responsible in releasing the information, and did so with the knowledge of Microsoft, so that M$ could release a patch in time to fix a hole that could have halted governments. As bad as it was, it could have been worse.
  • There's also a series of articles on ntbugtraq talking about this issue. Russ Cooper is a huge proponent of what he calls 'Responsible Disclosure', whereby a mailing list is created whose subscribers are limited to people in the industry - namely vendors of security products, operating systems, and other tools.

    This is the process used in the anti-virus community today, and it's been working well.

    One of the points Russ made was that eEye could have discussed this issue on the mailing list before issuing a press release. In addition to feedback from Microsoft, there would have been peer review. Additional information would have resulted, along with clarifications on impact and so forth.

    Then a final press release could have gone out, giving eEye full credit for finding the issue, but providing a wealth of useful information to the end-user/admin type folks.

    Russ also raised a point about eEye's motivation. Why do they insist on not only full disclosure, but also releasing exploit code? Again he raises a good point, and I think it's quite clear.

    eEye is in business to sell a product which supposedly protects you against these types of attacks before they happen. So it is in their best interest that an attack is quickly released and spreads rapidly, thus generating mass hysteria. Only with such hysteria can they generate traffic to their site and obtain orders for their product.

    • Russ also raised a point about eEye's motivation. Why do they insist on not only full disclosure, but also releasing exploit code? Again he raises a good point, and I think it's quite clear.

      This question sort of assumes that if eEye doesn't release the exploit code, the code won't be written. On the contrary - exploit code is usually released as proof-of-concept code, intended to show that the vulnerability indeed exists. Otherwise, vendors have a habit of stonewalling: "that vulnerability is entirely theoretical."
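
      To make the distinction concrete, here is a rough sketch of what a harmless proof-of-concept probe can look like. To be clear, this is not eEye's code; the target address, port, and field length are made-up placeholders for a test box you own. It sends one deliberately over-long request field and reports whether the server drops the connection - it carries no payload at all.

        /*
         * Hypothetical proof-of-concept probe - not an exploit. It opens a TCP
         * connection, sends a request with one absurdly long field, and reports
         * whether the server closes the connection on us. Host, port, and the
         * padding length are placeholder assumptions.
         */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(void)
        {
            const char *host = "192.0.2.10";   /* placeholder: a test server you own */
            int pad_len = 4096;                /* arbitrary "far too long" field size */
            char request[8192];
            char reply[1024];
            struct sockaddr_in dst;
            int fd, n;

            /* Build a request whose one field is far longer than any sane value. */
            memset(request, 0, sizeof(request));
            strcpy(request, "GET /");
            memset(request + strlen(request), 'A', pad_len);
            strcat(request, " HTTP/1.0\r\n\r\n");

            fd = socket(AF_INET, SOCK_STREAM, 0);
            memset(&dst, 0, sizeof(dst));
            dst.sin_family = AF_INET;
            dst.sin_port = htons(80);
            inet_pton(AF_INET, host, &dst.sin_addr);

            if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
                perror("connect");
                return 1;
            }

            send(fd, request, strlen(request), 0);
            n = recv(fd, reply, sizeof(reply) - 1, 0);
            if (n <= 0)
                printf("connection dropped - the long field may have crashed the server\n");
            else
                printf("server answered normally (%d bytes)\n", n);

            close(fd);
            return 0;
        }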

  • eEye's response (Score:2, Informative)

    by TazMainiac ( 32739 )
    eEye's response,

    http://lwn.net/2001/0816/a/eeye.php3

    clearly points out that they provided *NOTHING* to the virus writer, and in fact the virus writer used another virus as a template. The criticism in this case is quite unfounded.
  • In yesterday's Crypto-Gram [counterpane.com], Bruce Schneier took the unexpected stance of also putting part of the Code Red blame on eEye. One particularly salient quote: "You can argue that eEye did the right thing by publicizing this vulnerability, but I personally am getting a little tired of them adding weapons to hackers' arsenals."

    What's the world coming to when everyone's favorite security guru starts blaming the messenger, too?

    • You can argue that eEye did the right thing by publicizing this vulnerability, but I personally am getting a little tired of them adding weapons to hackers' arsenals.

      I could equally argue that security is a choice; would you use or endorse a product that has had a bad rep with regards to security and the necessary actions required to correct security flaws?

      Considering that the vulnerability that Code Red exploits could have been fixed months before, I as a system administrator myself have to ask the following...

      1. Do these administrators even bother to read security advisories with respect to the products they use?
      2. Did they bother to try to patch their systems?
      3. If they didn't bother to try patching their systems, did they take some other form of corrective action to avoid the potential problems?

      If you were to survey those affected by Code Red, you would probably get a lot of "NO" responses.

      These are probably the same individuals or companies that don't take security seriously. Security is a lot more than just applying patches whenever vulnerabilities are found. It should influence how your network is designed, how the services are partitioned on that network, and what information is allowed to travel in and out of your network, etc.

      <plug for egress filtering>

      If people actually took the time to look at their network architectures, they would probably be hard pressed to find a valid reason why their web servers would need to initiate a connection to another web server outside of their network.

      </plug for egress filtering>

      I therefore submit that full disclosure of the IIS vulnerability had no real influence on the spread of Code Red. This problem could have been avoided by applying a fix, but some parties chose to do nothing. The spread of Code Red is a realized consequence of that choice.
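
      As for actually verifying the egress-filtering point above: here is a minimal sketch of a test you could run on the web server itself. The external address below is only a placeholder - substitute a host you control outside your network. If the connect() succeeds, the box can reach arbitrary outside web servers, which is exactly what a worm needs to spread; with egress filtering in place it should fail or time out.

        /*
         * Quick egress-filtering sanity check - run on the web server itself.
         * The destination address is a placeholder assumption; use a host you
         * control that sits outside your own network.
         */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(void)
        {
            struct sockaddr_in dst;
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            memset(&dst, 0, sizeof(dst));
            dst.sin_family = AF_INET;
            dst.sin_port = htons(80);                            /* outbound HTTP */
            inet_pton(AF_INET, "198.51.100.1", &dst.sin_addr);   /* placeholder external host */

            if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0)
                printf("outbound port 80 is reachable: no egress filtering on this path\n");
            else
                perror("connect failed (egress filtering may be doing its job)");

            close(fd);
            return 0;
        }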

  • Old News (Score:3, Informative)

    by looie ( 9995 ) <michael@trollope.org> on Thursday August 16, 2001 @01:23PM (#2141415) Homepage
    First of all, this email and the ensuing fracas are old news. If they haven't already been covered on slashdot, then that just shows that people here are behind the curve. It's been gummed to death in numerous other places.

    Second, eEye already pointed out that the author of the email actually knew nothing about the issue, as the exploit had been used months before they posted their description.

    Here's part of a response to that email, from an employee of eEye:

    Lets get the facts straight. CodeRed is based off of another worm that was written for a .htr ISAPI buffer overflow. CodeRed is an almost identical copy of the .htr worm. A worm which was released back in April. A worm which exploited an UNPUBLISHED vulnerability within IIS which was silently patched by Microsoft without notification to anyone. Therefore IDS vendors never had a signature and the .htr worm went unnoticed. To bad a security company had not found the flaw, then there would have been details, signatures made, and IDS systems would have detected the first instance of CodeRed back in April.

    Okay, so the guy who wrote the letter blaming eEye was a fool who spouted off without any of the facts of the case. But it looks like he has a lot of company on slashdot. Maybe they ought to rename the site 'slashdork.'

    mp

  • ...malicious virii are =ALWAYS= possible. Disclosing the information doesn't change the fact that the information already existed to be found.

    This is why "security through obscurity" is a paradox inside an absurdity. From the moment a piece of software is available, ALL information contained therein is ALSO available. And that includes information on buffer overflows, other exploits, etc.

    Since releasing the information doesn't actually add any information to what is already out there (it merely changes it into a more "readable" form), it makes no sense to question the release of such information.
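
    To be concrete about what kind of information is "already out there": the classic hole is nothing more exotic than an unchecked copy into a fixed-size buffer, as in the contrived sketch below. This is just the general shape of the bug class, not the IIS flaw itself, and anyone who can run the program can stumble onto it simply by feeding it input that is too long.

      /*
       * Contrived illustration of the classic overflow bug class - not the
       * actual IIS vulnerability. handle_field() copies caller-supplied input
       * into a fixed 64-byte buffer with no length check, so any longer input
       * smashes the stack.
       */
      #include <stdio.h>
      #include <string.h>

      static void handle_field(const char *input)
      {
          char buf[64];
          strcpy(buf, input);        /* no bounds check: this is the whole bug */
          printf("handled: %s\n", buf);
      }

      int main(int argc, char *argv[])
      {
          if (argc > 1)
              handle_field(argv[1]); /* try an argument longer than 64 bytes */
          return 0;
      }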

    It is perfectly true that, on the release of information, those who are less willing to examine the code can also find the exploits, but if they haven't the initiative to do the work themselves, they really shouldn't be considered the problem.

    The problem remains those who DO do the work themselves, who DO find, identify, and utilize exploits that are NEVER published. Those people are the real danger.

    A skript-kiddie downloading a million credit card numbers is an irritant. They just cause a million people to have to put stops out, and get fresh cards. No big deal. Cards get lost or compromised, then stopped, all the time. The problem will be spotted quickly, and the most severe damage will be to a few people's egos.

    Place a black-hat in the same situation. The exploit is unknown, unpublished and goes unfixed. Minor "glitches" go unreported and unnoticed. The few complaints are passed off as stupid customers or bank errors. By the time the enormity of what has happened is detected - assuming it ever is - the perp might well be half-way to -owning- a South American country. Chances are, though, that it'll never be detected.

    From what I have heard, US banks lose something like $150,000,000,000 a year to computer crackers. That's a LOT. Let's be honest, here. Given that kind of write-off, can they really lose any more money by being open?

    Regular companies and corporations are so big and so complex, that your average black-hat could walk in with an elephant and walk out with the payroll mainframe, without anyone noticing for several weeks.

    Attempting to keep intruders out by not telling them things is like trying to keep out a burglar by not having a front gate. What good is that going to do? If the burglar can see a door or a window, then the lack of a gate obscures exactly nothing.

    You can achieve -some- security through having guards, monitors, etc. The military works this way, and it's generally pretty effective. IMHO, though, working on the holes is even better. And you can't do that, if you're busy pretending that there aren't any there to work on.

  • by Masem ( 1171 ) on Thursday August 16, 2001 @10:39AM (#2155249)
    Another factor to add in is how UCITA (and to some extent, the DMCA) will affect the reporting of bugs. One of the key provisions in that legislation is that a company could make it against the law, via the click-through license, to 'critique' the software package without permission or to reverse-engineer protocols. Since many security bugs are typically found through one of these two activities, it may become illegal simply to assess the security of a piece of software, or at least to assess it and release the results to the world. Which is another reason, if your state is considering UCITA, to make sure to emphasize that it will hurt the security sector and could prove harmful in the long run.

  • Now even the [cr|h]ackers are anti-disclosure:

    Remember sourceforge getting cracked a couple of months back? Apparently, the guy who did it spoke to securityfocus.com [securityfocus.com] about the attack. In this article [securityfocus.com] he says:

    "i hack, dot slash or whatever you might want to call it, i do not write my own exploits, i use other people's stuff, and no im not anti-open source, i am however anti-sec. i support the anti-disclosure movement among the computer and network security communities,"

    Furthermore, the cracker said he works as a contractor in the field of security, and perhaps it is the ease of cracking so many sites using nothing but published exploits that makes him support the "anti-disclosure movement."

    Although I am personally not against full- or partial- disclosure per se, I do think the anti-disclosure movement has a valid point. There does seem to have been a huge increase in cracking activities recently, and although the script-kiddie phenomenon is at least partially due to the rise of the internet/home computer (i.e. more kids with cheap pc's in their bedrooms), I do think that the current fashion for open-disclosure means that security holes spread into the black-hat community faster than most sysadmins apply patches.

    On the other hand, if we go back to anti-disclosure, it will be like the pre-'90s situation. The white hats will know one set of holes. The black hats will know a different (far more limited) set of security holes. This scenario obviously poses a whole set of different problems.

  • Let's just phrase it this way...

    Wouldn't cars be safer if there were not public reporting of defects, if safety information went only to the car makers, so they could fix the problem?

    I realize this analogy has a flaw: car defects are passive, whereas disclosure hands attackers key information.

    So how about information about the security of your bank? For the moment, let's leave IT out of it. As a consumer, I'd like to know if a given bank is sloppy about their security, because I can vote with my money, and where I put it. A key part of a free market is an informed consumer, and withholding information from the consumer is tampering with the marketplace. (gag terms in software licenses, anyone?)

    I can certainly agree with giving the software developer advance notice, but the key word is 'advance', not exclusive or secret.

    To take it back to cars, isn't part of just about any safety-related lawsuit a gag order? Is critical information being withheld that would help me decide on a safer car to put my family in?
