


Responsible Disclosure — 16 Opinions

An anonymous reader writes, "Disclosure. Just a word, but in the security field it is the root of progress, sharing knowledge and getting bugs fixed. SecurityFocus published an interesting collection of quotes about the best disclosure processes. The article features 11 big vendors, 2 buyers of vulnerabilities, and 3 independent researchers. What emerges is a subtle picture of the way vendors and researchers differ over how much elapsed time constitutes 'responsible.' Whereas vendors ask for unlimited patience, independent researchers look for a real commitment to develop a patch in a short time. Nice read." Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."
  • What - no quote from Cisco?

  • Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."

    Well, after tens of thousands of Slashdot nerds read this, I'm sure that'll change in a few minutes. :)
    • by jc42 ( 318812 )
      Look at the page's history. It took about 7 minutes for someone to create it. Of course, it's mostly a redirect to the Full_disclosure page. That may change if a bit of discussion shows that it's worthwhile to separate the topics.
  • Wikipedia (Score:4, Funny)

    by adavies42 ( 746183 ) on Thursday September 14, 2006 @09:49AM (#16103885)
    Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."

    It does now.

    • Re:Wikipedia (Score:5, Interesting)

      by Anonymous Coward on Thursday September 14, 2006 @11:10AM (#16104653)
      Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."
      Seems like the one should just be a redirect to the other. Or, better yet, maybe "responsible disclosure" should redirect to "anonymous disclosure".

      I attempted to get [multi-billion dollar company] to fix a gaping security hole that was well-known to persons using [their product] which has support & licensing fees upwards of $300,000 per year. Their response was to tell my management that I was a loose cannon and should be fired (luckily my management told them to get stuffed, but the hole still wasn't fixed).

      So I sent [multi-billion dollar company] an email from my infant daughter's email account (yes, I create accounts for my kids when they are born, shut up) informing them that the details of their security problem would be published on the bugtraq mailing list in two weeks, and attached a copy of what would be posted.

      In less than 48 hours, I was contacted at the "postmaster" address for the email domain by [multi-billion dollar company] who informed me that we (the domain's registered in the name of a friend of mine, so there's no visible connection to me) were harboring an evil criminal hacker at [email address of my daughter] and that I needed to give them personal information about that user. I replied "oh, gee, thanks, that account belongs to a two-year old child, somebody must have hacked it, we are shutting that account off now, have a nice day".

      Three days later all customers of [multi-billion dollar company] got an urgent update that corrected the security flaw in [their product]. I never did post to bugtraq, because the point of the exercise was to get [multi-billion dollar company] to do what was best for both them and their customers, and that goal was achieved. I couldn't have made the threat, though, without the existence of anonymous full disclosure listservs.
      • Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."
        Seems like the one should just be a redirect to the other.
        Why yes, that's exactly what I did when I created the "responsible disclosure" article.
  • It really wasn't all that in-depth, but the statements from each of those companies were interesting. The one that impressed me the most was SAP, where the vendor and the researcher agree on the action to be taken at the time of disclosure.

    I do think that the ethical approach is certainly to approach the vendor first. Inform them that they have a given time to patch it, then hold them to it and release the information at the end of that time.
  • ...the decision is easy. Publish the bugs after five days. That should be enough. They've proven they can deliver patches in three days.
    • Are you sure you'd want to install anything that comes out of Microsoft with all of 5 days of coding, testing, QA, regression testing, validation, etc?

      I know I wouldn't. Give 'em 30 days at least.
  • by tygerstripes ( 832644 ) on Thursday September 14, 2006 @10:06AM (#16104032)
    I'm not particularly interested in exploits and such per se, but I found the article fascinating anyway. Sure, some of what they said was interesting - especially the researchers - but the most interesting thing was the tone of the Vendors' statements.

    Seriously, have a look. If you're at all used to reading between the lines, their statements regarding security, disclosure, etc. give you a far greater insight into their real attitudes than any marketing, reviews or horror stories ever could.

    • by nolife ( 233813 )
      Well, you are really reading a response from one person. In theory that person was hired by the company, represents it, and is giving the company line. In reality, that one person is still an individual. The people behind that person may not share the same attitude. I never assume one person can represent an entire group, even if they are paid to do so.
  • by John Fulmer ( 5840 ) on Thursday September 14, 2006 @10:06AM (#16104035)
    Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."

    It may be because 'full disclosure' has meaning in the security community, while 'responsible disclosure' does not.

    'Responsible disclosure' is just a set of general ethics and courtesies that security researchers extend to programmers/companies/entities in order to allow an orderly repair of a vulnerability. It is a function of 'full disclosure', not something in and of itself.

    Slightly related: I've read things that liken 'full disclosure' to yelling "Fire!" in a crowded theater. I tend to think of it as yelling "Fire!" in a theater made of flash paper doused in gasoline, while one of the jugglers is preparing to light his flaming torches.

    In other words, yelling 'FIRE!' is permissible if there is actually a high likelihood of fire...
    • If there is really a fire, or a likelihood of a fire, you should inform the management so they can make an announcement that doesn't set off panic, which could lead to people being trampled to death.

      In the case of security announcements, publicly disclosing a vulnerability before the vendor has been given time to get a patch out actually can cause a fire, because disclosing the vulnerability also allows anyone to create an exploit for the vulnerability.

      In essence, full disclosure isn't as bad as shouting…

      • by QuantumG ( 50515 ) <qg@biodome.org> on Thursday September 14, 2006 @10:35AM (#16104281) Homepage Journal
        Sorry, no, that's bullshit. If you wanna make stupid analogies, at least get them right. Calling "Fire!" in a crowded theatre is absolutely perfectly ok, if there is a fire. However, if you know there is a fire and know that people will, sooner or later, get burnt, going for a stroll to the front office and asking to talk to the manager, tell him there is a fire, and have him say "Yeah, we'll get to that in about 120 days, on average" is not ethical. It's not responsible. It's participating in a conspiracy that belittles the people in the theatre and hampers their ability to make a valid risk assessment.
      • If there is really a fire, or a likelihood of a fire, you should inform the management so they can make an announcement that doesn't set off panic, which could lead to people being trampled to death.

        Do that, and everyone in the theater is likely to die.

        By the time you get to management and inform them (and, in many cases, convince them there really is a fire), and they can get to their PA system, the fire will probably have spread throughout the entire theater. In case of a fire, time is extremely important.
        • Same goes for exploit disclosure. If an exploit is found, it might be okay to keep it quiet for a little while, as there is a high probability of a fire rather than an actual fire. But the longer you wait, the more likely somebody else a little less honorable will also find the exploit.

          I completely agree. Immediate disclosure does nothing but help the bad guys. Staying quiet about it too long helps the bad guys, too. The only question is, what is the proper amount of time to wait after a vulnerability is discovered?

          • Immediate disclosure does nothing but help the bad guys. Staying quiet about it too long helps the bad guys, too. The only question is, what is the proper amount of time to wait[?]

            It varies depending on the complexity of the problem that requires fixing, and the resources available for the fix. Here are a couple of concrete examples:

            You found a bug in Sendmail. You contact Eric Allman, you ask him how long it will take him to get his downstreams to push fixes through the distribution channels, and you agree…

  • by QuantumG ( 50515 ) <qg@biodome.org> on Thursday September 14, 2006 @10:09AM (#16104056) Homepage Journal
    The responsible vendor takes time to vet the problem within their own lab. They have to develop a patch, [they] Quality Control it and then publish the patch. Microsoft and Oracle average about 120 days to do this.

    So, in order to be "responsible" you have to keep the vulnerability secret for 120 days. Four months. You're kidding, right? Say I'm an independent researcher. I find this vulnerability using no special skills and publicly available tools. Clearly a highly skilled blackhat could just as easily have found the same vulnerability. Let's suppose that I've found this vulnerability in the first 2 days of a new release of the product under inspection. The blackhat could well have discovered it in the same number of days, but let's say it takes him a month longer than me, just to be generous. I'm supposed to sit on this vulnerability and let the blackhat break into systems using it for how long? 3 months? This is responsible? Wouldn't it be more responsible if I were to go public immediately? Obviously publishing tools which script kiddies can use to attack people is not a good idea, that's not what we're talking about. Surely I should at least tell people that I have found a vulnerability and that the software in question is not, in my opinion, something you should be using if you care about security. Doesn't my failure to do this just make me complicit in a conspiracy to hide the fact that people may be breaking into systems using this vulnerability?

    What if I'm an IDS manufacturer? I start getting alarms that shell code has been detected in a protocol stream that has never before seen shell code in it. Analysing the incident, I discover that there is a vulnerability in a particular daemon which these attackers are using to gain unauthorised access. Who should I inform? The vendor of that daemon? My customers? Or the general public? This is no longer a theoretical "the bad guys might know too" situation; this is a widespread pattern of attack that I have detected, indicating that real harm is being done. If I fail to inform the public immediately, am I not complicit in helping the attackers break into more computers? Doesn't sound very responsible to me.
    • On the other hand, if a security update is only days away, full disclosure of the vulnerability won't get a fix in the next update. It takes time to write and test that a fix doesn't introduce new bugs. Additionally, what if the black hats haven't found the vulnerability yet? By announcing the vulnerability immediately after discovery, you haven't helped get the fix out sooner, and worse, you've made it more likely for an exploit to be developed.

      Announcing a vulnerability immediately doesn't seem responsible.

      • by QuantumG ( 50515 )
        Two weeks? OK, fine, take your time and test it, 'cause maybe no-one is being broken into right now. Four months? No-one should be exposed for that long. It's guaranteed to be found and exploited. And what if we know people are exploiting this vulnerability right now? Does that change our response? It should. There should be a hot fix, or at least an advisory to disable the service (or the relevant portions), put out immediately. And how about releasing a signature for all those people running intrusion detection systems?
        • I agree four months seems like an excessively long time to stay quiet about a vulnerability, especially a serious one. Serious vulnerabilities should be fixed in the next security patch, unless the next one is too close for adequate testing.

          In the case of a vulnerability that is being exploited, I agree that immediate action is necessary. However, with full disclosure there's always the possibility that some, or even most, black hats don't know about the vulnerability. In that case, again, you've just made things worse.

          • by QuantumG ( 50515 )
            Yes, absolutely. But that's not the definition of "responsible disclosure" that the vendor would advocate. Because even the hint of a vulnerability without a patch available is bad for business.
            • It's not what some vendors would advocate, but it's what is currently listed in Wikipedia as the description for responsible disclosure [wikipedia.org]:

              Some believe that in the absence of any public exploits for the problem, full and public disclosure should be preceded by disclosure of the vulnerability to the vendors or authors of the system. This private advance disclosure allows the vendor time to produce a fix or workaround. This philosophy is sometimes called "responsible disclosure".

              • by QuantumG ( 50515 )
                I read that paragraph as saying that no public disclosure of the issue before disclosing to the vendor is acceptable, no matter how vague.
        • You ASSume that the problem is not being solved in a reasonable timeframe, and that public disclosure will accelerate the fix. What if that isn't the case? There are many exploits that are known to the real blackhats, which they keep to themselves and don't share. If a security researcher discovers one of these (he may even know that it is in the wild), should he immediately tell the world? Then a vulnerability that was once only exploited by a small group is now exploitable by every 1337 cr4ck3r and script kiddie out there.
          • Re: (Score:3, Informative)

            by Todd Knarr ( 15451 )

            From a study reported on in the WSJ back in January [washingtonpost.com], and elaborated on later [washingtonpost.com], Microsoft's time to patch vulnerabilities they classify as "critical" has risen 25% since 2003, to 134 days. Except, however, in the case of full-disclosure vulnerabilities, where details and almost always proof-of-concept code were released to the general public. For those vulnerabilities, the time to fix fell from 71 days in 2003 to 46 days in 2005. Based on the data, full disclosure does in fact accelerate the fix, and the problem…

            • Without knowing more about the quality of those fixes, your statistics are worse than meaningless; they are potentially misleading.
    • Obviously publishing tools which script kiddies can use to attack people is not a good idea, that's not what we're talking about. Surely I should at least tell people that I have found a vulnerability and that the software in question is not, in my opinion, something that you should be using if you care about security.

      But if the bad guys haven't found the problem at this point, they surely will after this kind of announcement. Moreover, changing running production software can be very difficult. In this…

      • by QuantumG ( 50515 )
        Are you trying to suggest that we shouldn't provide the right information to people who effectively manage their risk, just because some people are incapable of doing so? If an independent security analyst can find a vulnerability with no special tools or knowledge, then it is equally likely that a blackhat has found it. In fact, it's a lot more likely, as we suspect there are a hell of a lot more "bad guys" than there are "good guys".
    • There's the opposite side of that story, you know.

      You happen upon an easy vulnerability. A blackhat finds it in a month. You stay quiet for 4 months. Patch comes after a full year from when you find it. A single blackhat has used it for a year.

      You happen upon an easy vulnerability. You announce it to the public. Every half-assed blackhat in the world finds it and uses it for a full year before the patch comes out.
      • by QuantumG ( 50515 )
        If you announce it to the public and it still takes a year to patch, then those people who are using the product in question should not be using that product. From a security point of view they are a lost cause. That said, it really depends how you announce it. If you say "I have found a serious vulnerability in product X, I recommend people don't use it until a patch for this issue has been released" then you have in no way reduced the time it will take a blackhat to find the vulnerability and produce an exploit.
        • by Aladrin ( 926209 )
          If you just say 'I found a vulnerability in Product X', nobody will take you seriously. A few will ask 'where?', which you can't answer under your own example. Even those few will then ignore you without any proof of the problem.

          If you say 'I found a vulnerability in Product X when you do Y', even without any details, the blackhats already know where to look and the kind of things to look for. For example, if you tell me that a program has a vulnerability related to images, I'm immediately going to think…
          • by QuantumG ( 50515 )
            Meh. If people ignore your vague warning and then get burned, they'll listen to you next time. If they don't get burned then no problem.
            • by Aladrin ( 926209 )
              If my name was RMS or Linus, I could expect that, yes. But in the world of IT, I'm nobody. Anyone who heeds just anyone who screams is crazy. You have to have a way to filter out the liars, cheats, scammers, etc. And lack of proof is pretty compelling.
              • by QuantumG ( 50515 )
                Well, obviously you have to build a reputation for yourself, but if you never proclaim anything then no-one will ever believe any of your warnings.
        • by Qzukk ( 229616 )
          Perhaps, then, the most responsible thing to do is to provide a proof of concept to the software company to prove that your bug is serious, and then post publicly: "I have found a bug in foobar. By restricting network access to the whammy service to trusted systems, you can mitigate the risk of attack. A proof of concept has been provided to the company but will not be made available to the public."
    • Amen, amen. On top of that, just think of all the poor free software developers who can't sit on the information for 120 days and have to fix it ASAP, because the only way to disclose the bug is to post to a ::gasp:: public mailing list! Yes, it sucks that people don't patch their systems; yes, it is terrible that people can use publicly disclosed information to attack systems; but this increased level of secrecy only gives you the illusion of security. I'm probably not the only person on /. who has had s…
    • Obviously publishing tools which script kiddies can use to attack people is not a good idea, that's not what we're talking about. Surely I should at least tell people that I have found a vulnerability and that the software in question is not, in my opinion, something that you should be using if you care about security.

      I don't think a hard and fast 120-day rule makes sense, but I think a researcher should look at the characteristics of the vulnerability before deciding what is responsible, as well as the…

    • The compromise that makes the most sense to me is RFPolicy [wiretrip.net]. Put simply, this provides a 5-day contact period and requires the vendor to keep the reporter notified of the status of the fix. Time to actual disclosure is then based on how cooperative the vendor is being. This (in theory) ensures a fix in a reasonable time frame from the point of view of the reporter, while suggesting that disclosure of the vulnerability be held back as appropriate in order to do a proper fix, and giving good timelines…
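The initial gate of the RFPolicy scheme described above is mechanical enough to sketch. This is a rough model only: the function and variable names are ours, and the real policy has more states (including ongoing status-update requirements) that this does not capture.

```python
from datetime import date, timedelta

CONTACT_WINDOW = timedelta(days=5)  # RFPolicy's initial contact period

def may_disclose(reported_on, vendor_ack_on, today):
    """Rough sketch of RFPolicy's first gate: if the vendor never
    acknowledges the report within the 5-day contact window, the
    reporter is free to disclose. If the vendor does respond,
    disclosure timing is governed by the ongoing status updates
    the policy requires, which are not modeled here."""
    if vendor_ack_on is None:
        return today >= reported_on + CONTACT_WINDOW
    return False  # negotiation continues; outside this sketch

print(may_disclose(date(2006, 9, 14), None, date(2006, 9, 19)))               # True: five days of silence
print(may_disclose(date(2006, 9, 14), date(2006, 9, 15), date(2006, 12, 1)))  # False: vendor engaged
```

The point of the 5-day gate is that the clock starts on vendor responsiveness, not on the fix itself, which is what distinguishes this from a flat embargo period.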
  • If I were Microsoft (Score:5, Interesting)

    by Lord Ender ( 156273 ) on Thursday September 14, 2006 @10:10AM (#16104062) Homepage
    If I were deciding policy for MS or any other big vendor, I would publish a "hush money" policy on security vulnerabilities.

    Basically, it would go like this:

    "If you discover a vulnerability and report it only to us, when we eventually release the patch, we will give you credit for discovering it (what researchers really want), and we will give you $10,000. If you report it to anyone else before we release the patch, you will get no money and no credit."
    • Re: (Score:2, Interesting)

      by QuantumG ( 50515 )
      I happen to know some people who have exactly that relationship with Microsoft... except for the whole credit part... they sell the rights to that along with their soul. Of course, seeing as I'm not about to give up the names of these people, you'll just have to take my word for it (or call me a liar, whichever you prefer). Microsoft doesn't make this policy public because it is out-and-out unethical. People have a right to know the risk of running Microsoft software, but so many security flaws are fixed i…
    • by SensitiveMale ( 155605 ) on Thursday September 14, 2006 @10:26AM (#16104201)
      "If you discover a vulnerability and report it only to us, when we eventually release the patch, we will give you credit for discovering it (what researchers really want), and we will give you $10,000."

      $10,000 per bug would bankrupt Microsoft.
    • by Faylone ( 880739 )
      With all the vulnerabilities they have, couldn't somebody find enough to leave them bankrupt?
    • Instead of a flat $10,000, why not $1,000 a day with the first five days free? That way, if the problem is fixed in 5 days you don't get any money, but you still get the credit you wanted. If Microsoft decides it takes 120 days, then they pay $115,000.

      If $1,000 a day seems a little high, I would agree to a multi-tiered pay scale based on severity, with the clause that if I later find a way to use the same vulnerability to do worse than what I came up with, it would retroactively move me up the scale.
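The escalating payout proposed above is simple arithmetic; a minimal sketch (the `bounty` function and its parameters are ours, illustrating the commenter's hypothetical scheme, not any vendor's real program):

```python
def bounty(days_to_patch, daily_rate=1000, free_days=5):
    """Escalating 'hush money' payout per the suggestion above: the
    first `free_days` days are free (the researcher gets credit only),
    then the vendor owes `daily_rate` dollars for every additional day
    until the patch ships. Purely hypothetical."""
    return max(0, days_to_patch - free_days) * daily_rate

print(bounty(5))    # patched within the free window: $0, credit only
print(bounty(120))  # 115 billable days at $1,000/day: $115,000
```

Note the incentive structure this creates: unlike a flat fee, the vendor's cost grows linearly with its own delay, which is the commenter's point.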
    • If you found a security vulnerability in Windows, why would you trust Microsoft?

      In your example, Microsoft has a $10,000 incentive to NEVER release a patch or give you credit for discovering it.

      Will MS claim that the vulnerability was discovered in-house DAYS before you told them of it? What happens if you tell MS about the vulnerability and another researcher publishes it while you have been patiently waiting several months for a patch and your check? If you tell MS about the vulnerability…
      • Did you see the word "first" in there? The suggestion is that anyone who independently discovers and reports the vulnerability before the patch is released gets paid. That gives MS motivation to patch more quickly.

        And if they decide to never patch, there is nothing to stop the researcher from publishing it 0-day, anyway.

        But I didn't say this was what is best for everyone. I said this would be a good one for MS, because they would get all the time they need to fix the problems, and encourage people to come to them first.
        • by Secrity ( 742221 )
          To me, "we will give you credit for discovering it" implied "first". Only the FIRST person to discover something can say that they discovered it. There isn't a whole lot of use in being simply one of the people to discover a vulnerability. I agree that this would be the sort of program that MS would love to have, especially because they hold all of the cards, and they are the dealer -- until one of the players gets tired of playing Microsoft's game.

          Researcher A discovers the vulnerability first and reports…
          • That's true. But since a lot of these things are discovered by researchers in countries with failed economies (like the former USSR), US$10k would be worth keeping quiet about.

            And I am sure avoiding a 0-day exploit is worth more than $10k to MS.
    • Check this out: zerodayinitiative [zerodayinitiative.com]

      It's actually better than the parent's proposal, because you're not directly dependent on the company whose software you've exploited.
  • I think everybody agrees that the first thing to do is to notify the vendor. Next, work on a fix has to be started. The question is, what comes after that?

    Should users be notified ASAP, so that they are aware of the issue? There is something to be said for this. After all, if I found a vulnerability, somebody else may have found it, too. The sooner users know of the risk, the sooner they can take steps to reduce it. On the other hand, once you notify users, you can be sure the black hats know of the vulnerability.
    • Re: (Score:3, Insightful)

      Well, there may be a middle ground between full disclosure and no disclosure. In certain situations you might be able to just disclose the danger and how to avoid it, without actually disclosing enough details for black hats to exploit it (although it of course gives them a hint where to search).

      For example: "If you don't absolutely need it, switch off functionality X in product Y. I've found a serious vulnerability in Y which is only effective if the option for X is set. An attacker might take control over your machine…"
      • This is exactly what the people who discovered the alleged Apple AirPort exploit did ("We can pwn this MacBook with one click. Stop using WiFi on Apple machines"), and look what it got them. Without a real, independently verified exploit to show the world, they've been accused of rigging the demo and engaging in OS fanboyism.

        If they had printed the exploit code on a T-shirt and handed it out at BlackHat, either the driver would be fixed or those specious accusations would be debunked by now. Instead, it'…
    • The question is, what comes after that?

      Exactly. Though there is no clear-cut answer here.
      It depends largely on the character of the exploit, I'd suggest. If you can stop it at the perimeter firewall, at a non-standard port, tell the sysadmins to close that bloody port for security reasons.
      If it can DoS your DNS by sending a specially crafted request, I'd suggest leaving it unpublished for a reasonable time, so as not to invite the kids to find out what it is and DoS half of the DNS servers for fun.

  • I think we (meaning current and future users) need more information. More information on what sorts of vulnerabilities are most commonly exploited, and the consequences of these exploits. Information about what sorts of vulnerabilities are most commonly found. Information about what we, independently from the vendors, can do to protect ourselves. And, importantly, about the security track record of vendors: which vendors get the most and most severe vulnerabilities reported against them, and how do they deal with them?
    • "what if, for example, there are many more people looking for issues in MSIE than in Firefox?"

      "Firefox might seem more secure, while actually MSIE is the more secure of the two!"

      "Or what if Microsoft hires some brilliant minds to find holes in Ubuntu"

      "whereas the people examining Windows have a hard time because (1) they don't have source code"

      If I can rephrase that... MSIE is more secure because people don't have access to the source, and fewer people are looking for 'issues' in Firefox. Even if the above were true, it defies logic that a browser is more or less secure because of the number of people examining it.
      • by Tim C ( 15259 )
        Even if the above were true it defies logic that a browser is more/less secure because of the number of people examining it.

        "Many eyes make shallow bugs" is the relevant quote - the idea is that with more people looking at something, you have a greater chance of spotting flaws.

        A bug in MSIE leads to the whole computer being compromised.

        Only if you run as admin, which admittedly is depressingly common in the Windows world.
  • that is, if you read TFA, of course. The average response (except from IBM and Microsoft) is: oh, tell us, here is the e-mail address, we will do this, this and that, make a patch, and then disclose the information. IBM is like: oh, tell us, we'll put it in the database and try to find a resolution. Microsoft says: tell us, euhm... yeah, that's it, don't go tell anyone else, just us.

    I mean, at least describe what an average process looks like, possible timeframes, etc.
  • Blunt and honest. "Responsible disclosure" sounds like a new buzzword to suggest that full disclosure is irresponsible. It isn't. The only reasonable and sensible way to inform people about a bug is by providing all the information necessary to

    see it
    identify it
    reproduce it
    fix it

    Yes, I may not be able to fix a bug in Windows. But with full disclosure I can reproduce it and find a stopgap that provides a temporary fix 'til MS comes out of their rears.

    If you only provide limited information, I cannot test my workaround against…
  • Wouldn't knowing about a security exploit, and failing to disclose that information to your customers so they could mitigate any resulting damage, make a company liable for any damage done to their systems during the "grace period"?

    I mean, if I buy something, say a car, and the manufacturer knows about a defect, I can sue them for any damages that occur as a result of their design flaw. Companies perform recalls because the cost of such suits exceeds the cost of replacing the goods in question.
  • Good morning, Mr. Deadhorse. I'm here to beat you.
  • and after careful consideration of the issues, I have concluded that the appropriate amount of time to wait before going public is 10.853 days.
