Security

Schneier On Full Disclosure 232

Bruce let me know that he's written a piece on ZDNet about the problems of not following full disclosure (the original home of the Window of Exposure idea is on Counterpane). Very well written, and it does a great job of summarizing why full disclosure works. The original piece from Culp @ Microsoft is also available, along with the PowerPoint presentation they gave.
This discussion has been archived. No new comments can be posted.


  • Remember! (Score:3, Funny)

    by athakur999 ( 44340 ) on Tuesday November 13, 2001 @05:59PM (#2560508) Journal
    Full disclosure may be good, but full exposure will get you thrown in jail!
  • This could be the beginning of the end for MS. Since Full Disclosure is obviously the only way to go, and seeing as MS's software is pretty buggy and not very secure (especially out of the box), they are proving to the world that they don't want people to know just how buggy their software is.
    • but by the same token, releasing information about a vulnerability is admitting that your application is flawed. This also harms the reputation of your product among some user groups. With Windows XP Microsoft has conclusively proven that their target market is People Who Don't Know What A Mouse Is; these are the same people who would react most negatively to MS security alerts.
    • Full disclosure is the way?
      I know this, you know this; the marketing team of Microsoft (or any other software, hardware, car, screws, whatever vendor) doesn't. Admitting a vulnerability is admitting a flaw.
      "If a product is on sale, it has no flaws" is what marketers repeat to themselves like a mantra; it doesn't matter if said product doesn't even work.
  • would you extend these arguments to support it in non-virtual security? Should the CIA and other international organizations use full exposure? Should they publish something titled, "This is the vulnerability of our Nuclear Piles"? "This is where you can cross the border undetected", "This is how to make a Fake ID?"
    • by sphealey ( 2855 ) on Tuesday November 13, 2001 @06:15PM (#2560591)
      would you extend these arguments to support it in non-virtual security? Should the CIA and other international organizations use full exposure? Should they publish something titled, "This is the vulnerability of our Nuclear Piles"?
      Unfortunately, it isn't that simple. Read the history of the Manhattan Project. The FBI actually succeeded in its goal of not allowing a single leak of information out of the project [1]. It was the lack of published information on atomic research in the US in 1940 and 1941 that told Kurchatov that something was "up" and motivated him to write a letter to Stalin suggesting that the Soviet Union get moving on atomic bomb research.

      So just hiding information doesn't necessarily make you more secure.

      sPh

      [1] OK, the Soviet Union had spies inside the project before it started, but that doesn't count!

      • OK, the Soviet Union had spies inside the project before it started, but that doesn't count!

        How does that not count? In fact, how does that not discredit the notion that the lack of information clued the Soviets to the existence of a cover-up?

        More to the point, who is going to assume that their software is insecure based on the lack of security updates? I'm not sure that Cold-War paranoia translates to the consumer software market so readily.
      • I've heard reports that one of the things that raised questions was "Where did all the silver go," but while it's clear that it was used I haven't found any notes about what impact (if any) this might have had on market prices.

        Copper was being used elsewhere in the war effort, so:

        At one point during the Manhattan Project, they needed a lot of copper. They were going to build plants in Utah to manufacture uranium and needed an estimated 10,000 to 15,000 metric tons of copper. Unfortunately, due to other war requirements, this much copper was not available. Someone suggested that the Manhattan Project go to the United States Treasury and ask for silver. Which they did.
        and
        For the record we should note two things about our story. First, the Manhattan Project eventually used somewhere around 13,000 metric tons of silver. A current valuation would be about $6,000,000,000. Second, they gave it all back.
        Swiped from http://members.aol.com/fmcguff/dwmodel/intro.htm [aol.com]

        • Someone suggested that the Manhattan Project go to the United States Treasury and ask for silver.

          Of course, this was before somebody suggested using Uranium and Plutonium. They gave the silver back because it wouldn't blow up. Uranium makes really lousy money, on the other hand. It has a good weight, and it's a bit warm to the touch, giving it a nice feel in your hands. But it tended to cause tumors on the upper thigh, right where trouser pockets are. So for the Treasury and the War Department, it was what you'd call a "win-win situation".
      • Unfortunately, it isn't that simple. Read the history of the Manhattan Project. The FBI actually succeeded in its goal of not allowing a single leak of information out of the project [1].

        You're kidding, right? Anyone who's read Feynman's book [fatbrain.com] on the subject would know that the security was a joke. Fences with holes in them, inattentive guards, insecure safes, and poor whistleblowing policies were all part of the Manhattan Project's "security". Secondly, the security was handled by the military, not the FBI.

        It was the lack of published information on atomic research in the US in 1940 and 1941 that told Kurchatov that something was "up"

        Neat trick, since the Manhattan Project started in 1942. The absence of public information did tip off Kurchatov, but keeping your people from publishing in journals isn't hard. It's keeping spies from passing secrets to a foreign agent outside a diner 50 miles from the secure facility that presents a problem.

        [1] OK, the Soviet Union had spies inside the project before it started, but that doesn't count!

        David Greenglass, the mole who provided many of the secrets the Russians obtained from the Manhattan Project (and who served as a prosecution witness against the Rosenbergs), wasn't assigned to the project until 1944. There were of course other spies, and infiltrating before a project starts most definitely does count, but I felt like going after the factual error.

        • Neat trick, since the Manhattan Project started in 1942.
          The story is, as you indicate, much more complex, with dozens of people, places, dates, nation-states, and motivations involved. I agree my summary isn't the best possible, but as I have noted before Slashdot is a discussion forum, not a Master's program in history.

          However, you do seem to be forgetting the Tube Alloys project and Klaus Fuchs, who was involved from 1938 and was one of the first from the British team to transfer information to the US. And to the Soviet Union as well, although that wasn't known at the time.

          Similarly, research on military applications of fission, and attempts to suppress knowledge of that research, occurred before the Manhattan Project was officially started (which actually happened pretty late in the game).

          sPh

    • by Anonymous Coward
      These things:
      1. This is the vulnerability of our Nuclear Piles
      2. This is where you can cross the border undetected
      3. This is how to make a Fake ID
      Should be told to people who are responsible for the security and administration of nuclear piles, border crossings, and fake IDs. In the computer world, people responsible for security and administration of their computers should be told of the problems.

      (ie some large part of the computer using world uses windows so full disclosure is good in that situation)
    • Should they publish something titled,
      "This is the vulnerability of our Nuclear Piles"?


      If there is a nuclear pile on the desktop of every home, then yes.

      "This is where you can cross the border undetected",

      If there is a border on the desktop of every home, then yes.

      "This is how to make a Fake ID?"

      If photo ID's are checked to allow access to the desktop of every home, then yes.

      Hope this answers your question.
    • by jmauro ( 32523 ) on Tuesday November 13, 2001 @06:28PM (#2560655)
      This is the vulnerability of our Nuclear Piles [animatedsoftware.com]

      This is where you can cross the border undetected [house.gov]

      This is how to make a Fake ID? [counterfeitlibrary.com]

      Well maybe I didn't say every single tiny little syllable but basically I said em, basicly.
      • From the second link:
        There is also the issue of terrorism. As a member of the Subcommittee on Crime, international terrorism is something that I have been working on for quite some time. We have had to deal with the bombings of the World Trade Center and let us not forget one of the most devastating and heinous acts of terrorism in American history which was perpetrated by Americans, the bombing of the Federal building in Oklahoma City.
        It's all relative.
    • In the case of national security, the government has strong motivations to fix any security leak they find. As Bruce Schneier has pointed out in the past, commercial software isn't held to the same high standards... although we're entering an era where perhaps it should be, at least in part.
    • if and when I have a nuclear stockpile installed in my backyard I'd certainly want the CIA to notify me of any vulnerabilities.

      But your analogy is seriously flawed. Governments, like all bureaucracies, strive first and foremost to avoid bad publicity and/or responsibility for their actions. That's why openness, accountability, and yes -- full disclosure -- are important. There is always a gray area in terms of giving the relevant corporation/agency advance notice, and some limited exceptions for national security.

      But you need not worry about the balance tilting too far. The CIA might publish a guidebook on torture, but it wouldn't publish a guide on getting a fake ID/passport. Hence it's so rare for teenagers or illegal aliens to get any fake documents at all.

    • The CIA and such are, in this case, in the position of the vendors: it is their responsibility to fix the vulnerabilities.

      The disclosure should be done by the people who identify the vulnerabilities. If you know where you can cross a border undetected, you ought to let someone know. Particularly in that case, the hole would probably get closed pretty quickly. And if some random person notices a hole, it would be pretty easy for someone actually looking for a vulnerability to find it.

      For example, if in August (or before) someone had said to the general public something like, "You can probably hijack an airplane with legal objects and then destroy a building with it", the passengers wouldn't have let the hijacking get anywhere, and the hijackers probably wouldn't have tried. There's obviously the risk that some groups that wouldn't have thought of it would get the idea, but it would have gotten fixed in policy before anyone could do anything to exploit it.
      • For example, if in August (or before) someone had said to the general public something like, "You can probably hijack an airplane with legal objects and then destroy a building with it", the passengers wouldn't have let the hijacking get anywhere, and the hijackers probably wouldn't have tried.

        Good luck getting the desired results. Even after the fact people are still complaining about how the increase in airport security is mostly cosmetic. Not to mention the fact that if you are overheard even mentioning the word "bomb" in an airport, you are likely to be detained for a while. (This was true even before recent events...)

        The point is, people are always coming up with ideas, but the policy makers, and the people in charge simply don't have the desire, resources, or whatever to act on very many of them. How does suggesting a possible vulnerability in airport security motivate the responsible person or persons to actually implement a change?

    • would you extend these arguments to support it in non-virtual security?

      Yup.

      Should the CIA and other international organizations use full exposure? Should they publish something titled, "This is the vulnerability of our Nuclear Piles"? "This is where you can cross the border undetected", "This is how to make a Fake ID?"

      That's not quite the same. I no more expect the CIA to use full disclosure than Microsoft. Full disclosure is about third parties pointing out problems.

      A better analogy would be: "Should anyone who wants to be able to publish things like the Guide to Lock Picking [lysator.liu.se]?" Sure enough, you can find works on picking locks, defeating car and home alarms, hotwiring cars, making fake IDs, and a host of other real-world security issues. And these works are good things. Individuals affected by these risks can use this information to make their own judgements on how to protect themselves.

    • Full disclosure is meant to help increase security in dynamically changing and (supposedly) supported software.

      You will note, if you read the article, that this is probably the only time where "bug secrecy" is necessary: it is extremely bad to publish a bug for non-fixable systems (like air traffic control computers). It is good in one sense that the exploit is known (so that they avoid it the next time), but it is bad to let it loose if the system is still deployed, cannot be changed, and isn't going away soon.

      So to continue the analogy, it isn't good to disclose vulnerabilities of nuclear stockpiles because you can't fix them.
    • Sure. I would argue that every nuclear power plant owner should be advised of any vulnerabilities, just as every computer owner should. In fact, I'm sure this already happens.

      Telling "how to make a Fake ID" is very hard to distinguish from information that does get passed out about what the current best crop of fake IDs and counterfeit currency is.
    • would you extend these arguments to support it in non-virtual security?

      Depends on the circumstances. See below.

      Should the CIA and other international organizations use full exposure? Should they publish something titled, "This is the vulnerability of our Nuclear Piles"? "This is where you can cross the border undetected", "This is how to make a Fake ID?"

      To the general public? That would serve no beneficial purpose whatsoever. To qualified people or professionals who may be able to help with the problem at hand and/or counter the exploit? Youbetcherass they should. If they refuse to fix any of the problems discovered in a reasonable amount of time, the whistle should be blown on them in full. The problem comes with the "Qualified Professional" part. IMHO, Culp does have a point (and Schneier seems to agree with me) that dangerous tools need to be kept out of hands that can and will do damage as much as possible. Would you just give a loaded gun to an angry child? (Turnabout is fair play, dude.)

      Some sort of professional org should be set up that distributes PGP keys (or some other security system) only to people who show they have the qualifications and need to access exploit and/or exploitable code. Then tools could be written that are only sent via secure, encrypted channels to those with the right keys, and hopefully kept out of the hands of script kiddies. (A rough sketch of this idea follows at the end of this comment.)

      And before you go off singling out and bashing Microsoft yet again, remember all systems [slashdot.org] can have potentially dangerous and destructive security flaws. We need to do this as an industry, including everyone and anyone - even those in the industry [microsoft.com] we, ummm, have a few problems with.

      Soko
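
A rough sketch of the idea above, using the python-gnupg wrapper. Everything specific here is an assumption for illustration: the advisory filename, the recipient key IDs, and the keyring location are made up, and the hard part (vetting people and distributing keys in the first place) is not addressed.

```python
# Sketch: encrypt an advisory so that only vetted key holders can read it.
# Assumes the python-gnupg package is installed and the recipients' public
# keys have already been imported into the local keyring.
import os
import gnupg

gpg = gnupg.GPG(gnupghome=os.path.expanduser("~/.gnupg"))

# Hypothetical key IDs of vetted security professionals.
vetted_recipients = ["researcher@example.org", "vendor-psirt@example.com"]

with open("advisory.txt", "rb") as f:
    result = gpg.encrypt_file(
        f,
        recipients=vetted_recipients,
        output="advisory.txt.gpg",
        always_trust=True,  # skip trust checks for this illustration only
    )

if not result.ok:
    raise RuntimeError("encryption failed: " + result.status)
print("Encrypted advisory written to advisory.txt.gpg")
```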
    • You are in luck (Score:3, Insightful)

      by Erris ( 531066 )
      Should the CIA and other international organizations use full exposure? Should they publish something titled, "This is the vulnerability of our Nuclear Piles"? "This is where you can cross the border undetected", "This is how to make a Fake ID?"

      Wow, what a troll. The CIA being an "international organization" is a dead giveaway. The other is the fantastic false analogy between buggy PC software and nuclear bombs. No organization currently mass produces nuclear weapons for daily use on every desktop. No one here would recommend such things.

      At the same time, some countries, like the USA, recognize that free thought is needed for scientific development and that full disclosure and broad education are in the public interest. While the particular technical details of how to build bombs are kept secret, the physical principles are trumpeted and encouraged. Indeed, public debate on principles is encouraged, as free discourse leads to knowledge. "Freedom is the ability to say two plus two is four; all else follows," said George Orwell's sad character in 1984. While the Department of Energy and their employees might not tell us details, they will not keep you or me from talking about it. With sufficient study at any good US university, a person can learn all they need to know about bomb design. Knowledge is not yet viewed as evil. The truth will set you free, and only the free can be sure they know the truth.

      M$, Adobe, RIAA, MPAA and other private interests are going a step further than cold warriors with their "information anarchy" campaign. Such blatant censorship is un-American and against the public interest. They will be defeated in the long run, as will trolls like you.

    • If the result is them fixing the vulnerabilities, Hell Yeah!! The whole point of full disclosure is to put pressure on the vendors to correct a mistake in programming, not to give hackers a hand in breaking into your system. But without full disclosure, a vulnerability becomes something that can be put off till later. Imagine how bad Code Red and Nimda might have been had they come out and Microsoft hadn't had a patch out because they decided there wasn't the pressure to fix the problem. Sticking your head in the sand won't make the problem go away, but it will make Microsoft happy.

      OT: I saw an ad attached to the article say "When you're thinking Microsoft Windows XP, think AMD Athlon XP." Kinda makes me want to buy an Intel.
    • This reminds me of a story Richard Feynman tells in Surely You're Joking, Mr. Feynman about when he was working at Los Alamos. It was a sensitive project, and there were security flaws: insecure locks, people not locking up their research, a hole in the fence. When he pointed them out, he was generally ignored and told to get back to work. So he took to pointing them out in funny, difficult-to-ignore ways (retrieving people's notes for them when they were at meetings, walking circles around the guards by going out the approved, gated way and coming back in through a hole in the fence), until something was done about them.

      Point is, people responsible for security don't like being told they've made a mistake, and sometimes you've got to make sure they can't just tell you to sit down and shut up, whether in the real or virtual world.

      Companies like MS want to keep security issues out of the public eye because it's cheaper and easier to sell the public on features than it is on security. Their motivation is marketability and sales. So if secure software is important, we've got to make sure security issues have lots of exposure. It's the only way to motivate them.

    • I suggest that you look at this from a different angle. The CIA would equate to MS. If MS finds a vulnerability they don't tell anyone. We hope MS will fix the problem in the next release. I hope the government fixes any vulnerabilities they find (though there have been several reports to the contrary).

      On the other hand, if a reporter discovers some huge security flaw, should they be allowed to report it? An ethical reporter would notify the agency in charge before publishing. This would give the agency a head start to fix the problem. Just like most people who find security flaws contact the vendor before announcing the bug (unless it's found "in the wild", as in crackers are already using the exploit).

      There are certain cases where a reporter probably should sit on the story, but most likely if a reporter can find out, so can the "bad guys". It's probably far better for the government to fess up and fix the problem (or be aware that the problem exists).

      The former director of the Dept. of Transportation kept trying to get security tightened at airports before eventually resigning over the issue. No one wanted to spend the $$ to increase Airport security. In this case disclosure didn't help.

      There are probably dozens of cases like this. If it's hard to get things fixed when the problem is published, think how hard it is to get them fixed if no one knows.
    • If some joker is sneaking knives and guns onto an airplane, I sure as hell DO want to know about it BEFORE I get onto the plane that the terrorist sneaks the gun or knife onto.

      If the airport security company is not doing its job, I want to fucking know about it, and I want to know exactly what they're going to do to fix that prior to me ever setting my ass down in an airplane seat again.

      It's about security, which flows from trust, which flows from accountability. Nobody got fired after September 11th. I think that's a big fucking problem. Did anybody at Microsoft get fired after CodeRed? That's also a BIG fucking problem.
    • ... your job is to look after the Nuclear stockpile, if you are a border guard, or if you have to check passports as part of your job.

      As much as you OY YAY FREE SPEECH YAY proponents would like to babble on, it doesn't matter about these other things. It isn't your damn business.

      I run a computer, yes, I need computer security information. But no, I am not a border guard.

  • by Phydoux ( 137697 ) on Tuesday November 13, 2001 @06:06PM (#2560551)
    Everybody seems to like "Full Disclosure," so here at Microsoft, we've decided to begin releasing all security vulnerabilities under a "Shared Disclosure" policy. Once the various NDAs are signed, you too can view and work with any security vulnerabilities that we know about.

    Just another example of how Microsoft listens to and responds to customer requests. Have a nice day!
  • "Culp compares the practice of publishing vulnerabilities to shouting "Fire" in a crowded movie theater. What he forgets is that there actually is a fire, the vulnerabilities exist regardless. Blaming the person who disclosed the vulnerability is like imprisoning the person who first saw the flames."
    • by squidfood ( 149212 ) on Tuesday November 13, 2001 @06:17PM (#2560601)
      When you see a fire in a crowded theatre, you:

      (A) Shout "FIRE!" and get crushed in the panic.
      (B) Walk out quietly...who cares about anyone else?
      (C) Tell your closest neighbor and hope that they're a fireman.
      (D) Pour on gasoline so everyone will get out faster.
    • Of course, if no one looked at the flames they really don't exist anyway.

      However, if you resemble a human being (much like myself) you can't help but watch the pretty flames burn...
    • Why does this have a +5 Insightful? The author just took a quote from the article. He wrote nothing original. If SlashCode allowed you to moderate the article, then it should have gotten the +5. This comment should have gotten a -1 Redundant (with the article).
    • The "fire" quote is really taken out of context.

      In the article, the quote serves as reminder that there are times when free speech needs to be curtailed. He is not suggesting it as a metaphor for the entire situation.

      The article is riddled with this sort of straw man fallacy.
    • by kingdon ( 220100 ) on Tuesday November 13, 2001 @07:06PM (#2560769) Homepage

      The argument that you can't just shout "fire" in a crowded theater entered the law in Schenck v. United States [findlaw.com], 249 U.S. 47, 52 (1919). This was a Supreme Court case concerning whether the government may suppress pamphlets encouraging people to resist the draft. Although I think that case may have been correctly decided (with the distinction being expressing opposition to the draft versus encouraging people to violate the draft law), I wonder if the Court realized they were treading on, or near thin ice, when they used the "Fire" analogy.

      So it is with people who use the analogy today. Whenever someone starts comparing some kind of speech to shouting "Fire" in a crowded theater, don't get carried away by the emotional appeal, but keep an eye on your rights, lest someone try to make off with them.

    • [In response to Microsoft's call for security-through-obscurity. Original is an LWN letter [lwn.net]]

      > By analogy, this isn't a call for
      > people for give up freedom of speech;
      > only that they stop yelling fire in
      > a crowded movie house.

      Another wonderful analogy!

      Security professionals have been yelling "fire" in crowded movie houses for years. Most of the actual patrons fail to pay any attention, despite the fact that the seats are made of explosively flammable materials, the management allows patrons to smoke cigarettes in the theatre, and occasionally the movie is interrupted by ushers dousing patrons with fire hoses if they are noticeably ablaze. Patrons who do catch fire are not offered a refund, nor a credit for those parts of the movie that they miss, nor even so much as an apology.

      --- Zygo Blaxell (zblaxell, feedme.hungrycats.org)

  • Grace Period (Score:5, Interesting)

    by Exmet Paff Daxx ( 535601 ) on Tuesday November 13, 2001 @06:11PM (#2560575) Homepage Journal
    From the powerpoint slide:

    Grace Period
    Purpose: Give users a reasonable interval during which to protect their systems against newly reported vulnerabilities
    - Begins with public notice of vulnerability, and lasts for 30 days
    - Is immediately curtailed if vulnerability becomes actively exploited


    Do I read this correctly? Does this mean that when an exploit is shown to exist in the wild, they immediately switch to "full disclosure" mode? This means that there is now an incentive to put an exploit in the wild: it means you can publish your work. Even if you leak the exploit surreptitiously.

    I know I must be preaching to the choir here, but this seems exceedingly stupid. Am I missing something? (A sketch of the stated rule follows below.)
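
A minimal sketch of the grace-period rule as quoted from the slide, using only the 30-day figure and the "immediately curtailed" clause. It makes the objection above concrete: flipping the exploited flag moves the allowed disclosure date up to today. The function and its names are illustrative only, not anything Microsoft proposed.

```python
# Sketch of the grace-period rule quoted from the slide: full details may be
# published 30 days after public notice of the vulnerability, or immediately
# if the vulnerability is being actively exploited.
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=30)

def earliest_disclosure(public_notice: date, actively_exploited: bool, today: date) -> date:
    """Earliest date full details may be published under the stated policy."""
    if actively_exploited:
        return today  # the grace period is "immediately curtailed"
    return public_notice + GRACE_PERIOD

# The incentive problem: quietly leaking a working exploit flips the flag
# and lets the finder publish at once instead of waiting out the 30 days.
notice = date(2001, 11, 13)
print(earliest_disclosure(notice, actively_exploited=False, today=date(2001, 11, 20)))  # 2001-12-13
print(earliest_disclosure(notice, actively_exploited=True, today=date(2001, 11, 20)))   # 2001-11-20
```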
    • Re:Grace Period (Score:3, Insightful)

      by nebby ( 11637 )
      Well, not really.

      If you're a responsible researcher who discovered the exploit, your work will eventually be published upon the release of a patch.

      The reason, I'd assume, that "full disclosure" mode is enacted upon seeing the exploit out in the wild is to put some fire under the ass of those responsible to get a patch out. It heightens the level of urgency. I think this makes sense actually, since in most cases a patch will (theoretically) be released during the grace period, before the exploit is actually seen in the wild.

      I was actually going to propose a grace period as a "solution" to the problem, before I realized Microsoft was pushing for a grace period. I'm not fond of the month-long period though; I'd expect it to be more like a week and a half to two weeks. Having hackable boxes sitting open for a month when someone out there knows how to get into them is irresponsible. Giving manufacturers two weeks to get themselves together before the script kiddies come full on, though, seems like a good idea to me.
      • Re:Grace Period (Score:3, Insightful)

        by elmegil ( 12001 )
        Some companies' qualification time takes longer than two weeks. Unless you think unqualified patches are a good idea, giving them time to make the process work is not a bad idea. As it is, 30 days is a hard acceleration of most patch qualification times.
    • Re:Grace Period (Score:3, Insightful)

      by illusion_2K ( 187951 )

      No. It means that if there is a known exploit in the wild then it is legitimate to post information about the vulnerability that it pertains to.

      Let's say for a second that I'm a network administrator (which I have been) or in a related position. Would I want to know about how someone will be able to break into my network or servers? You bet I would. What if it was possible to avoid being affected by the exploit by changing default settings or shutting down services temporarily? I think whatever inconvenience that might cause would be outweighed by keeping my network secure.

      Obviously you haven't had to deal with this sort of stuff before. I'd suggest you do a quick search through the Bugtraq archives [securityfocus.com] for informed discussions on vulnerability disclosure. In the information security world it's a topic which has (almost) been flogged to death.

    • Obviously if your goal is simply to get "First Post" on the exploit, then you aren't going to be concerned with following Culp's security protocols.

      If you do want to follow his plan (which is a good starting point, if not perfect), it's fairly clear what his intent is.

      Your article points out a poor paradox called "False Start". Basically, a runner charged with starting early claims that obviously the race had started - there was already somebody running.
    • Re:Grace Period (Score:3, Interesting)

      by morcheeba ( 260908 )
      Is immediately curtailed if vulnerability becomes actively exploited

      How exactly do they know if the vulnerability has been exploited? A box owner may not realize they've been exploited, and even then may not know the exact exploit used. What are the chances of this information getting back to microsoft before boxes #2-#200,000 are exploited?

      Second, think of the attitude this takes towards customers: they won't give full disclosure until one of their customers is compromised? Sounds like a hostage situation to me.

      And, for the obligatory "if Microsoft was a car company" comparison:

      Partial disclosure: "One of the 4 seatbelts in your car can fail. Don't worry, there is a 75% chance that it's not the seat you're sitting in."
      Full disclosure: "Don't sit in the rear passenger seat until you get the belt replaced."

      Would you like your car company to not give full disclosure for 30 days, or until someone died?
  • Oh, does this mean the software vendors will establish some *real* Quality Assurance in their development process and produce software without bugs?? :*)

    blurring out...
  • by JMZero ( 449047 ) on Tuesday November 13, 2001 @06:29PM (#2560666) Homepage
    Culp makes a lot more sense than he's given credit for, and a lot of his points have been taken out of context. The procedure he outlines seems very reasonable to me:

    "Most of the security community already follows common-sense rules that ensure that security vulnerabilities are handled appropriately. When they find a security vulnerability, they inform the vendor and work with it while the patch is being developed. When the patch is complete, they publish information discussing what products are affected by the vulnerability, what the effect of the vulnerability is... and what users can do to protect their systems....

    "Some security professionals go the extra mile and develop tools that assist users in diagnosing their systems and determining whether they are affected by a particular vulnerability. This too can be done responsibly...
    • The key difference between what Culp suggests and the right way is that with Culp's approach there is no real incentive for the vendor.

      The responsible way to release vulnerability info is to warn the vendor first, letting them know that in a week or so the advisory will be made public. That way the vendor is forced to act. Scott Culp left out the part about the time limit.

      • I believe Culp did suggest a time limit, although I did not include it in the quote. I believe he suggested 30 days. I don't know whether that is an appropriate time frame - but the thought is there.

        I believe the real problem is getting the incentive to the admins. In the case of Code Red, the patches were available for a long time, but admins didn't pick them up.

        I believe that MS is currently sufficiently motivated (though I don't know how well they'll be able to patch the dam).

        The biggest problem is the chain of command. Currently the only one that really works is-

        1. Exploit -> CNN -> PHB -> Bad techie

        and that needs to change.
        • As far as getting admins to actually patch things, I think that's best left up to the market. If a company repeatedly suffers because their admins aren't patching their machines properly, then maybe they should get new admins.

    • If Culp's points have been taken out of context, then it's no one's fault but his own. After all, the first paragraph reads "Code Red. Lion. Sadmind. Ramen. Nimda......And we in the security community gave it to them." Code Red and Nimda are, in my opinion, great examples of how open, full disclosure has worked. I would hate to know what Code Red could have been, had eEye not published the vulnerability and no signatures been created to detect the exploit prior to the worm's release.

      The process that needs to be fixed here is getting admins/users to implement patches immediately, shrinking the "Window of Exposure". MS's fear of bad PR seems to outweigh its concerns about the security of its clients.
    • *bzzt* wrong.

      What sensible security researchers do is warn the vendor in advance, then wait a "reasonable" time for the vendor to answer. What counts as "reasonable" is up to the researcher, and generally depends on how big the hole is, how likely it is that an exploit is already in the hands of script kiddies, etc.

      If the vendor doesn't answer in a timely fashion (at least with a non-automated "gotcha, we're checking this out"), then it's disclosure. I'd say that here "timely" is pretty short - a few days at most. After this stage, there is usually time for fixing the hole, or at least providing a work-around until a patch can be released. This phase can last (empirical evidence from reading BugTraq) from a few days to a few weeks. Then either the vendor prepares an announcement, or the researcher does. (A sketch of this timeline follows after this comment.)

      This is not perfect; sometimes mails get lost, or external pressure gets the better of good judgement, or whatever else. However, this manner of acting gives everybody time to understand what's happening while keeping the "vulnerability window" as tight as possible.

      What is different from Culp's statement? That the researchers and not only the vendors get to decide what "appropriate response time" is, so critical knowledge doesn't get stranded in somebody's mailbox until marketing says otherwise.

      About releasing proof of concept code responsibly: either such code works or it doesn't. Some professionals deliberately put a couple of syntax errors in their exploits, so that a completely clueless script kiddie can't just fetch them and use them. However, it only takes one clueful script kiddie to release a working version of the exploits. Unfortunately in this particular case it's either black or white, I see no chance for greys.
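
A rough sketch of the researcher-side timeline described in the comment above. The specific day counts are illustrative assumptions only; the comment itself stresses that they depend on the severity of the hole and on the researcher's judgement.

```python
# Sketch of the researcher-driven disclosure timeline described above.
# Day counts are illustrative assumptions, not fixed rules.
from datetime import date, timedelta

ACK_DEADLINE = timedelta(days=3)   # "a few days at most" for a non-automated reply
FIX_WINDOW = timedelta(days=21)    # "a few days to a few weeks" to fix or work around

def disclosure_plan(reported: date, vendor_acknowledged: bool):
    """Return (next step, planned publication date) for a reported hole."""
    if not vendor_acknowledged:
        # No human acknowledgement by the deadline: disclose to force attention.
        return "researcher publishes advisory", reported + ACK_DEADLINE
    return "coordinate announcement with vendor", reported + ACK_DEADLINE + FIX_WINDOW

print(disclosure_plan(date(2001, 11, 13), vendor_acknowledged=False))  # publish after the ack deadline
print(disclosure_plan(date(2001, 11, 13), vendor_acknowledged=True))   # publish after the fix window
```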
  • by bwt ( 68845 )
    In his essay, Culp compares the practice of publishing vulnerabilities to shouting "Fire" in a crowded movie theater. What he forgets is that there actually is a fire, the vulnerabilities exist regardless.

    Slam.
  • ...is starting the widespread debate on issues that many people need to consider.

    Computer/network/internet security issues have been around a long time; perhaps now it will be more of a factor in management decision making.
  • by GISboy ( 533907 )
    vendors didn't have any motivation to fix vulnerabilities. CERT wouldn't publish until there was a fix, so there was no urgency. It was easier to keep the vulnerabilities secret. There were incidents of vendors threatening researchers if they made their findings public, and smear campaigns against researchers who announced the existence of vulnerabilities (even if they omitted details). And so many vulnerabilities remained unfixed for years.


    Perhaps it was pointed out that Code Red et al. had patches available a month ahead of time.
    But in the same breath, MS mentioned that their method of informing users about vulnerabilities and distributing patches was/is "confusing".
    And the article by Culp almost says, in effect, "we don't want vulnerabilities known, so we can stop writing patches and bugfixes, or do it when 'we' feel like it".

    The whole "rely solely on the vendor" schtick is coming full circle it seems.

    The author pointed out that this is the way "it used to be", and it seems Microsoft is pushing for it to be that way again.
  • 1. Discover the vulnerability.
    2. Write code to exploit the vulnerability.
    3. Arrange with an industry journalist to demonstrate the exploit.

    Then it comes down to MS PR vs. journalistic integrity.

    P.S. Don't even THINK about doing this unless you're cool with MS buying all the trade rags...
  • by Anonymous Coward
    Anybody seen this?
    http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/bulletin/MS01-055.asp


    Frequently asked questions

    Why isn't there a patch available for this issue?

    The person who discovered this vulnerability has chosen to handle it irresponsibly, and has deliberately made this issue public only a few days after reporting it to Microsoft. It is simply not possible to build, test and release a patch within this timeframe and still meet reasonable quality standards.
  • by carambola5 ( 456983 ) on Tuesday November 13, 2001 @06:57PM (#2560740) Homepage
    Anyone else notice the peculiarity of the list at the beginning of Culp @ Microsoft [microsoft.com]? Let's see....
    • Code Red: Microsoft worm
    • Lion: Linux worm
    • Sadmind: Solaris worm that affected Microsoft OS's (*ack* if you can call them OS's!)
    • Ramen: Linux worm
    • Nimda: Microsoft worm
    Now that means that a "representative" list of worms would contain 50% Microsoft worms, 40% Linux worms, and 10% Solaris worms (the arithmetic behind that split is sketched below). It's good to see Microsoft presenting a legitimate picture of what's going on. C'mon!! Windows practically breeds worms! Linux has had how many? 4, 5? Morris, Ramen, Lion, Adore. That's all I can come up with. Now, do I start listing the Microsoft worms (not to mention virii)?...
    -------------
    All your sig are belong to us.
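
One reading that makes the quoted 50/40/10 split come out exactly is to count Sadmind as half a Microsoft worm and half a Solaris worm, since it exploited Solaris hosts and then defaced IIS web servers. Purely as arithmetic, under that assumption:

```python
# Count Sadmind as half Microsoft / half Solaris; the rest as listed above.
counts = {
    "Microsoft": 2 + 0.5,  # Code Red, Nimda, plus half of Sadmind
    "Linux": 2,            # Lion, Ramen
    "Solaris": 0.5,        # the other half of Sadmind
}
total = sum(counts.values())
for platform, n in counts.items():
    print(f"{platform}: {n / total:.0%}")  # Microsoft: 50%, Linux: 40%, Solaris: 10%
```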
  • Culp has a point when he talks about responsibility. (Ironically, of course, Scott is avoiding "mea Culpa.")

    Ouch...

    and referring to the Culp article again, with the DMCA in effect, it is a lot easier to shut ppl up about MS's vulnerabilities than it is to fix them.

    OOOoooo...that really hits home.
  • Regardless (Score:2, Insightful)

    by The Bungi ( 221687 )
    Bruce's statement along the lines of I don't blame the sys admins for this. There are too many patches... is interesting.

    While it is certainly up to the vendor to release as bug-free code as possible, I disagree with his exoneration here. "If you don't know how to use it, don't" holds true regardless of what OS we're talking about. A Unix sysadmin that doesn't patch his/her box(es) is as much to blame as an MS sysadmin who fails to do so.

    Whether or not the amount of exploits for IIS are a direct result of how widely it is used outside of the "heavy metal" internet server arena is anybody's guess. But to even suggest that the sysadmins should say "oh, fuck it. It's the vendor's fault" is a bit like putting one's network in the hands of God... maybe it will be OK, and most likely it won't.

    • Re:Regardless (Score:5, Informative)

      by rodgerd ( 402 ) on Tuesday November 13, 2001 @10:34PM (#2561401) Homepage
      You sound suspiciously like someone who doesn't have sufficient experience in the NT world.

      Windows patches and hotfixes are a whole world of pain. SP2 for NT4 erased filesystems. SP6 crippled people running Notes. Hotfixes regularly blow each other away. They're a *mess*, and a good Windows admin will be *very* cautious about applying either hotfixes or service packs for NT/W2K/XP because the QA on them seems to be so low, so often.
      • I have installed Hotfixes and Service Packs for years, and have never had any problems with any of them. A recent consulting job of mine entailed creating scripts to install 8 Hotfixes back to back (pre Service Pack 3 for Windows 2000, which was not released yet) and not a single one wrecked the system.

        This flies in stark contradiction with my experiences playing with the kernel in Linux, where a simple errant pointer can wreck an entire Make. There is some benefit to having the source code available; but in this particular instance, less may actually be more for those who, like myself, don't want to have to check hundreds of lines of code to fix an LPR vulnerability.

        That's not to say NT's Hotfixes are foolproof, but there is a reason Microsoft has finally put the automatic update feature into place with Windows XP. They are confident enough that people won't be turning on their systems one day and having them crash due to an update being installed overnight. And from my experience, this hasn't yet occurred in Windows XP.

          This flies in stark contradiction with my experiences playing with the kernel in Linux, where a simple errant pointer can wreck an entire Make.

          Yeah, ruining a *whole make*. That's awful. Just as bad as hosing entire filesystems.

          That's not to say NT's Hotfixes are foolproof

          And it's a good thing you didn't, too, since one of the reasons NIMDA caught some people unawares was a case where IIS would keep switching the indexing server, and hence the vulnerability, back on under certain circumstances with software updates.

          Are 2K fixes, in general, better than NT4? Sure. Are XP ones better? Who knows, it's hardly had time on the market for problems to occur. But they're still a mile away from the Unix/BSD/Linux world (although it appears Apple are going to drag the rep of the BSD world down...).

          But quite frankly, anyone who auto updates their server, of any class, is a fucking moron.

  • is not about shouting "fire" in a crowded room.

    It is about lighting a "fire" under a vendor's ass.

    Perhaps, so that Culp does not forget this point, he should take the advice in another story and "tattoo it on his butt" if he needs to.

    And not in invisible ink, btw.
  • Meaning:
    It Isn't Secure.

    How apropos.
  • This seems to me to kind of parallel biology. In an environment where exploits are not discussed, there is a smaller penalty for buggy software. With increased discussion, the software that remains will be the software that is more secure, or that evolves to be made more secure.

    So how does Microsoft survive? Is it a virus?
  • Maybe I am missing something here but in every other industry where there is a flawed product that can cause potential damage, full disclosure is expected.
    For example the auto-industry. If you buy a new/used car and it is a lemon or has massive faults that can cause serious damage the vendor is expected to state those faults [ftc.gov]
    I have two children and ANYTIME there is even the slightest risk of problems with the products we have bought for them, the vendor says don't use it any more.

    You would think that Microsoft would have learned from Firestone/Ford....

    • If you are going to use the automobile argument, avoid falling into the common trap. Just about all software comes with disclaimers in all capital letters saying not to use it in any system where failure could cause injury or death. Anyone stupid enough to run Windows in a nuclear control or life support system deserves to get sued anyway, but my point is, the things you point to in real-life products are flaws that affect the physical well-being of the user, i.e. injury or death. No one is going to get hurt if you lose all your data, not in a physical way.

      Unless you were the person responsible for security and the PHB goes for blood. :)
  • by shimmin ( 469139 ) on Tuesday November 13, 2001 @08:21PM (#2561074) Journal
    Bruce makes a good point regarding software liability laws, or rather the lack thereof.

    Almost every piece of commercial software you install these days has something in the license like (taken from the Red Hat legalese):

    "There is no warantee for the program, to the extent permitted by applicable law. Except when otherwise stated in writing by the copyright holders and/or other parties provide the program "as is" without warranty of any kind, either expressed or implied, including, but not limited to, the implied warantees of merchantability and fitness for a particular purpose. The entire risk of as to the quality and performance of the program is with you. Should the program prove defective, you assume the cost of all necessary servicing, repair, or correction."

    Now someone explain to me why, when software vendors disavow all responsibility for their products, they should be granted some special status with regards to information about those products' misbehavior.

  • When software vendors become liable for data loss, and the associated costs, then they have a very strong financial incentive to fix bugs.

    In the current model, even with full disclosure, the most they risk is sales loss due to bad PR, and to modernize the old saw, "nobody ever got fired for buying Microsoft".
  • Word will eventually get out -- the Window of Exposure will grow -- but you have no control, or knowledge, of when or how.

    It's not just what he says; it's how he says it. For some reason, the above sentence makes me think of a particular vendor.

  • Now, I know I am opening myself to people making fun of my name, and over the years, many have done so. But, it is just too easy...

    Since Mr. Culp is Microsoft's apologist, might his title at MS be Mea? That would make his full title there Mea Culpa.

    Or, since they have found MS guilty of being a Monopoly, would that make this person in charge of culpability for MS?

    ttyl
    Farrell (running, ducking and hiding...)
  • Can someone explain the benefits of "Full Disclosure" in a closed-source scenario such as bugs in IIS in Windows?

    I'm not interested in arguments about open-source systems, or how vendors should be liable for bugs, etc...

    I simply want to know why it makes sense to publicise the code for a vulnerability as opposed to saying "there's a bug in this area, we're working on a patch". What are the benefits?

    I wonder: should we send Osama Bin Laden precise instructions for making Anthrax, Small-Pox, or Nuclear Weapons?

  • > Since full disclosure has become the norm, the computer industry has transformed itself from a group of companies that ignores security and belittles vulnerabilities into one that fixes vulnerabilities as quickly as possible. A few companies are even going further, and taking security seriously enough to attempt to build quality software from the beginning: to fix vulnerabilities before the product is released.

    And Microsoft doesn't like fixing problems, let alone building quality in from the start. Those activities don't add anything to their bottom line; it's a waste of resources.

    Microsoft doesn't like the new norm, therefore it doesn't like full disclosure. (Where's the surprise?)

    To say nothing of the bad PR that hits the world's presses twice a week when the latest MS-specific exploit shows up at the disclosure site.
  • There's a basic assumption in this discussion that the threat is script kiddies. It's not. They're the visible and annoying part of the problem, but not the part that causes real losses.

    The real threat is someone who goes looking for security holes, finds them, and quietly uses them to steal information or money. It's the people who are stealing credit card numbers, bank account info, and military information that are threats. Serious attackers will often work to obtain inside information, and may be willing to combine physical attacks with computer attacks.

    Vulnerabilities left open but not publicized open doors for the real attackers. Non-disclosure shuts down only the more inept script kiddies.

  • Bruce Schneier is giving a talk entitled [faircopyright.org]
    "The Natural Laws of Digital Content" on November 15 at 7:00 at the University of Minnesota Minneapolis campus.

    The subject of the talk is related to the topic of this story - how legislation such as DMCA interact with computer security issues. So if you're interested in this topic and live near Minneapolis click the link above to find out details about this talk.

    Also, we [faircopyright.org] hope to tape Bruce's talk and put up video and audio of the talk on our web site at a later date.

  • Imagine a world in which software companies are criminally and/or civilly liable for ill effects resulting from successful attacks on their products.

    I think that in such a world, software quality would improve dramatically, and software manufacturers would be at least as motivated to fix bugs as they are in a world with full disclosure.
  • From Culp's piece at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/columns/security/noarch.asp:

    "Providing a recipe for exploiting a vulnerability doesn?t aid administrators in protecting their networks. In the vast majority of cases, the only way to protect against a security vulnerability is to apply a fix that changes the system behavior and eliminates the vulnerability; in other cases, systems can be protected through administrative procedures. But regardless of whether the remediation takes the form of a patch or a workaround, an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin."

    This is Microsoft's opinion in a nutshell: don't worry about the details, we'll take care of you. That doesn't surprise me for end-users, but for administrators? When I see a bug announcement with a detailed example, such as the ftp_conntrack bug in iptables, it is tremendously advantageous to actually understand the bug and how to deal with it. In that case, several workarounds suggested themselves, because the bug only affected RELATED connections.

    Now take the MS paradigm: I wait until they release a patch, or detailed instructions which I should follow by rote. Of course, I am affected by the vulnerability longer; furthermore, I get no transferable knowledge from the experience. Next time there's a similar bug, I just have to wait, again, instead of being able to invent a workaround.

    Sure, it's _possible_ to implement a workaround when I don't understand the vulnerability, but I sure feel a lot better when I understand the problem AND the solution. I simply don't understand how this MS scheme (where everyone is an unenlightened end-user, waiting for cryptically-named patches which they don't understand) could appeal to any business OR home user. By assuming that even its administrators are unqualified to do manual reconfiguration by themselves, or to really understand what they're doing with the OS, MS has effectively crippled its fleet of administrators. And this, ultimately, is why the NT (2k/XP, whatever) platform is the huge, gaping security hole it is.

    I simply can't believe the arrogance and stupidity of the statement above.

    "...an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin."

    I think that speaks for itself.
