Slammer Worm Slams Microsoft's Own 528

MondoMor writes "Microsoft forgot to patch some of its own servers against the months-old vulnerability exploited by the Slammer worm, reports C|Net. Oops. Apparently Redmond's network was hit pretty hard. Just goes to show that no matter who you are, you'd better keep your apps patched." Update: 01/29 01:59 GMT by T : And if you're running systems which might be affected, take note: whitehorse writes "The Microsoft KB article for the Slammer patch found here has an incorrect URL for 'Download the patch', referring to KB Q316333, which is only a handle-leak fix. The real patch may be found later in the article."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • SQL Server (Score:5, Interesting)

    by pdbogen ( 596723 ) <> on Tuesday January 28, 2003 @12:31PM (#5174981) Homepage
At my office, we weren't vulnerable because we /didn't/ upgrade. We were still running SQL 7. Just goes to show you...
    • Re:SQL Server (Score:5, Interesting)

      by B1 ( 86803 ) on Tuesday January 28, 2003 @12:40PM (#5175075)
      It's funny. I think a while back, there was an article posted about security through obsolescence.

      Basically, the idea is that by running "ancient" versions of software products, the script kiddies are completely thrown for a loop--their collections of 'sploits only work on more recent versions of code.

      Not that I advocate it, of course, but you made me think about it.
      • Re:SQL Server (Score:4, Insightful)

        by jsse ( 254124 ) on Tuesday January 28, 2003 @01:24PM (#5175402) Homepage Journal
        Basically, the idea is that by running "ancient" versions of software products, the script kiddies are completely thrown for a loop--their collections of 'sploits only work on more recent versions of code.

It doesn't work, at least not for Microsoft's products. You and the grandparent post forgot the Microsoft Support Life Cycle []: Windows 98 and NT 4.x, for example, will enter the "non-supported phase" after June this year, and Windows 2000 even earlier, in March.

Granted, SQL Server 7.0 is still under normal support until March 2004, and if you happen to be a premium customer, the period can be extended to 2006.

However, do not forget that once a product is desupported, Microsoft will not take care of new problems found in it. No service patch, no inquiry. No MS reseller would dare take up the maintenance. They'd offer you only one option thereafter: upgrade.

Keep using the desupported products? Sure you can, but can you bet your career on a desupported product? You're welcome to do so, as they then have a convenient target to blame when shit happens. :)
  • by pulse2600 ( 625694 ) on Tuesday January 28, 2003 @12:32PM (#5174985)
I am so happy Microsoft got a taste of the problems their own buggy software causes... I wonder how many times this will have to happen to them before they get the picture.

    "That vulnerability is completely theor...oh shit!"
    • by burgburgburg ( 574866 ) <> on Tuesday January 28, 2003 @01:20PM (#5175368)
Microsoft always claims that it is the end user's responsibility to implement patches once they're released. The fact that six months later they hadn't done so themselves would seem to indicate that this is in fact a sham argument, put out to distract from their own responsibility. And the fact that past patches have consistently had such a destructive effect on systems would provide further proof.

They release fixes that people have been so conditioned to avoid that even they avoid them themselves. It hardly seems to be a fix if nobody will touch it with a ten-foot pole.

      • by mmol_6453 ( 231450 ) <short@circuit.mail@grnet@com> on Tuesday January 28, 2003 @02:21PM (#5175830) Homepage Journal
        Would you rather have a system where you have to manually implement every patch, or would you rather have a system where you didn't have any choices which patches were implemented?

        The first choice would lead to a lot more work. The second choice would have automatically installed .NET and WMP 9 on your computer. The second choice would also automatically sign you on to whatever contrac--er...license agreements that came with the patches.

        Power is like entropy. It always seeks to increase.
        • by Daniel Phillips ( 238627 ) on Tuesday January 28, 2003 @03:57PM (#5176480)
          Would you rather have a system where you have to manually implement every patch, or would you rather have a system where you didn't have any choices which patches were implemented?

          That argument is an example of a logical fallacy called "bifurcation" - presenting two alternatives as if they were the only two alternatives, when in fact more may exist.

          Somehow I keep my Debian system updated with the latest security patches without much effort, and without being forced to accept patches I don't want.
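For context, the low-effort routine being described is just the stock apt workflow. A minimal sketch (these are the standard apt-get commands of the era, run as root; the simulate flag is real apt-get behavior):

```shell
# Refresh the package lists (including the security archive, if it is
# in sources.list), preview pending updates, then apply them.
apt-get update
apt-get -s upgrade      # -s: dry run, list what would be upgraded
apt-get upgrade         # actually install the updates, prompting first
```

Nothing is forced on you: any package you don't want upgraded can simply be held back.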
I am so happy Microsoft got a taste of the problems their own buggy software causes... I wonder how many times this will have to happen to them before they get the picture.

      You don't suppose this will convince them to finally switch to OSS, do you? I haven't seen my MySQL boxes taking down the Internet lately!

      (Ok, ok, that was low.. ;) )

  • Zoiks! (Score:5, Insightful)

    by Anonymous Coward on Tuesday January 28, 2003 @12:32PM (#5174987)
Relying on a vendor's automatic update feature is no substitute for solid system administration.
    • Re:Zoiks! (Score:5, Informative)

      by Anonymous Coward on Tuesday January 28, 2003 @12:38PM (#5175054)
Automatic update doesn't work with SQL Server. You have to do patches the "old" way (unzipping files, renaming files, prayer), which is probably why so many novice admins never applied them.
      • Re:Zoiks! (Score:5, Informative)

        by questionlp ( 58365 ) on Tuesday January 28, 2003 @12:49PM (#5175146) Homepage
        Please mod this parent up.

Not only is there no way to get SQL Server patches from Windows Update, but (as the parent mentioned) the steps required to update SQL Server and the Desktop Engine (MSDE) are a royal bitch and then some.

For example, to apply any hotfixes or cumulative patches for SQL Server 2000, you must download the package, extract it, back up the SQL Server install directory and databases, manually copy over DLLs and other updated binaries, execute the SQL query files included in the patch (one at a time, in a certain order... MSDE users must use the command line, since no GUI is provided), then pray that everything is okay and start SQL Server back up.
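Spelled out, the manual procedure above looks roughly like the following batch-file pseudocode (service names, paths, and script names are illustrative, not taken from an actual hotfix package):

```
net stop MSSQLSERVER                              :: stop the service first
xcopy /E "C:\...\Microsoft SQL Server" D:\backup\ :: back up the install dir
:: (back up the databases too, then overwrite binaries with patched copies)
copy /Y hotfix\*.dll "C:\...\Microsoft SQL Server\80\Tools\Binn\"
osql -E -i hotfix\script1.sql                     :: run each included SQL
osql -E -i hotfix\script2.sql                     :: script, in order
net start MSSQLSERVER                             :: pray, then restart
```

Every one of those steps is a chance to get something wrong, which is exactly why so many boxes stayed unpatched.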

        • Re:Zoiks! (Score:4, Insightful)

          by Silvers ( 196372 ) on Tuesday January 28, 2003 @01:20PM (#5175377)
There's no excuse. Just because it is harder to install than a simple Windows Update package is no reason not to update. What are you doing having a SQL server out in the wild, unprotected, with a *known* exploit?
          • Gadzooks! (Score:5, Insightful)

            by ( 184378 ) on Tuesday January 28, 2003 @03:38PM (#5176366) Journal
There's no excuse. Just because it is harder to install than a simple Windows Update package is no reason not to update.

            I agree, however...

Microsoft has argued for a long time that Windows is easier to administer (than UNIX/Linux), and that you don't need to hire an expensive, trained admin (by which I assume they mean UNIX admins, but aren't MCSEs expensive, trained admins, all jokes about the quality of MCSEs aside?).

So here we are with MS SQL Server, which is supposed to be an enterprise-quality database system... but it has no intuitive interface for installing patches. So either we have a real DBA, who should know how to do these patches, or we have a power user who needs a better interface to keep up to date on patches.

            Either it's easy and you don't need an admin, or it's difficult and you do need a trained admin. SQL Server updates can't be as "complex" as they currently are if Microsoft is going to claim that anyone can admin a Microsoft server product.

Granted, they may not be making the claim that SQL Server is easy to administer, but what are customers going to think? If Windows is "easy" (or so says the advertising), then SQL Server must be easy too! They both have little wizards to automate tasks; they both have a graphical interface for management...
    • Re:Zoiks! (Score:5, Insightful)

      by stinky wizzleteats ( 552063 ) on Tuesday January 28, 2003 @01:06PM (#5175286) Homepage Journal

Relying on a vendor's automatic update feature is no substitute for solid system administration.

      Solid system administration is no substitute for solid systems.

    • Re:Zoiks! (Score:3, Interesting)

      by haeger ( 85819 )
Another thing worth mentioning is that some people probably patched their systems with SP3, which I believe was supposed to fix this problem, but then applied some other patches that broke SP3 again.

I heard this from our Windows admins at work as an explanation of why we were hit.

      And as someone mentioned below, it just takes one person with a laptop or a poorly configured firewall somewhere in the organisation to get hit.

Still, it's funny as hell that MS got shafted. Especially as they say, "If you just keep your system patched, it's no problem. We can't be held responsible for what you don't do."


  • The Irony (Score:5, Interesting)

    by Merlin_1102 ( 594400 ) on Tuesday January 28, 2003 @12:33PM (#5174993)
Oh, the irony in this. Microsoft always insists you keep up with their patches, but for some reason they don't. Oh well, this could be a good thing for network administrators, as the end of the article stated they were going to work on a new way to install patches. Or that's what it looked like to me.
    • Re:The Irony (Score:3, Interesting)

      Sorry. It's just a little funny.

      Second, I was just thinking about how inefficient using a web site to update their products is. With XML-RPC and SOAP available, they could at least make a client-side app that optionally does this. Yes, XP has it. Why not make it available for all their apps?

      Or is it, and I'm just in the dark?
    • Re:The Irony (Score:5, Interesting)

      by BeeShoo ( 42280 ) on Tuesday January 28, 2003 @01:07PM (#5175291)
It wasn't necessarily through neglect that servers weren't patched (not just at MS, but everywhere).
MS patches/service packs have a nasty habit of breaking applications, ESPECIALLY the SQL Server updates. Whenever they release another SQL patch, it takes us a very long time to approve it for use, and it almost always involves some recoding on our part. Repeat this process 20 times a year and it gets damned near impossible.
      • Re:The Irony (Score:5, Interesting)

        by bob670 ( 645306 ) on Tuesday January 28, 2003 @01:39PM (#5175511)
        You are correct, we use a third party payroll system on a SQL 2000 server. Every patch so far has broken some part of the payroll system, and those same execs screaming for security scream even louder when paychecks don't get cut.

I have come to regard every MS patch with a certain sense of dread. At least on the desktop you can build an image and test it with no real risk, but on production servers it's a total gamble, and I'm tired of betting my ass (and personal life, and sleep, and job title) on Microsoft. Our SQL box is behind a firewall and no other SQL (developer or otherwise) runs in-house, so I took a pass on this patch until the guys who code the payroll system have approved it. That might sound great until you know they are 3 guys who support 5 products (with multiple versions) and it takes them months to test anything.

        I'm quite glad MS gets bit by their own bugs, now that's good karma.
        • Re:The Irony (Score:3, Interesting)

          by indiigo ( 121714 )
Just as a counter-argument, we've been running SQL 2000 for a year now with four distinct databases, patched on weekends as patches came out, and not had a single performance, security, patch, or backup/restore issue. Total administration time over the past year is about 10 hours, including patching, updates, and backup/maintenance. Rock solid. I'm not an MS employee or pundit; we run Linux as a firewall/IDS/Squid box and are moving many services over as I write, but SQL 2000 is a fairly good product comparatively.
  • Big Surprise? (Score:4, Insightful)

    by Dr Caleb ( 121505 ) on Tuesday January 28, 2003 @12:33PM (#5175001) Homepage Journal
Why does it surprise anyone that Microsoft has bad admins, the same as anyone else? Bad admins are bad admins, no matter which company they work for.

I'm glad to say that my servers were unaffected. Slammer does not affect AS/400 or Linux.

    • Re:Big Surprise? (Score:5, Insightful)

      by ajs ( 35943 ) <ajs.ajs@com> on Tuesday January 28, 2003 @12:56PM (#5175201) Homepage Journal
It was likely not "bad admins" so much as bureaucracy. Most large companies make it very hard to make any kind of change, which leads to a situation where only the scariest, hairiest bugs get patched. This one may simply have seemed too complex for the average person to exploit, until it was too late.

This problem is actually a very interesting one that I've been looking at for years. It happens in everything from 300-person companies to giant mega-corps. It's not because people are stupid, but because large systems can only avoid tripping on themselves by imposing arbitrary controls.

I think the right solution is staged anarchy, which is sort of what many large companies (e.g. Microsoft, AT&T, IBM, etc.) do with their research divisions, or via acquisitions, or both. The idea is that you let smart people go nuts and create the unsupportable. You then get more, but different, smart people to turn THAT into the supportable. You then get more average corporate drones to convert the supportable into the existing production framework. You then present the existing production framework to the first group of smart people and let them start over again.

      You get about a 6-month cycle if you do it right, and you keep reaping the benefits of wild-eyed hacking as well as stability.

Microsoft takes a lot of flak for their technology, but they do this one thing well. You may not like such things as NT, C#, etc., but they are fairly large and complex beasts that most companies would not be capable of cranking out on their own (hence the benefits of open source development, so that they don't have to). MS was able to draw on (and some would say corrupt) the smart work of their research folks and of technologies that they acquired, and "MS'd all over it" until it fit their sales and support model, which is one of the reasons they could do something like go from "Internet-illiterate" to winning the browser war, practically overnight.

      IBM does this quite a lot as well (all of their hard drive advances come from this sort of process).

      Interesting stuff.
      • Re:Big Surprise? (Score:3, Informative)

        MS was able to draw on (and some would say corrupt) the smart work of their research folks and of technologies that they acquired and "MS all over it" until it fit their sales and support model, which is one of the reasons that they could do something like go from "Internet-illiterate" to winning the browser war, practically overnight.

From Internet Explorer's About box: "Based on NCSA Mosaic. NCSA Mosaic(TM) was developed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign." Microsoft's "smart work" was assimilating open-sourced, non-GPL code to become Internet literate. Explains why they dislike the GPL. It puts a damper on their research and innovation.

        • Re:Big Surprise? (Score:3, Insightful)

          by ajs ( 35943 )
          Exactly. As I said, MS is very good at this sort of "acquire good technology -> productize -> sell" model. It's not something that a lot of companies can do well, and if you've ever seen it done badly, you'll begin to get a sense for how hard it was for MS to do this.
        • Re:Big Surprise? (Score:3, Insightful)

          by TrentC ( 11023 )
          Explains why they dislike the GPL. It puts a damper on their research and innovation.

          No, it puts a damper on their ability to exploit the freely-offered code and sell it back to people.

          You can innovate on GPL'ed code, you just can't keep your innovations to yourself.

          Jay (=
    • Re:Big Surprise? (Score:3, Interesting)

      by marko123 ( 131635 )
      If the parent wasn't modded insightful, I'd call it a troll :)

      Anyway, I think most people who work with MS patches know they are a trade off between patching the latest holes and breaking something/everything. The only way you can ensure a fully functional application running on an MS OS/DB/web/ActiveX etc. is to baseline the production environment after the application is released. For their activation interface, that would mean not wanting to take the risk of patching once the product is released. That's the price of uptime. Hope they get it now. I bet the admins weren't allowed to patch.
    • Re:Big Surprise? (Score:3, Insightful)

      by malfunct ( 120790 )
From the sounds of it, the problem is that the boxes that got hit weren't run by admins. It sounds like all the developers' boxes with SQL on them were unpatched.
    • Re:Big Surprise? (Score:5, Interesting)

      by belloc ( 37430 ) <> on Tuesday January 28, 2003 @01:47PM (#5175583) Homepage
Why does it surprise anyone that Microsoft has bad admins, the same as anyone else?

      Well, the article says that the affected systems were mostly individuals' workstations running SQL server (presumably developers running SQL to simulate a production environment). So these weren't production servers that were affected. Once Slammer got onto the network via the workstations, junk traffic just overwhelmed the routers.

I can't imagine the system/network admins having so much control over developers' workstations that they would be responsible for applying patches to SQL Server on those systems as well, especially at a monster software company where just about everyone probably has a mini production-test environment right on their workstation. It seems like developers should be responsible for those themselves.

      Of course, you have to ask how the thing got in the door in the first place. SOMEBODY that was running an unpatched SQL server must have had port 1434 open to the internet, right? And that WOULD be the admins' responsibility.

      • Re:Big Surprise? (Score:5, Interesting)

        by AndroidCat ( 229562 ) on Tuesday January 28, 2003 @03:44PM (#5176394) Homepage
        SOMEBODY that was running an unpatched SQL server must have had port 1434 open to the internet, right? And that WOULD be the admins' responsibility.

It should be blocked at the firewall, but it's possible that the suits ordered the port open so they could access corporate data on the road, and didn't want to learn any of the secure ways to do it. And this exposed developer machines, which aren't as rigorously configured.
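For the record, blocking that port at a Linux perimeter box is a one-liner. A hedged sketch (iptables syntax; the chain choices are illustrative, but 1434/udp is the actual SQL Server resolution-service port the worm targets):

```shell
# Drop Slammer traffic before it reaches inside hosts. Assumes a Linux
# firewall routing for the network; the FORWARD chain sees routed packets.
iptables -A FORWARD -p udp --dport 1434 -j DROP
# And the same for traffic addressed to the firewall host itself:
iptables -A INPUT -p udp --dport 1434 -j DROP
```

Of course, a rule like this only helps if the suits haven't demanded the port be opened back up.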

  • 10 bucks... (Score:5, Funny)

    by PFactor ( 135319 ) on Tuesday January 28, 2003 @12:34PM (#5175009) Journal
    ...says that patch management in Microsoft operating systems gets 100% better in 1 year :P
  • patch? (Score:3, Funny)

    by bilbobuggins ( 535860 ) <.bilbobuggins. .at.> on Tuesday January 28, 2003 @12:34PM (#5175011)
    I was going to say:

    Just goes to show that no matter who you are, you shouldn't use MS SQL.

    but hey, to each their own...

  • by ruiner13 ( 527499 ) on Tuesday January 28, 2003 @12:34PM (#5175012) Homepage
As one of the articles I read on the issue stated, it really does show that their policy of blaming users for not patching their systems perhaps isn't the best approach to take. It is, in fact, blaming the victim for the software's flaws. Maybe this will turn Microsoft more toward making sure their products are secure from the start, if this info gets around enough. Yes, I know Billg's "Trustworthy Computing" plan is rather new, but they sure seem to get caught with their pants down often.
    • by municio ( 465355 ) on Tuesday January 28, 2003 @12:55PM (#5175196)
      if this info gets around enough

I don't think so. I watched a four-minute report on the Slammer worm on CNN on Saturday, and they failed to mention either MS or SQL Server. It was an "Internet worm", originated by some hacker on the Internet, for the Internet. For four minutes they danced around the news without any mention of Redmond or any of its products.

    • It is in fact blaming the victim for the software's flaws.

Yep. The same can be said for clicking on virus-laden emails. Back when the "I Love You" email virus was making the rounds, some MCSE type at the company where my wife worked sent out an email scolding people for clicking on bad emails. The next day, her inbox had 50-some emails from him where he'd clicked on a bad email.

Later the same week, our IT dept deployed the latest anti-virus patch. I set it off looking at comments on Slashdot where somebody had posted the Word Basic macro that was doing all the dirty work. The dern virus scanner was keying off the macro source. That caused a brouhaha.

  • by instantkarma1 ( 234104 ) on Tuesday January 28, 2003 @12:35PM (#5175019)
    Larry Ellison is cackling like a little girl........
  • MS Tech guy (Score:5, Funny)

    by objekt ( 232270 ) on Tuesday January 28, 2003 @12:35PM (#5175020) Homepage
    (found on another forum) 01/25/2003 1:04:37 PM

    "MSN was total messed up, I couldn't even log on to the net last night it said that my user name and passworded was invalid so I call them up and the tech guy says wow that's weird I can't ether."
  • by wbm6k ( 593413 ) on Tuesday January 28, 2003 @12:36PM (#5175025)
The article I read (on Yahoo []) states the unpatched servers were all on the internal network, not the Internet, and that they were in use by researchers within Microsoft.

Let's not jump too quickly on the bash-Microsoft bandwagon for that. (Of course, if they just did enough testing and didn't release buggy, vulnerable software in the first place...)
    • by jrumney ( 197329 ) on Tuesday January 28, 2003 @12:46PM (#5175112)
      OK, so how did these servers get infected in the first place, if they weren't on the internet?

Was the Slammer worm developed by a disgruntled Microsoft employee, and unleashed from within Microsoft?

I agree; I am sure MS had policies in place to keep all public-facing servers fairly up2date. One thing I found to ring true is the article's mention that a lot of the developers internally had installed SQL or MSDE on their workstations. I know that when our company got Code Red / Nimda, it was the developers' workstations with IIS that were propagating it to the rest of the network.

Just goes to show that people who are paid to be technically apt can be just as much of a liability as regular users.
    • by Anonymous Coward on Tuesday January 28, 2003 @01:05PM (#5175270)
There are quite a few "porous" holes into Microsoft's internal networks. None of them are direct, and without something like this worm, which uses their own software, none are likely to allow much in.

I've worked in some of the Microsoft data centers and done design work... I know how hard they (just like many of my other, non-Microsoft customers) try to keep people "out" of these networks. But I've seen development projects go onto the "soft" network and then get forgotten about. It's machines like these that probably provided the bridge back into MS.

It happens, regardless of the company. Some just get more publicity than others. You think BofA didn't have firewalls? And yet they went offline for what... half a day or more?
  • by calethix ( 537786 ) on Tuesday January 28, 2003 @12:37PM (#5175038) Homepage
    Several MS sys admins are now looking for a new job.
  • by tuxlove ( 316502 ) on Tuesday January 28, 2003 @12:38PM (#5175045)
This story supposes that Microsoft should somehow be a paragon of network infrastructure. It's clear from past events that MS is among the lamer of companies when it comes to infrastructure/security. Take, for example, the time DNS for just about the entire collection of MS domains, such as and, was completely disabled by an attacker. They had all four of their nameservers on the same subnet, all running Microsoft DNS software. An easy target, to say the least. Calling this sophomoric is being kind. It didn't take them long to fix it, and I believe that now they contract out their DNS to get maximum diversity (and they even utilize Unix nameservers!).

    I fully expect to see more entertaining stories like this for a long time to come.
  • by Dan Guisinger ( 15506 ) on Tuesday January 28, 2003 @12:38PM (#5175052) Homepage
In reality, admins running enterprise systems must remember to check what a patch fixes and weigh that against known issues it may cause. In Microsoft's case, their admins would be sure to know the service release was out. My guess is compatibility testing indicated they should wait for a future patch, or until they changed something in their setup that would make any problems from the patch a non-issue.
  • Tired of patching? (Score:5, Insightful)

    by smnolde ( 209197 ) on Tuesday January 28, 2003 @12:38PM (#5175053) Homepage
How many times have you, on a Win2K server, clicked the check box labeled "Remind me in four hours" and waited for the next shift to patch the box?

    Oh joy, the pleasures of having an automated "Patch-me-now" daemon.

Lazy admin, nonetheless.
  • I wonder how long... (Score:5, Interesting)

    by dildatron ( 611498 ) on Tuesday January 28, 2003 @12:41PM (#5175087)
    I wonder how long it will be before companies that are hit hard by this will start terminating those responsible. Now, obviously part of the blame goes to the one responsible for the infected machine, and part of the blame goes to the software maker (Microsoft in this instance).

This, like most other large-scale worm or virus infections, was completely preventable. So many machines are infected due to 1) lazy admins, 2) admins who are asked to do too much and didn't have time to patch all systems regularly (possibly because of staff cuts), and 3) complete idiots who don't know any better and shouldn't have their job in the first place.

    This particular worm largely ignored home and personal computers, due to the product it infects. However, I think a lot of companies sit back and say, "Well, I sure am glad that we have Tom to get this all fixed for us... without him, what would we do?"

That is the problem. Those in charge need to understand that it is both Microsoft's and the admins' fault when things like this occur. It rarely "just happens", and most large-scale attacks were preventable a month, or even a year, before the vulnerability was exploited.

    Eventually, I hope this leads to a shakeout of all the poor admins, or the managers who place too much workload on their admins so that they do not have time to do it right.
    • by jamesdood ( 468240 ) on Tuesday January 28, 2003 @12:49PM (#5175145)
The thing to remember is that this worm infected any machine running MSDE (a scaled-down MS SQL Server), so if you were running Access, Office 2000, MS Visual Studio 6, or even Visio 2000, you could be affected. Most end users don't even know that they would be vulnerable, so the statement "This particular worm largely ignored home and personal computers, due to the product it infects" is false. It also seems to have had an effect on certain Cisco routers. Not fun, but you can't just blame "poor admins" as the culprits for the virulence of the worm.

  • Nailed us. (Score:5, Interesting)

    by nortcele ( 186941 ) on Tuesday January 28, 2003 @12:44PM (#5175105) Homepage
God knows why, but our company had an NT box running MS-SQL outside the Unix firewall. It got nailed and then apparently had privileges to come in and nail the rest...

Took us out for 12 hours. We are talking significant production loss here. I'm just thanking my lucky stars that I have nothing to do with our NT setup.

    I snicker and do my little dance quietly in my cube.
  • by n3rd ( 111397 ) on Tuesday January 28, 2003 @12:44PM (#5175107)
    With the exploits going around recently I've realized a couple of things when it comes to security.

First and foremost is secure code. Right now, almost everyone and their grandmother has a firewall. Firewalls do a good job of protecting ports a user can't shut down entirely (some NetBIOS ports) and insecure applications a user or organization wants to run internally but doesn't want the world to access (NFS, NIS, etc.). But the majority of these exploits target applications that firewalls usually let through, such as HTTP, FTP and e-mail.

Frankly, I'm not sure how coders should go about writing secure applications, but it needs to be done. Perhaps large organizations should have a dedicated person or team in charge of verifying that code is clear of buffer overflows and other nasties. Either way, the code itself needs to be secure, because a firewall alone won't do a thing; without secure code, even the most secure configurations will continue to be cracked.

Second is firewall configuration. Many firewall administrators tend to forget about outbound packets. Obviously there are some they need to let out (HTTP, FTP), but when it comes to things like SQL and outbound portmap, there's really no reason. Depending on the organization's needs, they can more than likely block all outgoing UDP. By doing this they can help slow the spread of worms (such as this one) and reduce liability when it comes to crackers using their systems as a point from which to attack other systems.

    Firewalls that block incoming packets just don't cut it, and never have. We need to have secure code and need to block unnecessary outbound packets as well.
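An egress-filtering sketch along those lines (iptables syntax; the interface name and rule set are illustrative, not a complete policy):

```shell
# Let out only the outbound traffic the organization actually needs,
# then drop remaining outbound UDP so an infected inside host can't
# spray the Internet with worm packets on ports like 1434/udp.
iptables -A FORWARD -o eth0 -p tcp --dport 80 -j ACCEPT   # web browsing
iptables -A FORWARD -o eth0 -p udp --dport 53 -j ACCEPT   # DNS lookups
iptables -A FORWARD -o eth0 -p udp -j DROP                # all other UDP out
```

With a default-drop last rule like this, a Slammer infection inside the perimeter stays inside instead of becoming someone else's problem.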
  • by painehope ( 580569 ) on Tuesday January 28, 2003 @12:47PM (#5175125)
Another place where Unices have MS beat?
I love the way the article makes security and patching seem like such a burden on system administrators. It's one of the main functions of a sysadmin's job. Any sysadmin who thinks security patches are optional, regardless of how shitty your OS's package management + patch integration is, deserves to have their network taken down and their ass fired.
Though I do get a kick out of thinking of the nightmare the Windows admins have keeping up to date with patches, whereas with a few hundred lines of Perl I have my own automated patching system, and RPM keeps track of it (no rpm vs. deb flames, thank you).
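A toy version of such a homebrew patch auditor, in portable shell. The two input files and their "name version" format are invented for this sketch; a real script would query `rpm -qa` and a vendor advisory feed instead:

```shell
#!/bin/sh
# Compare installed package versions against a security-update list and
# report what needs patching. Both files below are fabricated examples.
sort > updates.txt <<'EOF'
bind 8.3.4
openssl 0.9.6j
EOF
sort > installed.txt <<'EOF'
bind 8.3.4
openssl 0.9.6g
EOF
# join pairs lines by package name; awk reports where the installed
# version (field 3) differs from the advised version (field 2).
join updates.txt installed.txt | awk '$2 != $3 { print $1, "needs", $2 }'
# -> openssl needs 0.9.6j
```

The sorting matters: `join` requires both inputs ordered on the join field, which is why each file is piped through `sort` first.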
  • by Chocolate Teapot ( 639869 ) on Tuesday January 28, 2003 @12:48PM (#5175133) Journal
....of a horrific accident in Redmond, WA, in which the ever-popular and much-loved Slammer worm has become infected by a particularly pernicious dose of Windosis. A round-the-clock vigil has been in progress since Saturday, and the nation's top experts have been called in to try to save Slammer. "17'5 700 34rLy 700 54y 1f w3 c4n 54v3 h1m" said pUrPle_rONniE, a pasty-looking spokesman for the uninstall SWAT team. "w3 0wnz y00". This is only the 200,502,738th reported case of Windosis since 1982. The Department of Justice has yet to seal off the area to prevent further contamination.
  • by pVoid ( 607584 ) on Tuesday January 28, 2003 @12:52PM (#5175171)
    "This shows that the notion of patching doesn't work," said Bruce Schneier, chief technology officer for network protection firm Counterpane Internet Security.

    Now here's my gripe... Microsoft's being vulnerable to this is only an indication of some lazy sysadmins. It doesn't involve anyone else.

    These are typical 14-year-old essay arguments, where you can prove anything by altering seemingly inessential facts: while Linux is seen as the OSS giant, with thousands of 'independent' people working on it, Microsoft is actually just one person... Bill. Bill has actually cloned himself 40,000 bodies, attached a wireless receptor to each of their brains, and is puppeteering every single one of those people...

    Microsoft, just like any other company, is thousands of individual people... this security vulnerability does not undermine the effectiveness of patches.

    It's like declaring Canada an enemy of the United States because a drunken Canadian had a fist fight with some Alaskan in some fucking bar. WTF??!?

    Come on people, be vigilant.

    • by MisterSquid ( 231834 ) on Tuesday January 28, 2003 @02:48PM (#5176018)

      Let's make this quick:

      MS is a collective. All of the things that individuals do within MS are actions taken by MS. You said it yourself: Microsoft, just like any other company, is thousands of individual people.

      MS cannot implement its own patching system coherently. The effectiveness of the MS patch protocol is ZERO as practiced by MS, and I mean ZERO.

      This is MS's problem precisely because members of their collective have proven that their system of patches has zero real-world effectiveness.

      If you want to apologize for MS, go ahead. Just don't say that they still might be right (about the effectiveness of patching) when they've proven for themselves that patching doesn't work, even if only because no one bothers to patch.

  • by aengblom ( 123492 ) on Tuesday January 28, 2003 @12:54PM (#5175177) Homepage
    My rejected submission -- more details, but a bit long. The big news in my mind was not the Microsoft bit--it was that ATM machines were unavailable because of the worm.


    The worm that slowed the internet to a crawl over the weekend apparently did more damage than most originally believed. On Monday, many companies were still struggling to clean up. Financial companies and airlines seemed to be hit most acutely. Many web sites that manage payments and check loans were inaccessible. Inexplicably--and really inexcusably--some ATMs were also unavailable. Investigators are also struggling to pinpoint the worm's starting point, but are having little success because it took off so fast.

    Apparently similar code was released by David Litchfield of NGS Software Inc a few months ago. The virus "author," "Lion," credited Litchfield's code.

    The Washington Post has an AP [] story up as well as this [], which is older but has some additional details. The kicker to all this--the worm hit one year after Microsoft launched its "Trustworthy Computing" initiative. That, and even some of Microsoft's own computers were hit [] (NYT Reg. Req.).

    (Yep, still bitter ;-) )
  • 4 Things (Score:5, Insightful)

    by 4of12 ( 97621 ) on Tuesday January 28, 2003 @12:58PM (#5175214) Homepage Journal

    1. Everyone can gleefully gloat over them eating their own dogfood; enjoy it while it lasts.
    2. Microsoft did release a patch long ago, and I give them credit for that.
    3. But by not installing their own patches, the credibility of the argument that lazy sysadmins are to blame for Slammer is weakened. MS gives credence to other arguments: either their patches hose up other things unnecessarily, or else take too much time and effort to install properly.
    4. In the end, this whole episode will be spun to promote TCPA.
  • by rosewood ( 99925 ) <rosewood AT chat DOT ru> on Tuesday January 28, 2003 @01:00PM (#5175225) Homepage Journal
    I've been hearing a lot of this and that, and I was hoping to get the straight dope.

    I've read that the patch, before this thing went big, was a bitch: a lot of manual this-and-that updating and rebooting. Basically this meant a lot of people couldn't get approval from management to patch the server.

    Some have said they applied the patch and still were vulnerable.

    Some have said the patch fucked their server.

    Also, I think I read that the cumulative SQL Server patch that was supposed to be out a long time ago finally came out as soon as this worm hit.

    Since I do NOTHING with SQL servers, I don't keep up on this. But I do have to answer security questions and general FUD, so, for those in the know -- what's true and what's not?
    • I've read that the patch, before this thing went big, was a bitch: a lot of manual this-and-that updating and rebooting. Basically this meant a lot of people couldn't get approval from management to patch the server.

      I don't understand where the difficulty comes in. You run the service pack, it extracts to your root drive (or wherever you want it to, actually). You then run setup.bat -- that's it! You're patched!

      Some have said they applied the patch and still were vulnerable. / Some have said the patch fucked their server.

      It worked for me, and I can't speak of the ones it didn't work on. SP3 came out a few weeks ago, and admins should have at least had that installed.

      Also, I think I read that the cumulative SQL Server patch that was supposed to be out a long time ago finally came out as soon as this worm hit.

      Bullshit. The patch to fix the overflow problem came out half a year ago. And the service pack (which includes the patch) came out a few weeks ago.

      Don't believe everything you hear.
    • My Experience (Score:4, Informative)

      by 0xA ( 71424 ) on Tuesday January 28, 2003 @03:31PM (#5176303)
      I've read that the patch, before this thing went big, was a bitch: a lot of manual this-and-that updating and rebooting. Basically this meant a lot of people couldn't get approval from management to patch the server.

      It's not that bad really, I think later versions of the patch even included a batch file to copy stuff around for you. Even without it, it only took 10 minutes... I mean really, if somebody can't handle this kind of stuff should they really be an admin?

      Some have said they applied the patch and still were vulnerable.

      Yeah, you have to be careful with this stuff; if you apply patches in the wrong order you can sometimes end up with the vulnerable code still there. I know a _really_ good admin who got hit with Code Red because of that. The correct order can sometimes be a bit of a mystery.

      Some have said the patch fucked their server.

      That's the big problem with this situation. I can understand why people don't have this patch or SP3 installed. You really never know what one of these things is going to do. It is common for admins to schedule a 3-hour downtime to roll something like this in, even if they have tested the hell out of it. You need time to back the damn thing out if it screws stuff up. I deployed W2K SP3 onto my terminal servers a few months ago and it broke Office on every one of them. It didn't do that when I tested it, and it took me hours to clean up.

  • by ortholattice ( 175065 ) on Tuesday January 28, 2003 @01:05PM (#5175265)
    While I did apply this update, I felt and still feel uncomfortable about whether it is installed correctly. The update is confusing, and I wouldn't be surprised if a lot of people installed it wrong. (I believe MS now has an updated version, released _after_ the worm, that is easier, but I haven't checked it out.)

    As an aside, the instructions are in a readme.rtf file, even though they are actually just plain unformatted ASCII text pasted into Word. Who in their right mind would have Office 2000 installed on their SQL server? Or is this supposed to be standard practice? Gee, I guess I should also look into putting OpenOffice on my Linux firewall.

    Here are some quotes from Microsoft's instructions.

    In the instructions that follow, the designation refers to the path on your disk in which the SQL Server files are installed. This path is typically :\Program Files\Microsoft SQL Server\Mssql. Note that the Mssql directory may be MSSQL$ for a named instance installation.

    OK, but there is also a Microsoft SQL Server\80\Tools\Binn\ directory. What about this one?

    3. Make a back up copy of the ssnetlib.dll files from the \Binn folder and the ssnetlib.pdb files from the \Binn\dll folder.

    ssnetlib.dll "files"? Why plural? I only found one in the path they seem to reference, but actually there was another one in Microsoft SQL Server\80\Tools\Binn\. However there was no ssnetlib.pdb in the main path nor was there even a directory Microsoft SQL Server\80\Tools\Binn\dll.

    4. Copy the ssnetlib.dll files from the hotfix self-extracting archive into the \Binn folder and the ssnetlib.pdb files into \Binn\Exe folder.

    Again, how can there be ssnetlib.dll "files"? What are they talking about? Also, earlier the (non-existent) ssnetlib.pdb file was supposed to be backed up from the Dll folder, now we put the new one into the Exe folder?

    6. Test the scenario for the bug that this build fixes to verify that your problem is resolved.

    OK, so I unleash Slammer on my network to make sure the problem is fixed? (And how would you test it before Slammer was officially released?)

    (NB: some of the above may not be completely accurate, being based on old scribbly notes jotted down in the midst of confusion. However the quotes are direct from readme.rtf.)

  • by (H)elix1 ( 231155 ) <> on Tuesday January 28, 2003 @01:05PM (#5175267) Homepage Journal
    I know, I know... there are going to be tons of posts lambasting admins for not updating their boxes. Sometimes the cure is worse than the disease. Hell, last week a live update caused a catastrophic failure of the email systems. The IS boys were not lazy; they did what they should, and lost 36 hours of their lives rebuilding the boxes from tape because of a bad patch.

    Patches that fix something specific are fine. Patches that add new features or change API behavior can really make a mess. I've seen plenty of kit that requires xx service pack and the latest yy version breaks it.

    As a side note, make sure you get the patch if you are running MSDE on any of your boxes.... Same problem as SQL Server -- way too many vendors will fold that one into a dev version of a product. I know; I almost found out the hard way...
  • by EXTomar ( 78739 ) on Tuesday January 28, 2003 @01:12PM (#5175317)
    Well, this episode shows that you can drag the camel to the well, but you can't make it drink the water.

    Now Microsoft is in an awkward position. They claim it's not their fault: admins should have noticed the original security advisory and patched their machines. But how do they expect third parties to keep up and pay attention when their own internal teams don't?

    For a full-time system admin who is paid to do nothing but maintain the servers, following the advisory-and-patch escapades is the job. However, a developer working on a piece of software that requires MS SQL Server has neither the time nor the energy to. Reading the patch instructions, it sounds like it isn't exactly a "click-and-go" process, and is a little scary. For a developer, I'm not so sure it's short-sightedness: I spend a lot of time working on product, not following security advisories, nor do I spend a lot of time applying complex or risky patches. To a developer, the risk of having an unpatched, internal-use machine is much, much less than the risk of breaking the environment and screwing up your work schedule.

    Harping on admins who got caught is one thing. Harping on developers to follow and apply every patch is futile. So futile that not even Microsoft itself would try it internally.
  • Problem is IPv4 (Score:5, Interesting)

    by Jimmy_B ( 129296 ) <> on Tuesday January 28, 2003 @01:20PM (#5175374) Homepage
    No one's laid blame on it, but I think that the real way to get rid of these worms is to transition the net to IPv6. Slammer, Code Red, Code Red 2... all of them work by brute-force IP scanning. That only works because the IPv4 address space is so densely populated; with IPv6, a worm would never be able to spread itself that way because the odds against a random hit are astronomical. I'm not saying that this should be a substitute for keeping servers up to date, but all the patching in the world doesn't help when the problem is that some faraway node is crushed under the traffic created by a worm, and IPv6 is good for many other reasons as well.
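    A back-of-the-envelope calculation shows why random scanning dies in IPv6. The 75,000 figure is roughly the number of hosts commonly reported as infected by Slammer, used here purely as an illustrative assumption:

    ```python
    # Expected number of random probes before hitting one vulnerable host.
    vulnerable = 75_000          # rough count of Slammer-infected hosts (assumed)
    ipv4_space = 2 ** 32
    ipv6_space = 2 ** 128

    probes_v4 = ipv4_space // vulnerable
    probes_v6 = ipv6_space // vulnerable

    print(f"IPv4: one hit per ~{probes_v4:,} random probes")    # ~57,000
    print(f"IPv6: one hit per ~{probes_v6:.1e} random probes")  # ~4.5e33
    ```

    Tens of thousands of probes per hit is nothing for a worm spraying packets at line rate; 10^33 probes per hit would take longer than the age of the universe.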
    • I disagree about the difficulty in propagating the worm under IPv6. It might slow it down, but I was online when the worm hit and it was almost instant the way it consumed the backbones. I'd estimate that within 5-10 minutes the worm went from one end of the world to the other.

      The scary thought for IPv6 to me is that it might slow down random IP propagation, but that would probably be inconsequential when compared with the increased number of spammers that would find new life and longevity in hiding amongst the exponentially larger IP space.
    • Re:Problem is IPv4 (Score:4, Informative)

      by trybywrench ( 584843 ) on Tuesday January 28, 2003 @03:54PM (#5176452)
      Actually, what made this especially bad was UDP. Not many programs run on UDP ports; they are almost always TCP. TCP has a VERY important feature: upon a non-ack'd window it throttles back the send rate. This is a way to get congestion feedback to a host and tell it to "settle down". The problem with UDP is there is no way to tell it to slow down. Also, the fact that the Internet is a "best effort" network means that no matter what the incoming UDP rate, the routers will do their best to deliver the packets. This comes at the expense of all other traffic flows in the router; with no way to get congestion feedback to the host, there is no way to limit the incoming rate. Even if the routers just dropped the packets, that would still increase CPU and RAM utilization, and at the volume that was happening it would still probably bring traffic throughput to a trickle.
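      The arithmetic behind an unthrottled UDP sender is simple. The 404-byte on-the-wire packet size comes from published Slammer analyses; the 100 Mbit/s link speed is just an example:

      ```python
      # One infected host sending fixed-size UDP datagrams as fast as the
      # link allows, with no TCP-style backoff to slow it down.
      packet_bytes = 404                 # Slammer packet incl. UDP/IP headers
      link_bps = 100_000_000             # assume a 100 Mbit/s uplink

      packets_per_sec = link_bps // (packet_bytes * 8)
      print(packets_per_sec)             # ~30,000 scans per second, per host
      ```

      Multiply that by thousands of infected hosts, all blasting continuously, and it's clear why backbone links choked within minutes.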
  • by redbeard_ak ( 542964 ) <> on Tuesday January 28, 2003 @01:21PM (#5175381) Homepage
    It should read "Slammer Worm Owns Microsoft" not "Slammer Worm Slams Microsofts Own".

    I'm saying that from behind Microsoft's firewall - I should know.

    It sure was a giggle on Monday seeing all the warning letters taped on every door and elevator in the building.

    Most ops stuff seems up now - as up as they ever are ;) Still, there is some reporting I usually provide our team but my data source is still pooched.

    Oh well... I can still browse slashdot.

    I figure this post is blatant karma whoring, but if it helps some geek out there smile...

    **Microsoft Confidential - Do not forward**

    All Computers Running SQL Server 2000 and

    MSDE Required to Load SQL Server 2000 Service Pack 3

    say no more!
  • by satch89450 ( 186046 ) on Tuesday January 28, 2003 @01:29PM (#5175441) Homepage

    Zero defects is not an attainable goal; it's too expensive and no one wants to pay for it.

    This article shows just what happens when you expect zero defects in the infrastructure of a large organization like Microsoft Corporation. It's not going to happen. And before someone says I'm Microsoft-bashing, I will say that this is true for the vast majority of corporations, universities, foundations, and governments. That would include Sun, IBM, Red Hat, even the *BSD folks and LKML participants.

    There is a damn good reason we won't see zero defects: employees are not measured by it. Their survival, pay raises, and promotions are based not on the number of defects they don't have, but on their contribution to the "bottom line." If you preach zero defects as Job One, then prove it by firing the people who generate defects, without exception -- including the CEO, COO, CFO, CIO, and other top brass, when they screw up.

    So now that the myth of zero defects has been exposed for what it is, what do we do about it?

    1. System administrators are going to have to re-think their perimeter access controls. This may require router upgrades to add processing power to support additional filtering.

    2. Sysadmins who have been running "mostly-open" filter configurations may want to consider moving to a "mostly-closed" configuration: deny everything except services that have been cleared for use. Don't allow arbitrary connections. Many unknowing MS SQL servers were protected from participating in this little exercise because the firewall upstream of the desktop system wasn't allowing connections to get through, even if the desktop system had a globally-routed Internet address.

    3. Computer mail-order houses and computer stores should consider carefully whether they should bundle appropriate software firewall products with the computers they sell -- software configured to require the user to say "Yes, I want to make SQL server available for public access!" before ports 1433 and 1434 are opened.

    4. We need to ask the reporters and editors of mainstream publications to be more responsible when reporting problems like Sapphire/SQL. The facts were pretty well known, and available to those who tried hard enough to get them even at the height of the packet storm, so that reporters could make their deadlines and get the facts straight. [Names of the guilty withheld, at least for now -- they know who they are.]

    5. Tier 1 and Tier 2 bandwidth providers need to consider modifications to their Acceptable Use Policies to require some basic filtering of packets in both directions. These AUP changes have been discussed before; perhaps now is the time for them to go into effect:

      • Upstream packet source addresses must be verified at the perimeter such that the packet's return address points to a host in the network, and not to a random IP address or to broadcast addresses
      • Downstream packet destination addresses must be verified at the perimeter such that the packet is directed to a single host in the network, and not to a random IP address or broadcast address (other than multicast addresses, if such are allowed in the network)
      • As one drills down the levels of networks, packet source/destination verification must be done at all levels -- no exceptions (the excuse "It costs too much" doesn't wash when you consider that suitable packet filter technologies are available in both *BSD and Linux flavors, running on hardware that costs less than your standard business power lunch for four)
      • "Small services" (TCP 0:19 and UDP 0:19) must be blocked at the perimeter, both as source and destination ports.
      • A small number of other, specific ports must be blocked at the perimeter, those ports being identified as services that are intranet in nature instead of "global" services. The specific ports to be blocked should be determined co-operatively to avoid denying essential services to customers.
      • Encourage the use of VPNs for interaction between two separated locations needing the above-mentioned intranet services over the Internet.
      • Encourage the use of abuse-prevention methods such as Network Address Translation on all circuits [cable operators take note] to block access to those systems that are NOT intended to be servers.
    6. Update the Best Practices RFCs to incorporate some or all of these suggestions, so that Internet operators around the world can participate in solving the problem.

    (N.B.: I want to point out that many USA-based cable operators are contributing to the problem by disallowing the use of NAT and VPN technologies in their apparent [alleged] quest to limit the broadband "Internet service product" to browsing and downloading files. I believe that such an attitude contributes to the problem, not the solution. I understand well the technical and business motivations for this, but I also believe that there are (U.S.) national security implications against such a policy. THINK!)

    Are any of these ideas new? NO. The only new idea is to have the Lords Of The Internet use their influence over their customers to implement them more widely.

    Good fences make good neighbors. The Internet is a neighborhood.
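    The source-address verification in point 5 boils down to a membership test: does this packet's source address belong to the network it claims to come from? A toy sketch of that check, using the made-up RFC 5737 example prefix 192.0.2.0/24:

    ```python
    # Sketch of perimeter source-address verification: drop any outbound
    # packet whose source address isn't inside the local prefix.
    import ipaddress

    LOCAL_NET = ipaddress.ip_network("192.0.2.0/24")  # example prefix

    def egress_ok(src: str) -> bool:
        """True if the packet's source address belongs to our network."""
        return ipaddress.ip_address(src) in LOCAL_NET

    print(egress_ok("192.0.2.17"))   # True  - legitimate local source
    print(egress_ok("10.9.8.7"))     # False - spoofed, should be dropped
    ```

    Real routers do this in hardware with ACLs or reverse-path checks, but the logic is exactly this simple, which is why "it costs too much" is a weak excuse.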

  • TCO (Score:4, Funny)

    by marko123 ( 131635 ) on Tuesday January 28, 2003 @01:31PM (#5175450) Homepage
    Does the cost of lost GLOBAL productivity (lost internet access in the workplace) and lost commerce (the ATMs going down) of this shizzah get added to the total cost of ownership of MS products?
  • by Wakko Warner ( 324 ) on Tuesday January 28, 2003 @01:36PM (#5175482) Homepage Journal
    ...a lot of unemployed second-rate MS SQL admins should be hitting soon, if management has any sense whatsoever.

    That these morons basically brought the internet to its knees Friday night through gross incompetence should be reason enough to fire every last one of 'em.

    - A.P.
  • cd /raid/8.0/updates

    wget -nd -nH --mirror --no-parent --passive edhat/linux/updates/8.0/en/os/i386/ -o log
    wget -nd -nH --mirror --no-parent --passive edhat/linux/updates/8.0/en/os/i686/ -a log

    saved=`grep saved log | grep -v ".listing"`

    check=`rpm -K /raid/8.0/updates/*.rpm | grep -v "md5 gpg OK"`

    if [ "$saved" ]; then
    mail <<EOMAIL
    New RedHat 8.0 RPMs downloaded onto `hostname`
    Please update them:

    $saved

    If there are any kernel updates, please run lilo before rebooting
    EOMAIL
    fi

    Run this in the night some time.
    When you come in, if you've got an email, run:
    cd /raid/8.0/updates
    rpm --freshen -vah *.i686.rpm
    rpm --freshen -vah *.i386.rpm

    Hey presto. Job done. And if you use Grub, you don't have to bother about running lilo.
  • by Idou ( 572394 ) on Tuesday January 28, 2003 @01:39PM (#5175519) Journal
    If it is not cost-effective for MS, which faces the highest damages from such incidents (think PR), to patch its own software, how can they argue it is cost-effective for ANYONE to ensure that everything gets patched?

    It seems to me that if one were to include the costs of patching, of ensuring everything gets patched, and the expected losses (I assume the probability is inherently high in the non-Unix world) from the inevitable missed patch (or nonexistent/late patch), MS TCO would go through the roof. Then again, maybe the entire concept of TCO doesn't matter when the most significant costs can be hidden from ignorant managers who act as the software-purchasing agents of the company.
  • by wobblie ( 191824 ) on Tuesday January 28, 2003 @01:40PM (#5175523)
    No Linux vendor does anything like this; it's absolute insanity, and it's half the problem with MS admins (not) patching their software - they know better.

    For years I was forced to run an IIS server which was outdated, unpatched, and very vulnerable. I couldn't update it because the service packs would break the software running on it - and the reason was that the service packs, while they fixed the vulnerabilities, also introduced all sorts of new features I did not need or want. So I was reduced to keeping a very watchful eye on it.

    Microsoft's entire software-distribution method is simply broken, and stupid.
  • by spells ( 203251 ) on Tuesday January 28, 2003 @01:51PM (#5175608)
    Although I respect Bruce Schneier (like he cares), I think it's pretty stupid to be quoted saying "This shows that the notion of patching doesn't work" without providing an alternative solution. I would love not to patch my servers, but perfect software just doesn't exist. What options do I have?
  • by mseeger ( 40923 ) on Tuesday January 28, 2003 @02:25PM (#5175863)

    It's easy to blame someone for not having his/her systems patched. But I believe that the average patch level on Windows systems is higher than on Unix systems.

    Most Unix systems (especially servers) just run and don't cause trouble, so nobody thinks of them and patches them. A 1000+ day uptime is something to make a sysadmin proud and a security adviser weep.

    Since many Windows sysadmins have trouble debugging their systems in depth, when problems occur they try applying available patches first (the second action taken, after a reboot). So, since Windows systems cause more trouble than Unix servers, they are better patched. Q.E.D.

    Just kidding, Martin

  • by Anonymous Coward on Tuesday January 28, 2003 @02:39PM (#5175958)
    Jan. 23, 2003

    I'm writing to you about an issue of particular importance to those of us who routinely use computers in our work and personal lives - making computing more secure. Before I share my thoughts about this in more detail, I want to give you some context on why I am sending this email.

    This is one in an occasional series of emails from Microsoft executives about technology and public-policy issues important to computer users, our industry, and anyone who cares about the future of high technology. If you would like to receive these emails in the future, please go to beMe.asp?lcid=1033&id=155 to subscribe. If you don't wish to hear from us again, you need not do anything. We will not send you another executive email unless you choose to subscribe at the link above.


    As we increasingly rely on the Internet to communicate and conduct business, a secure computing platform has never been more important. Along with the vast benefits of increased connectivity, new security risks have emerged on a scale that few in our industry fully anticipated.

    As everyone who uses a computer knows, the confidentiality, integrity and availability of data and systems can be compromised in many ways, from hacker attacks to Internet-based worms. These security breaches carry significant costs. Although many companies do not detect or report attacks, the most recent computer crime and security survey performed by the Computer Security Institute and the Federal Bureau of Investigation totaled more than $455 million in quantified financial losses in the United States alone in 2001. Of those surveyed, 74 percent cited their Internet connection as a key point of attack.

    As a leader in the computing industry, Microsoft has a responsibility to help its customers address these concerns, so they no longer have to choose between security and usability. This is a long-term effort. As attacks on computer networks become more sophisticated, we must innovate in many areas - such as digital rights management, public key cryptology, multi-site authentication, and enhanced network and PC protection - to enable people to manage their information securely.

    A year ago, I challenged Microsoft's 50,000 employees to build a Trustworthy Computing environment for customers so that computing is as reliable as the electricity that powers our homes and businesses today. To meet Microsoft's goal of creating products that combine the best of innovation and predictability, we are focusing on four specific areas: security, privacy, reliability and business integrity. Over the past year, we have made significant progress on all these fronts. In particular, I'd like to report on the advances we've made and the challenges we still face in the security area.

    In order to realize the full potential of computers to advance e-commerce, enable new kinds of communication and enhance productivity, security will need to improve dramatically. Based on discussions with customers and our own internal reviews, it was clear that we needed to create a framework that would support the kind of innovation, state-of-the-art processes and cultural shifts necessary to make a fundamental advance in the security of our software products. In the past year we have created new product-design methodologies, coding practices, test procedures, security-incident handling and product-support processes that meet the objectives of this security framework:

    SECURE BY DESIGN: In early 2002 we took the unprecedented step of stopping the development work of 8,500 Windows engineers while the company conducted 10 weeks of intensive security training and analyzed the Windows code base. Although engineers receive formal academic training on developing security features, there is very little training available on how to write secure code. Every Windows engineer, plus several thousand engineers in other parts of the company, was given special training covering secure programming, testing techniques and threat modeling. The threat modeling process, rare in the software world, taught program managers, architects and testers to think like attackers. And indeed, fully one-half of all bugs identified during the Windows security push were found during threat analysis.

    We have also made important breakthroughs in minimizing the amount of security-related code in products that is vulnerable to attack, and in our ability to test large pieces of code more efficiently. Because testing is both time-consuming and costly, it's important that defects are detected as early as possible in the development cycle. To optimize which tests are run at what points in the design cycle, Microsoft has developed a system that prioritizes the application's given set of tests, based on what changes have been made to the program. The system is able to operate on large programs built from millions of lines of source code, and produce results within a few minutes, when previously it took hours or days.

    The scope of our security reviews represents an unprecedented level of effort for software manufacturers, and it's begun to pay off as vulnerabilities are eliminated through offerings like Windows XP Service Pack 1. We also put Visual Studio .NET through an incredibly vigorous design review, threat modeling and security push, and in the coming months we will be releasing other major products that have gone through our Trustworthy Computing security review cycle: Windows Server 2003, the next versions of SQL and Exchange Servers, and Office 11.

    Looking ahead, we are working on a new hardware/software architecture for the Windows PC platform (initially codenamed "Palladium"), which will significantly enhance the integrity, privacy and data security of computer systems by eliminating many "weak links." For example, today anyone can look into a graphics card's memory, which is obviously not good if the memory contains a user's banking transactions or other sensitive information. Part of the focus of this initiative is to provide "curtained" memory - pages of memory that are walled off from other applications and even the operating system to prevent surreptitious observation - as well as the ability to provide security along the path from keyboard to monitor. This technology will also attest to the reliability of data, and provide sealed storage, so valuable information can only be accessed by trusted software components.

    SECURE BY DEFAULT: In the past, a product feature was typically enabled by default if there was any possibility that a customer might want to use it. Today, we are closely examining when to pre-configure products as "locked down," meaning that the most secure options are the default settings. For example, in the forthcoming Windows Server 2003, services such as Content Indexing Service, Messenger and NetDDE will be turned off by default. In Office XP, macros are turned off by default. VBScript is turned off by default in Office XP SP1. And Internet Explorer frame display is disabled in the "restricted sites" zone, which reduces the opportunity for the frames mechanism in HTML email to be used as an attack vector.

    SECURE IN DEPLOYMENT: To help customers deploy and maintain our products securely, we have updated and significantly expanded our security tools in the past year. Consumers and small businesses can stay up to date on security patches by using the automatic update feature of Windows Update. Last year, we introduced Software Update Services (SUS) and the Systems Management Server 2.0 SUS Feature Pack to improve patch management for larger enterprises. We released Microsoft Baseline Security Analyzer, which scans for missing security updates, analyzes configurations for poor or weak security settings, and advises users how to fix the issues found. We have also introduced prescriptive documents for Windows 2000 and Exchange to help ensure that customers can configure and deploy these products more securely. In addition, we are working with a number of major customers to implement smart cards as a way of minimizing the weak link associated with passwords. Microsoft itself now requires smart cards for remote access by employees, and over time we expect that most businesses will go to smart card ID systems.

    COMMUNICATIONS: To keep customers better informed about security issues, we made several important changes over the past year. Feedback from customers indicated that our security bulletins, though useful to IT professionals, were too detailed for the typical consumer. Customers also told us they wanted more differentiation on security fixes, so they could quickly decide which ones to prioritize. In response, Microsoft worked with industry professionals to develop a new security bulletin severity rating system, and introduced consumer bulletins. We are also developing an email notification system that will enable customers to subscribe to the particular security bulletins they want.


    In the past decade, computers and networks have become an integral part of business processes and everyday life. In the Digital Decade we're now embarking on, billions of intelligent devices will be connected to the Internet. This fundamental change will bring great opportunities as well as new, constantly evolving security challenges.

    While we've accomplished a lot in the past year, there is still more to do - at Microsoft and across our industry. We invested more than $200 million in 2002 improving Windows security, and significantly more on our security work with other products. In the coming year, we will continue to work with customers, government officials and industry partners to deliver more secure products, and to share our findings and knowledge about security. In the meantime, there are three things customers can do to help: 1) stay up to date on patches, 2) use anti-virus software and keep it up to date with the latest signatures, and 3) use firewalls.

    There's much more I'd like to share with you about our security initiatives. If you would like to dig deeper, information and links are available at 3security2.asp to help you make your computer systems more secure.

    Bill Gates

    For information about Microsoft's privacy policies, please go to:
  • Patching.... (Score:5, Interesting)

    by Tsali ( 594389 ) on Tuesday January 28, 2003 @03:15PM (#5176193)
    Let's take it to a new level...

    If a major auto manufacturer shipped a product line that lost its brakes whenever the outside temperature hit -10 degrees and the car was on an interstate, they would be liable.

    If 90% of the population used that product line and people were getting hijacked by their own transportation, there would be hell to pay.

    Now suppose that they say, "Hey! We released a recall two months ago? Didn't you take your car in to fix it? We made a post to our service centers, but you never saw it at the place you take your car? If you were running our brake-warming device (aka anti-virus software), you wouldn't have had this problem... if you were on a local road instead of an interstate, you never would have had this happen to you. Please buy more of our products. "

    I know it's outlandish, but there should be some responsibility here on the part of the vendor. There is economic damage from not patching stuff, but if the patch usually breaks your car, who's left holding the bag?

    Unless you are a mechanic and own a kit-car (aka Linux), you're tied in. That's not good.

  • by ceeam ( 39911 ) on Tuesday January 28, 2003 @04:45PM (#5176766)
    ... is that oopses like this one have exactly zero impact on their market share, on companies' acceptance of MS "solutions", etc... This is definitely not a free market as we've known it for ages.
  • by smash ( 1351 ) on Tuesday January 28, 2003 @08:48PM (#5178327) Homepage Journal
    To those going on about patching, etc... that whole way of thinking is completely flawed.

    You have to assume there *are* holes in application software such as SQL server due to its complexity.

    Taking a reactive approach and simply installing hotfixes as they're available simply will not work - patches are often not available until days, weeks or months after the vulnerability is known. Even if it hasn't been fully disclosed, the blackhats may well know about it, or be prompted to scrutinize that particular product more closely and find it before the full announcement.

    The correct way to deploy such products is to design your network with this in mind, and firewall them off from the rest of the world.

    That does NOT give you the security to not worry about patching (single layer security is bad) - keep your servers patched - but it does buy you a little time, and is an extra layer of defense in case there is a server that doesn't patch properly for some reason (file couldn't be overwritten for example), or is accidentally forgotten about.

    I can think of *no reason* why an SQL server must be accessible to the world. You have a webserver that uses it as a back-end? Give the public access to port 80/443 of that ONLY, and disallow connections from anywhere but localhost to the SQL software. Even better (and the approach I always take - I don't trust Win-X to be visible to the internet, period), install it on a separate physical machine, and firewall that machine more tightly (ie, allow SQL connections ONLY from machines that require them, such as your webserver).

    If you have client machines that need to access the database from the internet, that's what VPNs are for.
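    In case it helps anyone, here's a rough sketch of that layout as iptables rules on a Linux box routing in front of the servers. The interface-free rules, the 192.168.0.10 (webserver) and 192.168.0.20 (SQL box) addresses are made up for illustration; adjust to your own network:

    ```shell
    #!/bin/sh
    # Hypothetical addresses: web server at 192.168.0.10, SQL box at 192.168.0.20.
    # Default-deny forwarding, then open only what each machine actually needs.

    iptables -P FORWARD DROP

    # The public may reach the web server on 80/443 only.
    iptables -A FORWARD -d 192.168.0.10 -p tcp -m multiport --dports 80,443 -j ACCEPT

    # Only the web server may talk to SQL Server: TCP 1433 for queries,
    # UDP 1434 for the resolution service (the port Slammer hit).
    iptables -A FORWARD -s 192.168.0.10 -d 192.168.0.20 -p tcp --dport 1433 -j ACCEPT
    iptables -A FORWARD -s 192.168.0.10 -d 192.168.0.20 -p udp --dport 1434 -j ACCEPT

    # Let return traffic for those connections back through.
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    ```

    With rules like these, a vulnerable SQL box simply isn't reachable from the internet - exactly the extra layer the parent is talking about, whether or not the patch went on cleanly.
    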

    Since I've had enough sense to firewall my servers correctly (yes, I was a clueless idiot before as well ;), I have not had a single security breach.

    I'm not saying that I'm definitely immune to a concentrated attack, but you can definitely stack the odds in your favour.

    Yes, it is an investment in time, and probably money - but if you want a secure network, it's simply the price you have to pay these days... how much is your data/uptime worth?


If you suspect a man, don't employ him.