The Internet

Microsoft Worms and Global Routing Instability 215

James Cowie writes: "Fresh analysis here indicates that worm propagation periods correlate very strongly with global BGP routing instability, as measured by sustained exponential increases in the number of prefix announcements and withdrawals seen in BGP message traces."
  • Story misleading? (Score:3, Informative)

    by baptiste ( 256004 ) <.su.etsitpab. .ta. .ekim.> on Friday September 28, 2001 @09:08AM (#2363171) Homepage Journal
    The story seems to imply that the worms spread faster because of BGP instability, when the paper seems to be saying the BGP instability is being CAUSED by the worms.
    In this online note, we summarize our preliminary analysis of the surprisingly strong impact of the Internet propagation of Microsoft worms (such as Code Red and Nimda) on the stability of the global routing system.
    • Re:Story misleading? (Score:3, Informative)

      by DCheesi ( 150068 )
      Err, no, you're just reading too much into it. The story only mentions a correlation between the two phenomena; there's no implication of causality there. In fact, my impression upon reading it was that the worms cause the instability --probably because that's the only scenario that really makes sense to me.
    • by sammy baby ( 14909 ) on Friday September 28, 2001 @09:22AM (#2363213) Journal
      What do you mean, "seems to imply?" It states it flat out:

      Instead, we have documented a compelling connection between global routing instability and the propagation phase of Microsoft worms such as Code Red and Nimda... what were thought to be purely traffic-based denials of service in fact are seen to generate widespread end-to-end routing instability...

      If you're trying to suggest that the story submission is unfair in alleging that Microsoft worms are causing this instability... well, that's exactly what the paper is saying, eh?

      • Re:Story misleading? (Score:2, Informative)

        by leto ( 8058 )
        They say "routing instability" not "BGP instability".

        However, further down in the article they mention that people might need to give BGP packets some preference so that they don't get dropped when something like a Microsoft virus sweeps through your routers, causing BGP reconnects (and thus BGP instability).

      • Doh. I completely misread Baptiste's original post. I should never reply to anything on /. prior to my first daily dose of caffeine. My apologies.
  • by minus23 ( 250338 ) on Friday September 28, 2001 @09:09AM (#2363181)
    Net instability can also be predicted if Slashdot links to a .... well anything.
  • by Anonymous Coward
    sad, but true:

    Global Routing is dying.
  • Microsoft IIS Worms
    Are the worms the cause or the effect?
    Is IIS the cause or the effect?

    If we shut down one of them, does the net become stable?
    Is it easier to shut down worms than IIS?

    hmmm... it's a hard decision. Has anyone scanned the Internet for viruses?


    "Nobody is real - Powerman 5000"
  • by disc-chord ( 232893 ) on Friday September 28, 2001 @09:18AM (#2363205)
    A very fascinating read, with lots of graphs that really drive the message home. But what is the point? Anyone with an internet connection will no doubt have experienced the instability.

    I've personally had a particularly poor router losing my packets for the last week, and have been trace-routing it from all over the country to triangulate the problem. Doing a tracert from Maine, California and Texas seems to provide a reasonable picture of what's going on with a specific router by triangulating in on the offending one... so I'm a bit unclear on why this study was called for, unless it's just to point fingers at Microsoft...

    • by iso ( 87585 ) <(ofni.orezpraw) (ta) (hsals)> on Friday September 28, 2001 @10:46AM (#2363548) Homepage
      But what is the point? Anyone with an internet connection will have no doubt experienced the instability. ... so I'm a bit unclear on why this study was called for

      It's an analytical tool called a scientific proof. Believe it or not, anecdotal evidence (like you suggested) is not enough to prove your intuition that IIS worms influence global routing stability. You need scientific evidence to prove a hypothesis such as this.

      - j
    • Well I think the point to the researchers is just to find out what was causing what they saw. This is what researchers do :) This was not about one router, it was about global routing.

      To me, the point of research like this is to point fingers at Microsoft. Microsoft can claim not to have a problem with security all they want. But if it is shown that security vulnerabilities in their system are causing instability in global internet routing, that could provide a way to show liability. Because dammit no software company should be doing anything that could degrade global internet routing.

      Currently it's hard to argue in court that a reasonable programmer might not leave some of those vulnerabilities. But if those vulnerabilities were responsible for crippling the net? I think any court would hold that any reasonable programmer would make sure their program can't cripple the internet. Meaning the billions of dollars it costs everyone attached to the net when these viruses spread, not just MS users, could be recovered from MS and give them a real impetus to build security into their systems, which is currently missing.

      Many of you hold spammers to be responsible when they use your network resources without your permission. Microsoft is doing the same thing by leaving these holes. Why haven't the limited patches they have been pushed by critical update? Why has Microsoft come out in the press to say that millions are unnecessarily downloading these patches, in an apparent attempt to dissuade people from downloading them? In the same week that critical update kept insisting I download patches for Win2k that are only relevant to servers, when I only use my box as a workstation?
    • by tjgoodwin ( 133622 ) on Friday September 28, 2001 @11:28AM (#2363795) Homepage
      But what is the point? Anyone with an internet connection will have no doubt experienced the instability.

      The point is clearly stated in the article: Contrary to conventional wisdom, what were thought to be purely traffic-based denials of service in fact are seen to generate widespread end-to-end routing instability originating at the Internet's edge.

      Maybe the "highway" analogy works here. Everybody knows that the Internet goes all flaky during worm propagation, but it's been assumed that this is simply due to too much traffic. This report is saying that it's more fundamental than that: during worm propagation, for as yet unknown reasons, many of the direction signs disappear at the intersections! Not only are the roads full, but many of the cars can't find where they're meant to be going...

  • by ch-chuck ( 9622 ) on Friday September 28, 2001 @09:18AM (#2363206) Homepage
    of contributing to global worming. They need to cut back their toxic emissions immediately before it's too late to save the planet.

    • by sien ( 35268 ) on Friday September 28, 2001 @09:29AM (#2363229) Homepage
      Ha. Someone mod this up as funny please!
      But seriously, if a company makes a product that costs large numbers of other companies money, they get fined. If a company's negligence causes a public resource to be degraded, they get sued. Has anyone heard anything about some of the major service providers or any of the major users launching a class action against MSFT? It seems that they would have at least a start for a case here.
      • by Greyfox ( 87712 ) on Friday September 28, 2001 @10:46AM (#2363547) Homepage Journal
        The patches to prevent these worms were out for ages. It's just that system administrators and others never installed them. So Microsoft has quite an out there, and for some reason the businesses that whine about the costs of these worms never seem to be looking to their own admin staff and asking them why the hell those patches were never installed.
        • Today windowsupdate told me to install a patch to resolve the "Malformed Data Frame Sent to a Windows 2000 Computer Through an Infrared Port Causes Stop Error". Great. Of course my computer doesn't HAVE an IR port. But MS is pushing this patch, and NOT pushing the limited patches they have for the IIS vulnerability that Code Red and others exploited.

          Explain please how that makes sense?
          • Criticalupdate is not for server admins. Hotfixes are for server admins.

            If you're a server admin and you get your security updates from criticalupdate, your intranet is in big trouble.
            • by MemeRot ( 80975 ) on Friday September 28, 2001 @12:46PM (#2364372) Homepage Journal
              Very shortly after the beginning of Code Red this ceased to be about server admins. The boxes being infected by these viruses now are home or non-power business users who have IIS enabled by default. Why by default? Because MS doesn't care about security. Why not throw in features most users won't need by default? What's the harm? Oh, we're destroying the stability of global routing? Oopsie.

              The majority of the IP addresses spreading these viruses show the default homepage if you go to them. Because the home or casual business users running these boxes DON'T KNOW what IIS is, or that they have it enabled, they DON'T KNOW that they're vulnerable or infected. These are the people that criticalupdate would reach. These are the people that need the patches. By NOT pushing this patch, MS is leaving the situation as it is, and it will never get better. To repeat - security conscious server admins are having their network hammered by this virus not because other server admins are lazy - but because many non server admins have operating systems with IIS enabled by default, and MS is making no attempt at all to reach those people despite the fact that the situation has not improved.
        • Our web site has very low traffic (our market is very restricted).

          In the last few months I got more requests from IIS worms than requests for my home page during the past year.

          "Oh, I'm sorry, all TV sets we've produced were found to generate RF interference and degrade the signal on all the TV network. We made a circuit patch available on all our distributors. If you bought one of our TVs, please come get one and install it yourself. Now you are the one to blame."

          yk /var/log/httpd # head -1 access_log
 - - [22/Sep/2000:07:04:47 -0300] "GET /robots.txt HTTP/1.0"
          yk /var/log/httpd # tail -1 access_log
 - - [28/Sep/2001:12:19:38 -0300] "HEAD / HTTP/1.1"
          yk /var/log/httpd # grep "GET / " access_log | wc -l
          yk /var/log/httpd # egrep "(Jul|Aug|Sep)/2001.+GET / " access_log | wc -l
          yk /var/log/httpd # egrep "(Jul|Aug|Sep)/2001.+GET /default.ida" access_log | wc -l
          yk /var/log/httpd # egrep "(Jul|Aug|Sep)/2001.+GET /scripts" access_log | wc -l
          (note: no, I don't have a /scripts directory, although I sometimes have fun with a "default.ida" perl script)
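[Editor's sketch] The log triage above can be reproduced in Python; the probe patterns below are the well-known Code Red (`/default.ida`) and Nimda (`/scripts`, `cmd.exe`, `root.exe`) request signatures, and the log filename is whatever your server writes:

```python
import re
from collections import Counter

# Well-known probe signatures: Code Red requests /default.ida; Nimda probes
# /scripts and /_vti_bin paths and the cmd.exe / root.exe backdoors.
WORM_PATTERNS = {
    "code-red": re.compile(r"GET /default\.ida"),
    "nimda": re.compile(r"GET /(scripts|_vti_bin)|cmd\.exe|root\.exe"),
}

def classify(logfile):
    """Count worm probes vs. other requests in an Apache access log."""
    counts = Counter()
    with open(logfile) as f:
        for line in f:
            for name, pattern in WORM_PATTERNS.items():
                if pattern.search(line):
                    counts[name] += 1
                    break
            else:
                counts["other"] += 1
    return counts
```

Run against a low-traffic site's access_log, this gives the same kind of worm-vs-legitimate breakdown as the greps above.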
        • by anomaly ( 15035 ) <.tom.cooper3. .at.> on Friday September 28, 2001 @12:57PM (#2364389)
          It's easy to say this, but speaking as one who works for an enterprise, it's not easy to do.

          We've got tens of thousands of PCs running hundreds of applications - some internally developed, some externally developed.

          For MS security patches (or anything else) that we release into "production" we need to engineer the build to make sure it works with our OS build, then test against Tier 1 applications.

          Once that is complete, the development groups need to sign off saying that their application runs with that code.

          Specifically in terms of IE 5.5 SP2, Quicktime is no longer compatible. Sure, there's an update to Quicktime, but my point is this - how many other things stop working? Which of our internal apps are dependent on IE or subcomponents that no longer work with IE5.5 SP2?

          We don't know. Frankly, even if we thought that we knew, we couldn't be sure outside of testing.

          IE has seen 7 security patches in the last 8 months. Particularly in this economy, we can't afford the testing staff to nail each of these as they are released.

          Of course we're at risk. Now is the time to question our continued use of MS products. I'm doing that.

  • The worms just produce a kind of DDoS, and routers are expected to take a hit. If there are a lot of IRC bots attacking randomly, you'll see the same.
  • A study of a fully human-created phenomenon, and yet it's so complicated it's hard to understand.

    Who said AI is not for tomorrow? The beast is already among us.
  • by osolemirnix ( 107029 ) on Friday September 28, 2001 @09:25AM (#2363225) Homepage Journal
    I would assume that this effect is in part due to the nature of port-scanning a wide range of IP addresses with a small data packet. This kind of traffic is different from "regular" traffic, where a lot more data gets sent along the same route.

    Consequently, since routes time out after a while (and get cached), the IP address sweeping increases the necessity to figure out more separate routes than usual (or FIFO caches are too small, so routes get purged from the cache faster?).

    This would logically increase the load on route discovery protocols such as BGP. A whole new class of DoS attacks...
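[Editor's sketch] The cache-thrashing half of this hypothesis can be modeled with a toy FIFO route cache (illustrative only: the reply below notes that BGP itself has no timeout/refresh cycle, but per-destination route caches on routers do behave like this; all sizes and traffic mixes here are made up):

```python
import random
from collections import OrderedDict

class FIFORouteCache:
    """Toy fixed-size route cache with FIFO eviction."""

    def __init__(self, size):
        self.size = size
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, dest):
        if dest in self.cache:
            self.hits += 1
        else:
            self.misses += 1  # would force a full routing-table lookup
            if len(self.cache) >= self.size:
                self.cache.popitem(last=False)  # evict the oldest entry
            self.cache[dest] = True

def miss_rate(destinations, cache_size=1000):
    cache = FIFORouteCache(cache_size)
    for dest in destinations:
        cache.lookup(dest)
    return cache.misses / (cache.hits + cache.misses)

random.seed(1)
# "Regular" traffic: 100k lookups concentrated on 500 popular destinations.
normal = [random.randrange(500) for _ in range(100_000)]
# Worm sweep: 100k lookups spread across a million random addresses.
sweep = [random.randrange(1_000_000) for _ in range(100_000)]
# The sweep thrashes the cache; normal traffic barely misses after warm-up.
assert miss_rate(normal) < 0.05 < miss_rate(sweep)
```

The point of the toy model: random-destination sweeps defeat the locality that route caches depend on, so nearly every packet pays the full lookup cost.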

    • by mdouglas ( 139166 ) on Friday September 28, 2001 @10:10AM (#2363375) Homepage
      first off, i'd just like to say, i love it when a hardcore networking article gets posted to slashdot, the number of responses is so much lower due to the userbase having no experience with the subject; and mindless pontificating and chest beating (as in anti microsoft/pro linux articles) doesn't cut it with this subject matter.

      as an aside, i don't mean the above preamble as a negative statement about the specific poster i'm responding to.

      "Consequently, since routes time out after a while
      ...This would logically increase the load on route discovery protocols such as BGP."

      well...not exactly. when 2 routers are set up in BGP partnership they exchange an initial set of routes which are statically set by the AS administrator; there's no dynamic discovery process. those routes are only changed under a few specific conditions: explicit changes announced by the BGP partner, or the loss of connectivity to the partner (too many missed hello packets). BGP route exchange is not based on some kind of dynamic route timeout/refresh algorithm, as that would be horrifyingly inefficient.
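[Editor's sketch] The "too many missed hello packets" condition is the BGP hold timer. The 90-second hold time (with keepalives every 30 seconds) used below is the commonly cited suggested default, not a figure from the paper:

```python
# BGP peers tear down a session when no keepalive arrives within the hold
# time; the routes learned over that session are then withdrawn.
HOLD_TIME = 90  # seconds (commonly cited suggested default)

def session_up(keepalive_times, now):
    """The session is up at `now` iff a keepalive arrived within HOLD_TIME."""
    return any(now - t <= HOLD_TIME for t in keepalive_times if t <= now)

# Keepalives every 30s until t=120, then congestion starts eating them:
kts = [0, 30, 60, 90, 120]
print(session_up(kts, 150))  # True  -> session (and its routes) stay up
print(session_up(kts, 211))  # False -> hold timer expired, routes withdrawn
```

This is the mechanism by which pure traffic load turns into routing churn: congested links drop keepalives, hold timers expire, and routes get withdrawn and later re-announced.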

      a few words on how routing and route caching work (this is assumed to be on a defaultless internet backbone router):

      a packet enters the router destined for some ip address, a lookup against the routing table is done, and the appropriate outbound interface is selected (this step is known as path determination); the packet is then sent to the appropriate outbound interface, re-framed, and sent out to the next hop (this step is known as switching). route caching associates a destination ip address with a next hop interface, thus bypassing the redundant route table lookup. a definite gain in efficiency; cisco makes a number of advanced caching/switching engines that are used in their high end core routers.

      to summarize/explain the BGP/worm paper: worms generate excessive traffic; the traffic overwhelms some routers and wan links; thus, BGP hello packets get lost or never sent depending upon traffic or router load; consequently the BGP routes are being announced/withdrawn at a high rate (this is known as route flapping). this is bad: having a route fail is not a problem, as long as it stays failed. rapidly changing states creates extra load on the router. route dampening policies help, but with a worm creating these conditions everywhere at once the cumulative effect is instability.
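[Editor's sketch] The route-flap dampening mentioned here can be simulated: each flap adds a penalty that decays exponentially, and the route is suppressed between the suppress and reuse thresholds. The numbers below (1000 per flap, suppress at 2000, reuse at 750, 15-minute half-life) are commonly quoted vendor-style defaults, not values from the paper:

```python
PENALTY_PER_FLAP = 1000
SUPPRESS_LIMIT = 2000
REUSE_LIMIT = 750
HALF_LIFE = 900  # seconds

def decay(penalty, dt):
    """Exponentially decay a flap penalty over dt seconds."""
    return penalty * 0.5 ** (dt / HALF_LIFE)

def simulate(flap_times, horizon, step=60):
    """Return (time, penalty, suppressed) samples for a series of flaps."""
    penalty, suppressed, samples = 0.0, False, []
    flaps = sorted(flap_times)
    for t in range(0, horizon, step):
        penalty = decay(penalty, step)
        while flaps and flaps[0] < t + step:
            flaps.pop(0)
            penalty += PENALTY_PER_FLAP
        if penalty >= SUPPRESS_LIMIT:
            suppressed = True       # route is dampened: ignored by peers
        elif penalty <= REUSE_LIMIT:
            suppressed = False      # penalty has decayed: route usable again
        samples.append((t, penalty, suppressed))
    return samples

# Three flaps in quick succession push the route over the suppress limit;
# it stays dampened until the penalty decays below the reuse limit.
history = simulate([0, 120, 240], horizon=3600)
```

With a worm flapping routes everywhere at once, many routes sit in the suppressed state simultaneously, which is exactly the instability the paper measures.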

      check these sites out to learn networking : /i to_doc/index.htm

      anyone who writes a wise ass follow up to this had better include a CCIE number.
      • Looks fine, Bassam would be proud :-)

      • These same high-end routers often have traffic shaping/prioritization features. You'd think that they could be configured so that the routing-protocol packets have a very high priority so that they're among the last to be dropped even at high load. If not, someone screwed up.

        • > These same high-end routers often have traffic shaping/prioritization features. You'd think that they could be configured so that the routing-protocol packets have a very high priority so that they're among the last to be dropped even at high load.

          Not necessarily. In a lot of cases, mostly with multiple exit routers, it's more desirable for a hosed router to withdraw its own route, presumably because you have another un-hosed router which can pick up the slack. In most cases, withdrawing a route is a lot better than advertising a route that doesn't work.
          • I think you missed the point of what I was saying. The problem that the original article talked about was BGP traffic getting dropped due to load. If that's happening, you can't add routes, you can't modify routes, you can't withdraw routes. What I was talking about was using existing facilities that allow you to prioritize traffic by type to ensure that the BGP packets get through even if nothing else does. Once you've done that, you can manipulate routes however you want to adapt to conditions.

            What's happening now is like allowing emergency vehicles to get stranded in traffic because they don't have lights and sirens. I say give them lights and sirens, let them zip past the regular traffic so they can do something about the conditions that led to the traffic jam.
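[Editor's sketch] The "lights and sirens" idea can be modeled as a queue that never tail-drops control-plane packets while data packets are still evictable. This is a toy model of the policy, not router code:

```python
from collections import deque

class PriorityDropQueue:
    """Drop-tail queue that evicts a data packet to admit control traffic."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()
        self.dropped = 0

    def enqueue(self, payload, is_control=False):
        if len(self.q) < self.capacity:
            self.q.append((payload, is_control))
            return True
        if is_control:
            # Queue is full: evict the newest data packet so the BGP
            # keepalive/update still gets through ("lights and sirens").
            for i in range(len(self.q) - 1, -1, -1):
                if not self.q[i][1]:
                    del self.q[i]
                    self.dropped += 1
                    self.q.append((payload, is_control))
                    return True
        self.dropped += 1
        return False
```

Under saturation this queue tail-drops data packets but still admits a BGP keepalive, so the session stays up and routes can be manipulated deliberately instead of flapping.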

            • hmm ok, not necessarily a bad idea, though i don't know what advantage it would gain. BGP is a fairly static protocol; it doesn't adapt to changes very well (obviously, since it has to propagate around the world whenever something changes). I guess it could work, though personally i wouldn't try it, due to all the potential bad things that could happen; doing this might just keep you dampened longer.

              Ciscos have the ability to traffic shape, but that does exactly that: shape traffic. Most of the problems that i'm seeing because of codered/nimda/etc aren't traffic saturation, but cpu overloads due to excessive arp requests. So what we're probably actually looking for is some sort of cpu prioritization by process, instead of traffic shaping based on routing/routed protocol.

            • I think you missed the point of what I was saying. The problem that the original article talked about was BGP traffic getting dropped due to load. If that's happening, you can't add routes, you can't modify routes, you can't withdraw routes.

              Er, um, NO.

              BGP is designed for multi-pathed networks -- You have to have at least two paths into your network to be allowed to use bgp. This also means (usually) that you have at least two routers.

              If your router is so saturated that it's dropping BGP packets, this means that it's also dropping other packets. This is considered bad. Under normal circumstances, 'flapping' your route for a short period (the document indicates that BGP has a 30 second minimum) will cause some of those packets to take the 'back' route, and will (hopefully) cause enough of a strain relief on the overloaded router for it to catch up to the (normally transient) overload.

              The result of these worm attacks is that this presumption doesn't hold too well. Everybody, everywhere (more or less) is experiencing overload. Quite often the traffic is internally generated, so it's quite possible that many/all of your BGP routers/routes are at or near overload. Under these conditions, flapping one router may cause your back path to overload and, in turn, flap too.

              Giving a higher than normal priority to BGP packets might increase the survivability of the network under a virulent worm attack, but it would also break the inherent load-limiting effect of flapping, and generally break the network worse under normal overload conditions. Given how uncommon these worm attacks have been (so far), it's probably better to keep the flap effect in place.

              The article doesn't describe the flapping effect as bad. It simply uses logs of this well known and (I believe) normally beneficial effect as a way of measuring what's going on, and determining why it's happening.

              As was said in the article, some people originally thought that the outages were delayed effects of major (localized) traumas to the net. That this isn't the case actually indicates that BGP is working pretty well for the normal case.

              It would be nice to find a solution that can help the network to survive another worm-initiated overload, but if it's at the cost of more general stability of the network then I doubt that it would be worth it.

              Putting enough smarts into the protocol to realize when a flap-dance is taking place because of worm-type general network overloads would add more CPU load to the protocol. This might cause more cpu-overload problems, over time, than it would solve. Another solution might be to have meta-routing machines that watch the logs of BGP packets, and initiate modifications to the BGP protocol parameters to handle the change. I don't know, for sure, how much work that would be, and if it could be done within the current confines of BGP. If it requires modifications to BGP, then it could be a long time in the pipe.

              • it's also dropping other packets. This is considered bad.

                It's unfortunate that your talent for stating the obvious is not matched by your ability to understand the less obvious.

                Under normal circumstances, 'flapping' your route for a short period (the document indicates that BGP has a 30 second minimum) will cause some of those packets to take the 'back' route, and will (hopefully) cause enough of a strain relief on the overloaded router for it to catch up to the (normally transient) overload.

                Flapping is undesirable. Period. Any routing protocol that didn't support load balancing across routes, without explicit route changes to flap back and forth, would be laughed out of the standards bodies. Fortunately, BGP is not in fact so poorly designed as you seem to think.

                Giving a higher than normal priority to BGP packets...would also break the inherent load-limiting effect of flapping, and generally break the network worse under normal overload conditions.

                Now you're just totally talking out your ass. Flapping is not an intentional method of limiting load; it's a pathological behavior that routing protocols including BGP try to avoid. "Normal overload" is of course an oxymoron, and even in more common (but still abnormal) overload conditions there's no reason whatsoever to suppose that the incredibly minimal CPU overhead associated with giving BGP a higher priority would have the effect you suggest.

                I just don't know where you get that kind of crap from. That kind of buzzword-laden but unconnected-to-reality BS might have dazzled some fresh-out MBAs back at the height of dot-com mania, but don't expect anyone with even a minimal amount of technical knowledge to be fooled.

                P.S. Either your web site is down, or your profile contains a broken link. Nice going either way.

            • Agreed, see 894 for some more info on this, in response to a post on out of band management networks.
      • I'm just a meaningless CCNA, but I'm writing to say - Good article! +1, Informative.

        Thanks mdouglas.

        -Pat, CCNA

      • Very good explanation, but there's one pseudo-misunderstanding that a lot of people didn't pick up on. Routers can normally handle a lot of traffic (well, good ones can), but are still susceptible to cpu overload due to the massive ip scanning that these worms do, which overloads the arp subsystem of the router. arp is mainly to blame, not necessarily increased ip traffic.

        Assuming that the router has an interface with a larger than /30 subnet, the router has to do an arp request for every ip on that subnet during a scan, and if enough of these ips just don't exist, then it has to wait for a massive number of timeouts, then re-request, etc. Endlessly.
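[Editor's sketch] A back-of-envelope model of that ARP load; the retry count and live-host counts below are made-up illustrative numbers:

```python
# A router fronting a sparsely populated subnet must ARP for every scanned
# address with no live host, then retry after each timeout before giving up.

def arp_requests(host_bits, live_hosts, retries=3):
    """Estimate ARP requests triggered by a worm sweeping an attached subnet."""
    usable = 2 ** host_bits - 2         # usable addresses in the subnet
    dead = usable - live_hosts          # addresses that never answer
    return live_hosts + dead * retries  # 1 per live host, `retries` per dead one

# A /24 with 20 live hosts: 20 answered + 234 dead * 3 retries = 722 requests.
print(arp_requests(8, 20))    # 722
# The same sweep against a /16 with 200 live hosts is ~270x worse:
print(arp_requests(16, 200))  # 196202
```

The asymmetry is the point: the emptier the address space being swept, the more of the work is pure timeout-and-retry, all of it burning router CPU.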

        While you suggest that saturated WAN links could be the problem (and they very well could be, given enough infected machines and a small enough link), the data i have indicates that most, if not all, of the problems within our organization are because of the excessive arp requests. A router at one of our pops doesn't run bgp, and our traffic data shows it had plenty of bandwidth, but its cpu usage was at 100% for 3 hours during the first nimda attack. We see similar cpu increases on other CPE equipment with no dynamic routing or any significant increase in traffic.

        (ccie in progress ;)
  • by Uttles ( 324447 ) <<moc.liamg> <ta> <selttu>> on Friday September 28, 2001 @09:29AM (#2363231) Homepage Journal
    OK, everyone knows that word association is a powerful marketing tool. Example: Microsoft Office. When you say "office suite of programs" to the average person, they automatically think Microsoft Office. Well this article sure gives us a great one:

    In this online note, we summarize our preliminary analysis of the surprisingly strong impact of the Internet propagation of Microsoft worms (such as Code Red and Nimda) on the stability of the global routing system.

    Look on AP, Yahoo, MSNBC, CNN, and you always see "the Nimda virus" or "the Code Red virus," but I prefer the way the article said it. So from now on, in your conversations with others, refer to each virus in this category as a "Microsoft virus," and hopefully by word-of-mouth association we can sway public opinion away from this crappy MS software.
    • by DCheesi ( 150068 ) on Friday September 28, 2001 @10:05AM (#2363354) Homepage
      That's fine for casual conversation, but professionals and those writing formal papers need to steer clear of this sort of propaganda. I was going to criticize Slashdot for stating it that way, until I realized that the original authors used that same phrase. Calling it a Microsoft worm is really a distortion, and it's the kind of thing that can damage the credibility of the author. If you're preaching to the choir, that's one thing; but if you're trying to produce a study that will actually persuade a 'non-believer,' you need to appear as unbiased as possible.
      • When talking about biological viruses, there's nothing wrong with referring to the nasties with regard to their target. Take, for instance: "plant virus," "human virus," or "canine virus." If these worms/viruses start attacking other types of systems, then I think it would be highly propagandistic for us to refer to them as "Microsoft worms." But as a matter of terminology, at the moment...?

        How about Microsoft-targeted worm?

      • by pubjames ( 468013 ) on Friday September 28, 2001 @10:37AM (#2363505)
        That's fine for casual conversation, but professionals and those writing formal papers need to steer clear of this sort of propaganda.

        I completely disagree.

        'Cancer', 'intellectual property destroyer', 'viral-like': these (amongst others) are all terms that Microsoft has associated with the GPL, and hence Linux, when communicating with their customers. And look how effective they've been - they have got loads of press coverage about it. And the terms are misleading, and in the case of 'cancer' just downright offensive.

        To describe the Nimda virus or the Code Red virus as Microsoft worms is not misleading at all - it is difficult to argue that they are not Microsoft worms, after all.

        I think this is a great idea. May I also suggest 'Outlook viruses' as a term we should use to cover Outlook specific email attachment viruses.
      • ... but professionals and those writing formal papers need to steer clear of this sort of propaganda ...

        What's propaganda here? They are telling the truth. Those viruses only propagate on and damage Microsoft systems. They are there because Microsoft systems are so vulnerable. If it weren't for IIS, Windows 2000, etc., those worms wouldn't exist. (And don't "but others would" me - I don't see any reason why Unices, Apache, etc. would be any less safe without Windows.)

        Tell the truth. Don't hide behind words. That's a journalist's job, isn't it? And anyway, now with Microsoft distributing reports that claim Apache is also vulnerable, citing relatively harmless directory listing bugs from 1999, why should we not try to educate the public?

      • Distortion ? (Score:4, Insightful)

        by AftanGustur ( 7715 ) on Friday September 28, 2001 @11:02AM (#2363632) Homepage

        Calling it a Microsoft worm is really a distortion, and it's the kind of thing that can damage the credibility of the author.

        And what is being distorted ? Truth ?

        Until worms start to propagate efficiently on other platforms, this problem is strictly limited to Microsoft products and calling it "Microsoft worm" is a reflection of reality.

      • by twitter ( 104583 ) on Friday September 28, 2001 @11:16AM (#2363691) Homepage Journal
        Calling it a Microsoft worm is really a distortion, and it's the kind of thing that can damage the credibility of the author.

        Nope, sorry, a tobacco virus is a tobacco virus because it destroys tobacco crops. These worms are MS worms because they destroy MS boxes, which then attempt to destroy everything. It's time the world knew about it.

        You won't hear the popular press referring to "another MS worm," however. They would not risk losing their piece of the $1,000,000 advert budget MS has for XP. As you can see, "professionals" and those writing formal papers are free to call the thing what it is, and should. The popular press will get it sooner or later.

        You and I should not censor our own speech for MS and their sloppy wares.

        • If Windows XP had a $1m advertising budget, EVERYBODY would bash Microsoft. The budget is much, much bigger. $1M wouldn't even cover the costs of hiring the ad agency.
      • Do you feel that anyone has hesitated to describe something as a Unix worm?

        Then why should we hesitate to describe it as a Microsoft Worm? Perhaps one should say Microsoft(tm) Worm rather than Microsoft Worm(tm), as the second form is inaccurate. But Microsoft Worm is an accurate and correct description of the phenomenon (given that the term worm is correct).
      • Calling it a Microsoft worm is really a distortion, and it's the kind of thing that can damage the credibility of the author.

        Separately, they are the Nimda worm and Code Red II. Together, one of the things that they have in common is that they're Microsoft-based. Chances are, in the future, that most of the worms that are going to have this sort of effect are going to be Microsoft-based.

        I can think of two (OK, three) reasons why:
        1) There are lots of MS machines out there that are just RIPE for infection.
        2) Microsoft has (through negligence and/or design) set things up such that the default configuration of these machines is to be very insecure.
        3) Even if someone were to come up with a worm that could breach each and every Linux box out there, it would not, at this time, have the kind of volume effect on things that these MS worms have had.

        They are Microsoft viruses. The description is succinct and accurate. There are also likely to be more of them. It also puts some PR pressure on Micro$oft. The PR department is the one department that seems most in charge of Microsoft. If we're lucky, they will respond to it by starting to pay some real attention to security for their software.

    • Some people refer to them as MSTDs which I think is pretty funny and accurate.

    • Call me a karma whore, but if these worms were propagating through linux, you can bet we'd all (even /.'ers) be talking about linux worms.
  • What will be done? (Score:2, Interesting)

    by Anonymous Coward
    I have followed this problem extensively in my local area... When Code Red came out, MRTG and numerous sites around the city showed large spikes in bandwidth usage. I have discussed this with several large corporations (Nationwide, Bank One... and the telecoms Time Warner and AT&T), and I have heard very little about how to approach what are application-layer exploits at layer 2 or 3...
    I understand that, to serve people, telecom and internal IT departments can't very well restrict ports and such in response to each and every exploit that causes problems...
    So what can telecoms and large corporations do to cut down on meaningless uses of bandwidth?
  • by riflemann ( 190895 ) <[] [ta] [nnamelfir]> on Friday September 28, 2001 @09:32AM (#2363239)
    So...on a related note.

    If it is true that viruses create BGP instability, one can extrapolate that this is a form of
    terrorism, by disrupting international communications.

    Now - as Microsoft has done almost nothing to effectively eliminate the threat of viruses, and
    hence a form of terrorism, MS can then be seen as "harbouring terrorism".

    Didn't George W himself say that those who harbour terrorists will receive the same fate?

    It's therefore in the international communities best interests to destroy Microsoft!

    • Doesn't that... (Score:2, Interesting)

      by Hammer ( 14284 )
      ...put him in a funny spot. He has publicly vowed to destroy those who harbour terrorists, and also said that MS is good for America.
      So, does he go after the hand that fed him? Or will he leave MS alone and thereby, in effect, harbour someone who's harbouring terrorism? We all know what he promised to do to those ;-D
    • Microsoft has done almost nothing to eliminate the threat of viruses?

      The last two big worms had patches available before they started spreading... It's the folks who put freshly installed boxes on the Internet without applying the latest patches who are guilty. (Are they terrorists? Does buying Windows and installing it at home make you a terrorist? Or maybe you become a terrorist without even realizing it when someone exploits your box!)

      The only thing Microsoft has done here is make it easy for unqualified people to set up and run boxes with open, exploitable ports...

      If a particular Linux distro were as widespread as Windows is, and the default install left things exposed (which has happened on numerous occasions), then the virus authors would be exploiting holes in that distro the same way these worms do.

      The thing saving Linux from worm attacks right now is low marketshare among novice users.

      - Steve
      • Well... I know of a number of people who installed the patches and were still infected by the worms that the patch was supposed to prevent. I'm told that a major cause of this is people upgrading to the next service pack (and consequently rolling back the patch.) Apparently there are a whole lot of other ways to accidentally remove the patch, many of which are day-to-day operations.

        At the time of the most recent worm outbreak (the one that used multiple exploits), I believe that there was no reliable patch available. Is that true?

        In any case, these are bugs that should never have made it into the system. I think Microsoft should have issued a recall, and made an aggressive effort to contact its customers (by real mail) in order to get this problem fixed. If the brakes on my Toyota have a major flaw, Toyota doesn't sit back and wait for me to check their web page.

      • by thrig ( 36791 ) on Friday September 28, 2001 @10:48AM (#2363565)
        Windows has anti-virus software, for windows.

        Linux has anti-virus software, for windows.

        FreeBSD has anti-virus software, for windows.

        Solaris has anti-virus software, for windows.

        Open, exploitable ports are nothing compared to the design flaws inherent in the Office document format and the Outlook family, which cause wave after wave of new viruses to saunter past anti-virus software, laughing.
      • The last two big worms had patches available before they started spreading...

        If you've ever read The Hitchhiker's Guide to the Galaxy, there's a scene (repeated in variation) where Arthur Dent (and then, in the variation, Earth) gets informed that the plans for imminent destruction have been on public display for a long time:

        In a locked cabinet in a dark room in the abandoned depths of the basement, with a sign on the door saying "man-eating tiger -- stay out!".

        The plans for Earth's destruction were on display at Alpha Centauri.

        In any case, the Microsoft patches were available, but not on their push list, and I'm seeing reports that Microsoft weenies were describing attempts to download the fix(es) as "unnecessary".

        The larger question, as well, is one of Microsoft not having security very high on their list of priorities. Given a choice between a whiz-bang feature and a secure system, they seem to go for the whiz-bang, and hope (wrongly - time and again) that hackers won't notice yet another gaping hole.

        The problem that Microsoft users face with respect to security is not just that MS Windows is a common system. It's that Windows is a common system built like swiss cheese. If Linux and Unix were designed and maintained with the lax attitude towards security that Microsoft products display, we'd have more Linux worms than a dead gnu carcass.

  • what?!? (Score:2, Funny)

    Are you trying to tell me that Microsoft is unstable and most likely carrying some form of a virus? That's impossible!

  • "complete list of reasons still needs to be documented, but we suspect i) congestion-induced failures of BGP sessions due to timeouts; ii) flow-diversity induced failures of BGP sessions due to router CPU overloads; iii) proactive disconnection of certain networks; and iv) failures of other equipment at the Internet edge such as DSL routers and other devices."

  • Am I the only one who had a feeling something like this would happen? All those hundreds of thousands of simultaneous probes have to have some effect. People on badly hit networks have reported massive bandwidth loss. This is the
    • "most of the links at the Internet edge had serious performance problems during the worms' probing and propagation phases"
    part of the article.

    Mind you, Nimda is probably gentler to non-Windows systems, because it checks whether the victim is vulnerable first, whereas Code Red sent itself anyway. So although Nimda fills your logs quicker, because it checks 16 or so backdoors for each attack, it probably, IMO, sends less data.

    • HA! Well, you're right, but for those of us who run small family-oriented servers, those 16 probes per, and the 16 emails from my IDS *DO* dramatically slow me down.

      Fortunately, Apache is immune, and I haven't had any real problems. But with Nimda, and to a lesser extent, CR, I have to lose email service for about an hour a day while the error reports clog my inbox.

      I want the logs to give to our ISP (since most of the top probers are on our subnet) but I'm thinking I may have to compromise my IDS to cut out some of the crap...

      - Apache on WinNT...Mmmm!

  • slashdotted (Score:2, Informative)

    by kingdon ( 220100 )

    I've put up a mirror [] (article there now, images should be up by the time you read this).

    As for the article itself, this kind of published analysis is what makes the internet great - compare with the telephone system where each company keeps (more of) their analysis to themselves and engages in more finger-pointing.

  • Oh, wow. (Score:3, Interesting)

    by jd ( 1658 ) <> on Friday September 28, 2001 @09:40AM (#2363272) Homepage Journal
    You mean, if the Internet gets saturated by bizarre routing requests, it puts its feet in the air and dies?

    I'd never have guessed.

    Seriously, though, this does strongly suggest that merely using NAT and crude approximations of hierarchical routing are not enough. The networks aren't capable of tolerating the kinds of loads even a humble skript can put on them.

    In short, we need a better routing system, better IP stacks, a more streamlined structure, and better load-balancing. In other words, we need IPv6, if we're to survive anything but these relatively feeble virus attacks.

    (And they are feeble! In comparison to what could be done. The world is very, very lucky.)

    Oh, and we also need a stronger backbone. T3's don't cut it in a world where T4's are "standard items" and high-speed optics of up to 4 Tb/s are potentially usable tomorrow.

    When you start upping the bandwidth across the board by 2-3 orders of magnitude, the impact of a few flea-bag packets will not be noticeable. For that matter, the impact of a major world event (such as the Starr Report, or the WTC disaster) would not bring the information infrastructure to its knees.

    *Orator Mode On* Now, more than ever in the history of humanity, our society, our economy and our security depend on good lines of communication. No expense is too great, because the price of failure is greater still. This truth has tragically shown itself these past few weeks, and no amount of money can undo a single death, reverse a single bereavement, or heal a single injury.

    Forty billion dollars has been allocated to the cause of chasing shadows, yet we know that shadows can never be caught. A mere four billion, on shining the light of information around the world, would have gone a long way to prevent the shadows from being there to start with.

    Terror, fear - these are weapons that rely on ignorance and superstition. Without ignorance, terror has nothing to hold onto. Yet ours is a society that lives in ignorance. We have computers on our desks that are many hundreds of times more powerful than the ones used to put man on the moon. Yet those computers can be crippled by a simple forwarder virus, and the users of those computers do not wish to know. The dark is much more comforting than the light, even though it is the dark, not the light, that these viruses can grow in. Perhaps, because in the light, you do not need comforting. There is no fear to be comforted over.

    Someday, maybe, people will become less frightened of living in understanding. When that day comes, the terrors of the night will no longer threaten.

    *Orator mode off*

    • Re:Oh, wow. (Score:2, Insightful)

      by pohl ( 872 )
      The networks aren't capable of tolerating the kinds of loads even a humble skript can put on them.

      Isn't that a little like calling a forkbomb "a humble process"?

    • Forty billion dollars has been allocated to the cause of chasing shadows, yet we know that shadows can never be caught. A mere four billion, on shining the light of information around the world, would have gone a long way to prevent the shadows from being there to start with.
      People are proposing that the World Trade Center be rebuilt to "spit in the terrorists' eye". I suggest we take that money and put it into high-bandwidth networking infrastructure for a 100-mile radius of New York city. The vast majority of workers in the WTC were information workers - they didn't really need to be physically close to each other to do their jobs.

      Let's spit in the terrorists' eye by presenting them with smaller targets, and doing business more efficiently to boot.
    • IPv6 will not solve this problem, which has nothing to do with NAT or load sharing. The key element of the solution is to provide a higher priority (CoS/QoS) for BGP traffic, and to somehow limit the amount of CPU spent on ARPing for non-existent IP addresses on the router's directly-connected subnets.

      I am all for using IPv6 where appropriate, but it's irrelevant here. Putting in bigger pipes is expensive, and may well just make things worse (a Pentium III can just about saturate a gigabit ethernet link - what if some well-connected hosting centres get infected and start spamming the net via these larger pipes?).
  • MSGP (Score:1, Redundant)

    by artoo ( 11319 )
    In an effort to reduce confusion regarding the correlation between IIS/MS Windows viruses and worms and degradation in internet traffic, Microsoft has announced the release of their own global routing protocol, MSGP.

    "MSGP has taken a few days to develop this great technology using some of the brightest minds from around the world. Incorporating transfer of information using FEP, we can ensure that when a virus hits, all internet traffic will come to a screeching halt," a Microsoft spokesperson said at yesterday's press conference.

    Cisco has announced they will have firmware revisions tomorrow to incorporate this into all their products.
    • Heh, and then once MSGP is implemented, people could set their routers to drop all packets from MSGP sites, and eliminate the M$/IIS viruses/worms.
  • A communications disruption can mean only one thing: invasion.
  • by Anonymous Coward on Friday September 28, 2001 @10:18AM (#2363407)
    What we are seeing here is evolution happening on the internet. When we (humans) became the dominating species on earth, viruses started spreading amongst us. The same thing is happening among computers now!

    We have two choices to fight this problem:
    1: We can try to fight it using antivirus programs, which is equivalent to using medicine to cure our viral diseases. We already know that this means fighting an uphill battle, because protection against the unknown is hard, if not impossible.
    2: We can try to bring more diversity to the operating systems and programs we use. This would automatically decrease the virus population, because a virus designed to infect more than one program/OS/species would have to be far more advanced, and would thus have a lower probability of existing. And in the case of computers, the bugs on one platform/program are rarely the same as the bugs on another.
  • by Greyfox ( 87712 ) on Friday September 28, 2001 @10:48AM (#2363564) Homepage Journal
    Whenever a popular product shows up on Windows, Microsoft usually ends up either buying the company or writing their own version which sucks for the first few versions. So when will we be seeing MS Worm Version 1.0?
  • A Simple Solution (Score:5, Interesting)

    by Anonymous Coward on Friday September 28, 2001 @10:49AM (#2363573)
    One of the inherent problems with all routing protocols is that they rely on in-band announcements and updates, and communicate state purely by reachability. This is clearly a flawed approach on heavily loaded links and routers. This problem has already been addressed worldwide on the telephone network with the introduction of SS7. One of the key aspects of SS7 is that it is transported over an out-of-band network (the actual transport may be a dedicated timeslot on a SONET link, but the point is that the link is dedicated to management).

    By implementing a low-throughput (say 64K-256K - this requires more analysis) management network, ISPs could be certain that the state of the BGP peering sessions and the integrity of the UPDATE messages are always intact.

    One of the key aspects/benefits of BGP is that, unlike other routing protocols, it does not advertise routes in simple "here's my routing table" messages of the kind that RIP (and, to a lesser extent, OSPF and IS-IS) uses. BGP relies on TCP sessions between peers. On connection, the entire known (or policy-filtered) shortest-path routing table is exchanged. After this the link stays idle, except for keepalives, until an UPDATE message is sent to communicate that a new route has been added or an existing route removed from that peer's routing table. Also, BGP does not assign any significance to the port that receives the information - merely the peer. This all makes BGP inherently scalable, stable and reliable - unless resources (CPU, memory, buffers or links) are not available. TCP is the reliability mechanism here. The presence of the TCP session validates all the routes learned via that session. The absence of the TCP session invalidates all those routes and causes them to be withdrawn.

    Maintenance of TCP session stability is key to the stability of the routing table. With over 80,000 routes in any full BGP update, the processing needed to cope with multiple TCP sessions failing or starting is immense (and probably better served by a UNIX platform than by a router, to be honest).

    SS7 uses a mechanism whereby UNIX servers process the routing information and create the core routing table - note: table is the key - it is not the path the data or calls follow. Building a similar architecture within the Internet would allow routers to have one or two TCP sessions to BGP servers (a concept already grasped with route reflectors) and dedicate their CPU to forwarding packets. The dedicated servers never need to see a packet to be forwarded - that's just not important to BGP, so they have no need to be on the same physical cables/links as user packets. This architecture would take some rethinking, but it would not be outside the plans of most ISPs, and definitely not outside their skill sets.

    Clearly the next problem then becomes low-speed customer connections. Again, the telco industry has addressed this problem with ISDN, which carries signalling out of band on the D channel. For these lower-speed connections, there is no need to change the existing model. Losing one customer here or there is nothing (UPDATEs on BGP typically run well over 100 a second at NAPs) and would be catered for simply.

    The NAPs could merely serve as routing table peering points, and not data transfer points - again another area of congestion.

    The Internet is proving to be a reliable and trustworthy international communications medium; the next step is to make it even more robust, and truly scalable. Using OOB management is the obvious next step toward this goal.

    GMPLS is being touted as the next step for ISPs in terms of exchanging routing information in an OOB network. This is only one aspect of the work that is being done there.
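    The session model described above - full table on open, deltas afterward, everything withdrawn at once when the session drops - can be sketched as a toy Python class (my own illustration, not real BGP; class and method names are invented):

```python
class BgpPeer:
    """Toy model of routes learned over one BGP peering session."""

    def __init__(self):
        self.session_up = False
        self.routes = {}  # prefix -> next hop learned from this peer

    def open_session(self, full_table):
        # On connection, the peer sends its entire (policy-filtered)
        # routing table exactly once.
        self.session_up = True
        self.routes = dict(full_table)

    def update(self, announce=None, withdraw=None):
        # Afterward only deltas flow: new announcements and withdrawals.
        if not self.session_up:
            raise RuntimeError("UPDATE received on a down session")
        for prefix, next_hop in (announce or {}).items():
            self.routes[prefix] = next_hop
        for prefix in (withdraw or []):
            self.routes.pop(prefix, None)

    def session_lost(self):
        # The session itself is the validity signal: losing it
        # invalidates every route learned over it, all at once.
        self.session_up = False
        withdrawn = sorted(self.routes)
        self.routes.clear()
        return withdrawn


peer = BgpPeer()
peer.open_session({"10.0.0.0/8": "peer-a", "192.0.2.0/24": "peer-a"})
peer.update(announce={"198.51.100.0/24": "peer-a"}, withdraw=["10.0.0.0/8"])
print(len(peer.routes))       # 2
print(peer.session_lost())    # ['192.0.2.0/24', '198.51.100.0/24']
```

    The mass withdrawal in session_lost is the scaling pain point the parent describes: one flapped session can mean tens of thousands of prefixes churning at every peer.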
    • Is moving routing info out of band not just guaranteeing QoS for that info? So why not guarantee QoS in band?

      Excellent informative post BTW.
    • Re:A Simple Solution (Score:3, Interesting)

      by darkonc ( 47285 )
      The in-band nature of the Hello packets, loss of which causes the 'flapping', is not an accident or an error. It is a feature. If you lose the hello packets, then chances are that you're losing other packets as well. This means that this branch of the network is overloaded and you should try another path.

      Lost packets cause retries -- which cause even more traffic. If your problem is overload, you are far better off trying another path than losing packets and generating (overall) more packets through retries on the shorter path. If all inbound paths to a network are overloaded, then the whole network is overloaded anyway. You might as well just drop the packet, and give the overloaded routers that 30-second flap time to catch up on the backlog.

      If you took those packets out of band, then you'd need another method to measure packet loss... This would require more CPU and/or more packets (bandwidth) -- thus making the whole problem even worse.
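      The "retries generate even more traffic" point can be put in rough numbers (a back-of-the-envelope model of my own, not from the parent post): if a fraction p of packets is lost and every loss is eventually retransmitted, the expected number of transmissions per delivered packet is 1/(1-p), so offered load grows super-linearly as loss climbs.

```python
def retry_amplification(loss_rate: float) -> float:
    """Expected transmissions per successfully delivered packet,
    assuming independent losses and unlimited retries (toy model)."""
    if not 0.0 <= loss_rate < 1.0:
        raise ValueError("loss_rate must be in [0, 1)")
    return 1.0 / (1.0 - loss_rate)

# 10% loss costs ~11% extra traffic, 50% loss doubles it,
# and 90% loss means ten transmissions per delivered packet.
for p in (0.1, 0.5, 0.9):
    print(p, round(retry_amplification(p), 2))
```

      Which is why backing off to another path (or just dropping) beats hammering retries into an already congested link.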

  • by erroneus ( 253617 ) on Friday September 28, 2001 @11:21AM (#2363729) Homepage
    Okay, I just put the subject to troll for readership... Hehehe.

    Actually, though there may be a direct connection between routing problems and Code Red/Nimda activities, it's still a routing problem and to my regret, I can't lay any direct blame on Microsoft for this one.

    Okay, it only runs on Microsoft platforms... That's not enough. If the probes/propagation (as opposed to sheer traffic) are responsible for this, then it's an issue that should be addressed with the router people. Clearly their firmware isn't written well enough and should be patched to handle this problem.

    Additionally, ISPs should start cutting off infected users without hesitation now. The attacks are now more than simply annoying in the way they fill up my logs. They are now affecting the whole damned internet. This affects just about every commercial interest and should be motivation enough, I think... (complaints of the people are never enough, but start playing with or threatening money and you will get someone's attention, eh?)

    What are the positives surrounding Code Red/Nimda? Well, though they have managed to keep their sunglasses on it's still a black eye for Microsoft. And while the argument has been made that patches have been available long before this mess has started, blame can be placed on Microsoft for a different reason.

    It's not the presence of patching that is at issue. Rather, it's about default configuration(s) at install time, and Microsoft's unreasonable expectation that its users are smart enough to know how to turn things off, or even to know what they are running.

    Microsoft's users, as Microsoft is aware, tend to install "everything" when installing their OS. Why? A number of reasons -- because they don't want to miss out on any cool toys or because if they need something later, they don't want to be forced to reboot to use it. Microsoft is aware of this.

    Microsoft knows that a majority of its usership is not trained to understand the implications or potential problems of running services on the internet. These same users cannot be reasonably expected to understand beyond "if it ain't broke don't fix it." Unpatched, their servers appear to be working JUST FINE don't they? So the infected users probably don't believe they have a problem either because they don't see the symptoms or they don't realize they are running IIS at all.

    Microsoft, as a mature and responsible technology company marketing to idiots, must share more blame than it has been accepting. This might be seen as Microsoft serving its "MS Coffee" too hot for its customers. (ref: the lawsuit where the woman sued McDonald's for serving coffee that was too hot and being negligent in affixing the lid to the container.) They have overestimated the intelligence of their users for far too long, and now this is the price we all pay.
    • by superdk ( 184900 ) on Friday September 28, 2001 @01:47PM (#2364671)
      Additionally, ISPs should start cutting off infected users without hesitation now.

      Some ISPs do. I know because I get to cut them off after giving them a warning and ample time to fix the trouble. What's the problem with all of this?

      Imagine the following...

      Hi, this is Joe Tech from ISP X's Network center, we're seeing that your machine on x.x.x.x is infected with Nimda and this is affecting our network. Your service will be suspended if you don't take care of this.

      Customer: uhhhh... how do I fix that? Will the guy at Dell fix it? Why can't you just fix my server and keep this from happening again?

      My point: for every 10 business customers I have, only one of them knows A) they even have a web server on their connection, B) their server had its pants down to the whole world, or C) what Nimda is.

      Besides, people paying business T1 prices don't like being shut off, right or wrong.
      • You have made my point BEAUTIFULLY. THIS is exactly why Microsoft should be held liable. Your company is blameless on this matter and they should be cut off regardless of their feelings.
      • Oh yeah, on a side note, this is a business opportunity for me!

        If any of your customers are in the Dallas/Ft. Worth area, my email address is: or

        Have them send me an email and I'll take care of them for a fee! :)
  • The top ten downloads according to MS themself are......

    Top Downloads
    1. Internet Explorer 6
    2. Internet Explorer 5.5 Service Pack 2
    3. Windows Media Player 7.1
    4. Internet Explorer Security Update: (IE 5.5 SP1 and Internet Tools)
    5. DirectX for Windows 95, 98 and Windows Me
    6. MSN Messenger Service
    7. Internet Explorer 5.01 Service Pack 2
    8. Internet Explorer Security Update: Late May 2001 5.5 SP1
    9. Internet Explorer Security Update: (IE 5.01 SP1)
    10. Office 2000 Service Release 1a (SR-1a) Update

    Yes.. about half of this list comprises security updates to the MS browser.
  • The article opens with: Many successful academic and commercial projects use direct traffic measurements (such as ping, traceroute, and web page access data) to study the structure and dynamics of the Internet. Such efforts are inherently limited by the locations of probe points required to 'cover' the Internet meaningfully. Compounding the problem, there are no effective shortcuts - simply placing agents throughout the Internet's core, as done by several commercial services, only builds up a picture of core-to-core traffic latencies and losses that has no power to predict the true "Internet weather" that end users actually experience at the network edge.

    This is just plain wrong. It is quite easy to obtain latency measurements of the edge starting from the core.

    Let E1 and E2 be points on the edge. If you have enough agents in the core, you will find an agent A on the path from E1 to E2. Then you can easily estimate the latency from E1 to E2 by pinging from A to E1 and from A to E2 and summing the results.
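    As a sketch of that composition (my own illustration of the argument; the function name is invented): ping reports round-trip times, so each leg contributes roughly half its RTT, and the E1-to-E2 path through A is the sum of the two legs.

```python
def edge_to_edge_latency(rtt_a_e1_ms: float, rtt_a_e2_ms: float) -> float:
    """Estimate one-way E1->E2 latency from a core agent A on the path.

    Each leg's one-way latency is approximated as half its measured RTT;
    the end-to-end estimate is the sum of the two legs. (Toy estimate:
    assumes A really is on the E1-E2 path and links are symmetric.)
    """
    return rtt_a_e1_ms / 2 + rtt_a_e2_ms / 2

# e.g. 30 ms RTT from A to E1 and 50 ms RTT from A to E2
print(edge_to_edge_latency(30.0, 50.0))  # 40.0 (ms, one-way estimate)
```

    The assumptions are the catch, of course: asymmetric routing or an off-path agent breaks the decomposition.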

  • Film at 11. (In Windows Media format, of course.)

  • Border Gateway Protocol (a routing protocol, if it's not obvious by now.)
    An active member of the SAT*
    * Society Against TLAs**
    ** Three Letter Acronyms

"Probably the best operating system in the world is the [operating system] made for the PDP-11 by Bell Laboratories." - Ted Nelson, October 1977