Microsoft

W2K and MAC OS9 Flood Root Nameservers? 238

wizzy writes "Ireland's top-level domain registry has a notice on Microsoft and Apple DHCP clients sending dynamic DNS updates per RFC 2136. The problem is that they are not sufficiently careful about where they send them when they are in RFC 1918 space (usually used for behind-firewall addressing, which is where they usually are). This results in bogus updates being sent to the root nameservers at a rate of nearly one million an hour, only to be rejected, as reported on the NANOG mailing list."
  • by JHromadka ( 88188 ) on Sunday April 21, 2002 @11:39AM (#3383363) Homepage
    With Photoshop 7 out and this, now Mac OS9 users have an even better reason to upgrade to OS X - "to save the Internet." :)
    • Re:Upgrade time! (Score:3, Interesting)

      by 0x0d0a ( 568518 )
      Frankly, I'd rather see the OS9 boxes fixed.

      Apple, at least, is generally pretty good about putting out bugfixes for old products -- they make most of their money on hardware, and don't have a huge incentive to force people to buy a new OS to get their computer to work properly. OTOH, I don't think they ever fixed all the TCP/IP exploits in the latest version of Open Transport that the System 7.5.5 line could run. :-(

      Microsoft has been even less good about putting out free fixes for their old products. There are too many known problems that aren't going to get fixed in Win 95 and NT. They also don't usually backport libraries -- I fondly remember someone hacking up the binaries of Win2k's DirectX 5 implementation to work on WinNT. It let me run several DX 5 games that wouldn't otherwise work on NT 4. MS, however, never released DirectX 5 for WinNT. Why would they? It was a big incentive to get people to buy Win2k.

      MS uses compatibility issues and a lack of bugfixes, not features alone, to drive upgrades of their software. :-(
  • Firewalls (Score:4, Informative)

    by chrysalis ( 50680 ) on Sunday April 21, 2002 @11:39AM (#3383367) Homepage
    Yet another reason to use firewalls to filter _OUTGOING_ connections and not only incoming ones (the other reason: to avoid backdoors).

      • Yet another reason to use firewalls to filter _OUTGOING_ connections and not only incoming ones (the other reason: to avoid backdoors).

      Do many firewalls have the capability to inspect outgoing DNS updates to determine whether they are valid or not? I'm no expert in firewalls, but I've not seen this capability.

      Now, granted, you could and should block outgoing DNS updates that aren't coming from the machines you'd expect them to come from, but the DHCP servers are often responsible for DNS updates, in my experience. Maybe there's something fundamental I'm not getting here...

      • Re:Firewalls (Score:5, Informative)

        by barberio ( 42711 ) on Sunday April 21, 2002 @12:34PM (#3383563) Homepage
        (Begin lies-to-children style technical summary)

        In a proper DNS system, you don't have outbound DNS queries except from the DNS server in your network, hence blocking all outbound DNS queries works. Each client in the network should be set to query the network's DNS server, which in turn queries other servers. (DNS is a recursively distributed system; your DNS server passes on queries on the clients' behalf.)

        Clients should not have to directly query DNS servers off-site or outside your ISP. Clients should never directly query the root servers.

        What is happening here is that various ISPs and companies have large numbers of desktop PCs getting their information via DHCP. These do some housekeeping on boot-up. If the settings are screwed up, either on the desktop or the server, then the DHCP client will send off queries and updates to whatever DNS servers it thinks it needs to.

        So, if you're so eleet that you set your internal home network to be slashdot.net, with little nodes such as www for your webcache, you might be causing the real slashdot.net problems. This is because the DHCP client gets confused and thinks it needs to report to the level above it, the real slashdot.net DNS servers.

        If you just have bare nodes like 'foo' and 'bar', then DHCP can be screwed up so it tries to report to the level above it, the root servers.

        Since you can't track down every system and user that has these things misconfigured, you have to filter on firewalls.
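The blocking policy described in this summary can be sketched in a few lines of Python. This is only an illustration of the rule, not a real firewall; the resolver address used is a made-up example, not anything from the article.

```python
# Sketch of the egress rule: only the network's own resolver may send
# DNS traffic (port 53) out; every other host's DNS packets are dropped.
# RESOLVER_IP is a hypothetical example address.

RESOLVER_IP = "192.0.2.53"

def allow_outbound(src_ip: str, dst_port: int) -> bool:
    """Decide whether an outgoing packet should pass the border firewall."""
    if dst_port != 53:
        return True  # this sketch only polices DNS traffic
    # Clients (DHCP boxes included) may not talk DNS to the outside world;
    # their misdirected dynamic updates die here instead of at the roots.
    return src_ip == RESOLVER_IP
```

With a rule like this in place, a misconfigured W2K box's update aimed at a root server is dropped locally, while the site resolver's recursive queries still go out.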
        • I had this problem a while ago. The goons at wzr.net [wzr.net] own org.com.kg [org.com.kg] (stupid, I know) and allow registration of *.org.com.kg. My default DNS server was in a .com.kg domain, causing .org domains to occasionally map to .com.kg. One day Slashdot.org even pointed to wzr.net, saying "Slashdot.org is available! Register today!" I e-mailed a quite harsh message to keith, the owner, and received back only "oh boohoo.. dumbass". Some people just don't belong on the Internet...

          Anyways, if you ever are redirected to "WebZone Resources v3.0 - asdf.org is still available!" contact webmaster@wzr.net [mailto] and give him a piece of your mind. Obviously, I tried speaking to him about this issue but to no avail. Remember that's webmaster@wzr.net [mailto].

  • This reeks of something that should've been caught in user testing. Unless, of course, Microsoft and Apple decided that they didn't care about the operators of the root nameservers.
    • This sounds to me more like a case of misconfiguration than a bad server itself.

      Also, assuming that people are DHCP'ing on a local 192.168.* address space, shouldn't upstream routers (especially those on cable companies and the like) automatically filter out any packets with local addressing as opposed to forwarding them?

      In fact, you'd think they'd filter out ANY DHCP information coming from their subscribers rather than sending it out publicly?
      • If the problem is private IPs attempting to update DNS records, then they have to have been NAT'd or masqueraded in some way, so short of parsing EVERY DNS packet there is no way to tell, since the source address will be the user's public IP.
    • Actually this does not sound at all like an issue that should've been caught in user testing. There is no magic to software testing, and it's a thoughtless misconception to think that "good" software testers will catch every conceivable issue. Software testing catches what the software testers are looking for. Any other issues have to be fairly obvious to be caught, in most cases.
  • just another reason (Score:2, Insightful)

    by Kaoslord ( 10596 )
    just another reason to start using Mac OS X... or let's start educating people. i wonder how many resources those bad changes waste, anyway...

  • Their name servers are under the "IE" domain...
  • Christ! Which link is the real story?
    • Re:Too many links! (Score:2, Informative)

      by Anonymous Coward
      I believe this is the actual notice.

      http://www.domainregistry.ie/tech/dynamic-dns.html
  • How to Fix? (Score:3, Insightful)

    by 1stflight ( 48795 ) on Sunday April 21, 2002 @11:43AM (#3383388)
    Before everyone jumps down MS's throat (or Apple's) does anyone know how to reconfigure a system to fix this issue?
    • Re:How to Fix? (Score:5, Informative)

      by schon ( 31600 ) on Sunday April 21, 2002 @11:58AM (#3383443)
      No idea about the Mac, but instructions for Windows can be found at http://www.isc.org/ml-archives/bind-users/2000/11/msg00109.html [isc.org]

      It's pretty funny that the "Win2K is as good as Unix because you don't need to reboot it to change settings" mantra that I hear from MCSE's doesn't apply to this :o)
      • It's pretty funny that the "Win2K is as good as Unix because you don't need to reboot it to change settings" mantra that I hear from MCSE's doesn't apply to this :o)


        Interesting. Thanks for the link. But you don't need to reboot. Just stop and restart the service with the command line or GUI interface.

        You very seldom need to reboot under Windows 2000 or XP. Some *nix advocates like to claim that Windows administrators don't know what they're doing. But it's often clear that those advocates are just as clueless where Windows systems are concerned.
      • Also, file this under "things they don't include in Microsoft Official Courseware."

        I'm scurrying to fix this now.
    • Re:How to Fix? (Score:5, Informative)

      by sabi ( 721 ) on Sunday April 21, 2002 @12:21PM (#3383527) Homepage
      On the Mac, disable the "DNSPlugin" Network Services Location plugin,
      in the Extensions folder. This applies only to Mac OS 9.0 through
      9.2.2; the 8.5-8.6 version of NSL didn't have DNS update support (it
      answered SLPv1 broadcasts only, and might have registered with a SLP
      DA, I don't remember); the OS X version of NSL doesn't have it
      either.

      Also note that this registration does not always happen on the Mac,
      only if you enable network servers that use NSL (primarily the
      personal AFP/file sharing and Web sharing services). I've never
      enabled them, so I've never seen this.

      Another thing to do is just set your domain so it's one whose
      nameservers you control :-)
    • Re:How to Fix? (Score:2, Informative)

      by frogdeep ( 575175 )
      With a Win2k client you can:

      1. From the Start menu, choose Settings -> Network and Dial-up Connections.
      2. Right-click Local Area Connection and choose Properties.
      3. Select Internet Protocol (TCP/IP), then click the Properties button below.
      4. In Internet Protocol (TCP/IP) Properties, click the Advanced button.
      5. In Advanced TCP/IP Settings, click the DNS tab.
      6. On the DNS tab, uncheck "Register this connection's address in DNS".

      and it is fixed :)~~
  • by chtephan ( 460303 ) <christophe@NOspAM.saout.de> on Sunday April 21, 2002 @11:49AM (#3383402) Homepage
    I know these problems. At my small ISP company, we are running our own nameserver.

    The logs are flooded with rejected name server updates (several hundred a day).

    They mostly come from misconfigured W2K servers belonging to our customers, who run their intranets with DHCP using the same domain as on the real net.

    Sadly, when we contacted the administrator, he didn't have a clue what I was talking about (they're just running Windows on their server because they know Windows...)

    Usually I would suggest using an internal domain name that doesn't exist on the Internet and just "masquerading" the mail domains. That way, resolving internal addresses from outside fails if any information slips out, and internal servers won't try to contact some external name server when an internal one is meant.
    • I've seen this very problem, and worked out a quick and cheap (read: a free OS capable of running a DNS server) fix for this: two sets of DNS servers. What you do is set up one set of DNS servers to act as authoritative servers for all your domains, and another set that actually does DNS resolution for your customers. You firewall the former set so that it cannot receive DNS requests from your IP space, except from your trusted DNS servers.

      The only DNS zones the authoritative set knows about, and can answer queries for, are your own; the resolvers work as normal DNS servers that answer any query coming to them in the normal way. This works like a charm and protects your DNS from DDNS updates and other hacky crap that shouldn't be allowed on the Internet. Oh, and if you understand your chosen DNS daemon, the configuration is probably easier too!
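The two-tier split described above can be sketched as follows; example.ie is a hypothetical zone standing in for whatever the ISP is actually authoritative for.

```python
# Sketch of the split: the firewalled authoritative set answers only for
# zones we own; anything else must go to the customer-facing resolvers.
# OWN_ZONES is illustrative, not a real configuration.

OWN_ZONES = {"example.ie"}

def authoritative_answers(qname: str) -> bool:
    """Would the authoritative server set answer this query itself?"""
    qname = qname.rstrip(".").lower()
    return any(qname == z or qname.endswith("." + z) for z in OWN_ZONES)
```

Queries for owned zones stay on the authoritative set; everything else is refused there and handled by the open resolvers.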

  • Forget firewalls (Score:5, Informative)

    by CounterZer0 ( 199086 ) on Sunday April 21, 2002 @11:49AM (#3383405) Homepage
    They only solve a SYMPTOM of the issue. These people need to set their systems up correctly! Either a) install MS-DNS and point your boxen at that, or b) use BIND with dynamic DNS enabled, and stop this traffic at the local level.
    And if you use RFC 1918 address space, your DNS server should have reverse lookups enabled for that address space (even as a split zone, so the world won't see them). That will a) make management of the network easier, and b) prevent problems like this from happening ;)
  • The root nameservers initially thought that they'd been linked to by /. daily, but then realized that nobody cared about them :)
  • Popular domains (Score:5, Interesting)

    by SealBeater ( 143912 ) on Sunday April 21, 2002 @11:51AM (#3383416) Homepage
    Another problem is that people are naming their boxes after popular domains
    that they don't own, and the dynamic updates are pounding the hell out of the
    domain owners' nameservers. If anyone here is doing this, owl.com and jove.com
    were two of the domains named.

    Sealbeater
    • Re:Popular domains (Score:1, Interesting)

      by Anonymous Coward
      Easily fixed. Pick a TLD that's not in general use. My NAT'ed LAN uses a TLD of .ether, so the only way I'd be hitting anyone is if my requests were getting handled by an alt-root server (which they're not) and there happened to be someone running an alt-root system with the TLD .ether.
    • They don't even have to be popular domains.

      Back In The Day(tm) when I was first setting up my home network, I didn't know jack shit about DNS. I knew it resolved names to IP addresses, but I didn't _really_ understand how it all worked. So I figured... I'm on a network, and it's local, so my domain is gonna be 'local.net'. Worked great. Then one day I got a flash of inspiration... 'whois local.net'. A *real* domain record came back with that domain name. Whoops. I very quickly changed everything over to 'local.lan' instead, before I caused any headaches. ;)

      - Jester
      • Shouldn't your local domain be just "localdomain" (without any top-level domain)? Linux installations typically default to localhost.localdomain, and I think that's the standard.
        • IIRC, *.localdomain. is used for individual hosts only; localhost.localdomain is always bound to the loopback interface in my boxen. I don't know if it would work for whole networks... ???

          Besides... it doesn't have the same ring to it. 'hermes.localdomain' or 'hermes.local.lan' (or as I had it before, 'hermes.local.net'). Might be just me, but I think the latter has a nicer sound to it.

          - Jester
        • Blockpoth the quoster:

          Shouldn't your local domain be just "localdomain" (without any top-level domain)? Linux installations typically default to localhost.localdomain, and I think that's the standard.

          No. (Although using ".localdomain" doesn't suck as badly as naming your private network "slashdot.org" and assuming that your NAT box will prevent anyone from seeing this posturing.) In practice, using ".localdomain" as a pseudo-TLD for an RFC 1918 [ohio-state.edu]-conformant private IP space probably won't break anything, presuming you're talking about a home network that's not going to have anything complex depending on absolutely strict, standards-compliant DNS behavior. But it's actually defined as a domain "having an A record pointing to the loopback IP address and is reserved for such use. Any other use would conflict with widely deployed code which assumes this use." I.e., for DNS purposes, the only .in-addr.arpa domain that should map into localdomain is 127.in-addr.arpa -- the class-A netblock for your loopback interface(s), which all have the form 127.#.#.#.

          RFC 2606, "Reserved Top Level DNS Names", [ohio-state.edu] says that the TLD for a private network space should be one of the following:

          • .example
          • .test
          • .invalid
          (Note: there's no (technical) reason the TLD has to have three letters or less.)

          Ole
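A quick check along those lines can be sketched in Python. One assumption to flag: RFC 2606 also reserves .localhost in addition to the three TLDs listed above, so it is included here.

```python
# Sketch: is an internal hostname's top-level label one of the TLDs
# RFC 2606 reserves for private/testing use? (.localhost added to the
# three names from the list above; it is also reserved by RFC 2606.)

RESERVED_TLDS = {"example", "test", "invalid", "localhost"}

def is_safe_internal_name(hostname: str) -> bool:
    """True if the name ends in a TLD reserved for private use."""
    tld = hostname.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld in RESERVED_TLDS
```

A name like hermes.test passes the check; naming your LAN slashdot.org does not.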
    • At one point, there was an IETF draft where .link was proposed as a TLD for internal use only, a sort of equivalent of 10.x.x.x in DNS. You could try that - even though the draft has long ago expired, I would suspect nobody will take that TLD for now.

      Damn I posted. At least it wasn't anything insightful.

      • At my former employer's office, we used .priv as our TLD.
      • Just got finished setting my 2K box straight. Yeah, I think that ICANN should think quite seriously about setting aside .LAN as a non-routable TLD. Simple, looks like a real TLD, but can't get out on the Internet. Just like non-routable IP addresses: 10.x.x.x, 192.168.x.x and those Class Bs that nobody uses but are there anyway.

        I didn't know about the attempt to codify .LINK as a non-routable TLD, but .LOCAL was once proposed and is often used as an example in books about TCP/IP networking. .LAN, however, has the advantage of looking like a "proper" TLD. (at least Stateside, anyway...)
  • by Anonymous Coward
    There are a couple thousand Windows machines of various flavors inside my network and they are constantly generating crap lookups. I see my poor machines forwarding them to the outside, no doubt pissing someone off.

    Where 'FOO' is one of our servers:

    FOO.k12.co.us
    FOO.co.us
    FOO.us
    FOO (this is what hits the root servers)

    These things are trying to do DNS even when WINS would have a perfectly good answer. Multiply this by thousands of lemming systems and you have a bunch of load that should never be there.
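The suffix-search behaviour that produces that sequence can be sketched roughly as below. This is a simplification for illustration only; the real Windows resolver's suffix "devolution" rules are more involved.

```python
# Sketch: generate the candidate lookups a suffix-search resolver tries
# for a bare name, stripping one label of the DNS suffix each time. The
# final bare query is the one that ends up hitting the root servers.

def devolve(name: str, suffix: str) -> list[str]:
    """Candidate queries for an unqualified name under a DNS suffix."""
    labels = suffix.split(".")
    tries = [name + "." + ".".join(labels[i:]) for i in range(len(labels))]
    tries.append(name)  # last resort: the bare, TLD-less lookup
    return tries
```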
    • I know that if you just type "Foo" into your average Windows web browser (IE 5+), it will iterate through the usual TLDs trying to find a match and, if not, will then go to your default search engine.

      Probably what you're seeing here. What you need to do is convince people not to just type a word into the address bar, and get them to use Google instead.
  • NS records (Score:3, Informative)

    by 3247 ( 161794 ) on Sunday April 21, 2002 @11:56AM (#3383436) Homepage
    I wonder if adding NS records for the bogus in-addr.arpa domains would help, i.e.:

    168.192.in-addr.arpa NS 192.168.1.1
    10.in-addr.arpa NS 10.0.0.1
    ...
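Deriving the zone names for records like those can be sketched mechanically; the helper below is illustrative, not part of any real configuration.

```python
# Sketch: map a dotted IP prefix to the reverse-lookup zone a local
# nameserver could be made authoritative for, so RFC 1918 reverse
# queries never leave the network.

def reverse_zone(prefix: str) -> str:
    """'192.168' -> '168.192.in-addr.arpa', '10' -> '10.in-addr.arpa'."""
    return ".".join(reversed(prefix.split("."))) + ".in-addr.arpa"
```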
  • by caluml ( 551744 ) <slashdot@s p a m ... e r e.calum.org> on Sunday April 21, 2002 @11:56AM (#3383437) Homepage
    A Microsoft spokesman said, "Thing is, is that those root nameservers would all be fine if they were running Win2K DNS services. " :)
  • by interiot ( 50685 ) on Sunday April 21, 2002 @12:00PM (#3383461) Homepage
    Here's a page [domainregistry.ie] detailing how to check this in Win2K and OS9. I'm glad I checked, because I was misconfigured.

    Specifically, if your WinXP advanced DNS settings look like this [68k.org], then just uncheck that box.

    • The real problem is that the default for Win2k and WinXP is to have that box checked. So anybody who is running Win2k or WinXP and doesn't have any idea what a dynamic DNS update is (which would probably be the vast majority) is sending these updates. My dynamic DNS provider (dyndns.org -- they don't use RFC 2136 to dynamically update) has been sending mails telling its members to turn this option off for over a year now because of all the unnecessary traffic it causes.
  • I wonder who copied whose code?
    • The TCP/IP stacks are very, very different on those two OSes.

      Windows, IIRC, uses sockets. Mac OS 9 uses streams (although Mac OS X uses sockets). It's very unlikely that someone stole someone else's TCP/IP code, as much as I would like to blame Microsoft for stealing code...
    • by ckd ( 72611 ) on Sunday April 21, 2002 @12:19PM (#3383520) Homepage
      I wonder who copied whose code?

      It's not the same bug. Windows, by default, is trying to put its name into the MS Active Directory stuff, which is implemented using Dynamic DNS. The Mac OS 9 systems only try to do this if you have either TCP/IP Personal File Sharing or Personal Web Sharing enabled--which both default to off...and even if you turn on File Sharing the TCP/IP connectivity defaults to off.

  • by weave ( 48069 ) on Sunday April 21, 2002 @12:08PM (#3383487) Journal
    Thanks to stupid ad campaigns [wehavethewayout.com] and Microsoft saying that Windows servers are easy to administer and don't require expensive experts, the worth of Microsoft sysadmins everywhere is cheapened. As someone who administers Microsoft servers, it pisses me off enough that my bosses don't understand the level of intelligence required to properly administer large systems. Now I have Microsoft telling the top chiefs in orgs, basically, that you can get your Microsoft sysadmins much cheaper than Unix admins.

    Gee, thanks a lot.

    So you get what you pay for. You drive down the perceived value of a Microsoft sysadmin, and you fill these positions with poorly trained or MCSE-certified test-takers with no real grasp of the larger issues involved in administering *any* IT site.

    Any competent sys admin would ensure crap like this doesn't happen, no matter what the OS is.

    And if the gap in pay and value between Unix and Windows sys admins is widened, who in their right mind coming out of a CS degree in college (not some fly-by-night certification course) is going to want to use their training to specialize in the market that pays the least?

    • This specific problem isn't about whether M$ admins are good, bad, untrained, uninformed, or whether they are Gods(tm). This is a completely non-M$ issue.

      However, it looks bad for us who build and maintain networks and their security (or inherent lack thereof).

      Proper design is to have two or more DNS proxies in a DMZ (or better yet, two different DMZs facing two different ISPs) that relay any proper queries; never let an internal client have direct access out into the wild.

      Hiding all kinds of cruft behind NAT'ing gateways only hides design problems and exports your bad decisions to anyone who might be in your path on the Net.

      ttfn,
      A
        • This specific problem isn't about whether M$ admins are good, bad, untrained, uninformed, or whether they are Gods(tm). This is a completely non-M$ issue.

        The central issue is having something switched on by default when it might be better defaulting to off. This is certainly to some extent a Microsoft issue, simply because Microsoft are notorious for packing in "features" which are rarely needed, but which default to being enabled.
    • by Anonymous Coward on Sunday April 21, 2002 @01:08PM (#3383656)
      > So you get what you pay for. You drive down the perceived value of a Microsoft sys adm

      Unfortunately, your case doesn't hold so much water.

      Back in the day, pro-MS admins pushed Windows when it was obviously a poor choice. You (plural) won; your political agenda cost any number of people trying to do good work stature in their careers; you toppled competitors, and your favorite OS "won". You collectively fought that battle, actually more a multitude of personal power-play agendas, blindly, and at great cost to very many people. Now it's clear to a bazillion wannabes what game they have to play: Windows.

      Your market is saturating, and your salaries are being adjusted to match. Next time, be more careful when you (again, collectively) bad-mouth competing technologies of which you have no knowledge.

      Competent admins, in any OS, are fixed at maybe 10% of all admins available. Economics are based on supply and demand, not, ever, "getting what you pay for". When there are 2 people for every 1 job, you can expect lower pay no matter how good those 2 people are.

      > who is going to want to use their training to specialize in the market that pays the least

      Good question. The Monopoly lives, so it is now (by definition) the only game in town. The only competitor apparent is "Free Software", and that pays even less.

      Having done a number of TCO studies in my time: the pro-MS types who fought to advance their power base by pushing MS only shunted administrative dollars to MS. Admin costs of *NIX are higher, but not by as much as the costs shunted to MS license fees.

      So a typical 10,000-person corp paid upwards of US $20 million to upgrade to W2K. That's a lot of dollars that are no longer available to admins like you (singular).

      Not to be so hard on you... Computers are by their very design intended to capture "improvement" through automation, and retain that automation for the express purpose of permanently "disposing" of the entire related (paid) labor force. Administration is one area that can be vastly "improved" using automation. If we look at "appliances" we see they can, in fact, be improved to require nearly zero admin. Sooner or later, they will reach that goal and render their keepers redundant.

      Computers only need "one good soul" to carefully explain to them "how it's done". After that, a paid labor force is no longer needed to accomplish that goal. Today's IT "market" is based almost exclusively on the inefficiencies of its youth. But markets are designed to eliminate inefficiencies as quickly as possible, and your dwindling salary is a manifestation of them doing so.

      So, getting into computers is NOT such a wise career choice for people of college age. The number of "computer people" needed will be falling dramatically over the next decade. Good money now, but there just isn't the 40 year horizon one needs to call it a career.
      • Computers only need "one good soul" to carefully explain to them "how it's done". After that, a paid labor force is no longer needed to accomplish that goal.

        ... unless they run a Microsoft OS. Thanks to a security hole every week being patched and the cowardice of the people I work for to make a bold switch away from Windows, my job security is all but assured...

        I feel like a high-tech janitor. I just get to clean up shit all day long... :-(

      • Truthfully, I'm surprised that the career of computer programmer has lasted as long as it has. (N.B.: I didn't say sys admin.)

        OTOH, the job has changed significantly in that time frame. I attribute its longevity to the slowdown produced by the MS monopoly. (And, to an extent, I'm a bit grateful, in a guilty kind of way.) VisiCalc was the handwriting on the wall.

        However, this has just meant that the activity has shifted to a higher level. Now languages are expected to contain things like GUI building toolkits, or even full GUI builders. (Glade is an example here. It's relatively easy to add the ability to read the Glade XML file to a language.) N.B.: A language here is including not only the core features, but also the default libraries (e.g., Swing or AWT).

        I am less aware of the trends in system administration, but I assume that the same path is being followed. The early tools are clearly sub-optimal, but as time goes on they improve. They'd better. The ones that don't will fail to reproduce successfully.

        System administrators need to adapt to the changing environment. So do programmers. Both paths have a finite duration. (I.e., when computers start to manifest "common sense" the handwriting will be on the wall. Bloat be damned!)

        Once upon a time I did a forecast of future employment trends (as a kind of academic exercise). I wrote it up as a paper titled "Be a garbage man". This was based on the expected duration of the professions that I considered. Management is in a peculiar position here. The formal decision making that managers engage in is clearly something that they are incompetent at. But if there isn't a person at the top of the pyramid, many people get quite upset. Thus, ignoring for the minute the obvious advantage a manager at the top has toward job preservation, human nature seems to ensure that the top of the pyramid will be a person. Possibly a figurehead (one can hope?), but a person.

        If one includes political considerations this whole projection thing becomes a lot more complex. And unmanageable. But notice that whenever political considerations enter the technical folk tend to get the short end of the stick (because they don't pay enough attention). This means you!

        Don't expect any job that you take to last for 20-40 years. At least not without evolving into something you wouldn't have recognized at the beginning. Any job.
      • I agree. That's why I studied economics but work in the IT field :) ... Anyway, I think your conclusion is a bit exaggerated. My advice would be as follows:

        If you are basing your future income on learning Windows administration, you'll definitely be out of luck, because it has no permanent value. It will change all the time, be automated, "assimilated". You'll be relearning your basic skills every 5 years, and everything else you know will be "history".

        On the other hand, if you learn what "persists" through time (like programming, CAD basics, or generic database skills) then you will be able to focus on problem solving in hundreds of areas. If you combine these skills with those of an unrelated career which is likely to benefit from computing and communications (the Internet), then you get the best of both worlds.

        My opinion, though. It may also be the case that, for some reason unknown to me, things turn out very differently, with HUGE specialization and a very narrow scope of view for each individual.
      • The number of "computer people" needed will be falling dramatically over the next decade

        Hmm. I still remember hearing fifteen to eighteen years ago that in five years programmers would no longer be needed; the user would be able to do all the programming by using a "smart" program generator in an "interview" process.

        Well, I don't see programs being written by programs very often, and there are still quite a few programmers around. Even many with (whisper it) jobs. Powerful systems are flexible systems, and flexible systems are not simple. There will always be a growing need for "computer people". We can argue the curve, but it will always increase, not decrease, and the job will get harder, not easier. Just my .02 worth.

  • MS-DOS (Score:5, Funny)

    by sarcast ( 515179 ) on Sunday April 21, 2002 @12:09PM (#3383488)
    Hasn't MS had this around for a while now?

    They even called it MS-DOS...oh wait, that was Disk Operating System...nevermind.
  • putting this under the microsoft headline, i mean, i know you don't like them, but it's hardly fair to them, apple is doing it too! hatred is only successful if you annihilate them without being partisan.....
    • Re:What's with... (Score:2, Interesting)

      by archen ( 447353 )
      Well, it's more of an MS issue (even though OS9 is doing it too). With OS9 it's more like a special case; with Win2k it's more of a problem because it's a default. Despite the fact that it's pathetically easy to fix, the problem will be actually getting PEOPLE to uncheck a box.
  • Solution (Score:2, Funny)

    by standards ( 461431 )
    Here's the solution:

    1. Upgrade to Mac OS X. It's so cool.
    2. People use W2k on the internet? Is that safe???
  • by bogie ( 31020 ) on Sunday April 21, 2002 @12:28PM (#3383547) Journal
    You know, I never understood why they made this the default. And I am also surprised it took this long for anyone to loudly complain. The first thing I have always done when installing 2k/xp machines that don't need it is uncheck that option.

    MS clients should not attempt this unless they are on a 2k AD domain. This is also as someone pointed out a good reason to filter your outgoing traffic.

    It reminds me of when they had that "log on to network" check enabled by default for PPP connections, when 90% of ISPs didn't support it.
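    For reference, the per-adapter checkbox lives under the DNS tab of the Advanced TCP/IP Settings ("Register this connection's addresses in DNS"). If I remember the KB articles right, it can also be switched off machine-wide with a registry value; treat the exact value name as something to verify against Microsoft's own docs before deploying:

```
Windows Registry Editor Version 5.00

; Disable dynamic DNS registration for all adapters (Win2K-era boxes)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"DisableDynamicUpdate"=dword:00000001
```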
  • Look out, I think this is an MS plot

    First flood the root servers (running BIND), cause them to fail, and then claim that if they ran MS-DNS, this wouldn't be happening.
  • by ipsuid ( 568665 ) <ipsuid@yahoo.com> on Sunday April 21, 2002 @01:00PM (#3383635) Journal

    To quote from RFC1918:

    It is strongly recommended that routers which connect enterprises to external networks are set up with appropriate packet and routing filters at both ends of the link in order to prevent packet and routing information leakage. An enterprise should also filter any private networks from inbound routing information in order to protect itself from ambiguous routing situations which can occur if routes to the private address space point outside the enterprise.

    If you are connecting your internal LAN using a private address space (10/8, 172.16/12, or 192.168/16) you are obviously using a firewall or router configured with NAT.

    These need to be configured correctly for many different reasons, including the prevention of the effect mentioned in this article... Add null routes, or packet filter rules for any outgoing packets containing a destination falling in the RFC1918 address space. Also do the same for the incoming packets. By not doing this, you are flooding your upstream provider (in this case the root DNSs) with tons of bogus *(^@.
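    The membership test behind those filter rules is simple to state. A minimal sketch in Python (the function names and the client-side framing are my own illustration, not anything W2K or OS9 actually runs):

```python
import ipaddress

# The three RFC1918 private blocks quoted above
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls in private (behind-firewall) space."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

def should_send_update(my_addr: str, dns_server: str) -> bool:
    """A careful dynamic-DNS client: never leak an update for a
    private address toward a public (e.g. root) nameserver."""
    if is_rfc1918(my_addr) and not is_rfc1918(dns_server):
        return False
    return True
```

    The same test is what a border router's packet filter implements at the network edge, which is why the RFC puts the burden on the firewall as well as the client.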

    A few years ago I was lead engineer for a wireless internet company. Our clients were provided with a raw connection, just as if they had gotten a T1. After doing a week-long network audit shortly after starting there, I was amazed to find that over 80% of our customer base had internal configuration problems with their NAT setups. Sniffing on the network, I got to see everything from MS Browse messages, DHCP requests, Netware "burbs", and tons of other stuff that should never have left their LANs.

    I finally ended up installing firewalls at each POP site, just to dump out the extra junk... Our network speed increased by over 20% just blocking this nonsense at the POP (tower site) and keeping it from coming over our wireless backbone connections... On a typical 16MB/s link that's over 3MB/s of bandwidth we saved.

    • You might be using RFC1918 space because you're using NAT, but there are other reasons and other ways to configure firewalls. The important reason is that you aren't getting your IP address space from your ISP, so you're doing the right thing rather than picking random numbers that belong to somebody else. You might be using a proxy firewall in a DMZ to fetch web pages and handle email instead of using NAT, and you can implement it relatively simply even without the proper router filters :-)

      Of course, ISPs should be filtering out packets in RFC1918 space, and their DNSs should be managing the requests rather than bugging the root servers with them.

  • Who do you want to flood today?
  • by mrwilsox ( 174578 ) on Sunday April 21, 2002 @02:00PM (#3383796)
    This problem, along with many, many others, was described in a CAIDA paper, "DNS Measurements at a Root Server." They basically ran tcpdump on root server F and analyzed the traffic. An amazing number of invalid requests are sent all the time. It really shows how important it is for network admins to correctly set up their name services, but it also identifies problems caused by bugs in software. Very interesting read: http://www.caida.org/outreach/papers/2001/DNSMeasRoot/ [caida.org]
  • by Anonymous Coward
    CmdrTaco, this news article has six links, but only one of them actually relates directly to this particular piece of news. Please make it more obvious which one is correct. I'm tired of having to move the mouse over each one and check the address just to figure out which link actually gives me the news.

    (please mod this up so people see it! this is becoming a big problem on slashdot. and this is anonymous, so it's not karma whoring)
  • Frequency (Score:4, Funny)

    by rant-mode-on ( 512772 ) on Sunday April 21, 2002 @03:01PM (#3383949) Homepage
    How often does Win2K register these ip addresses? Is it once an hour or so, or is there really a million win2k boxes being rebooted every hour?
  • Not to make MS look better, but to give people a way to fix it: http://support.microsoft.com/default.aspx?scid=kb;en-us;Q259922 [microsoft.com]
  • That's *Mac*OS 9 (Score:3, Insightful)

    by Paladeen ( 8688 ) on Sunday April 21, 2002 @06:49PM (#3384595)
    What is it with people and writing MAC instead of Mac?

    Mac is short for Macintosh, it's not a bleeding acronym! I can put up with it when it comes to ignorant posters, but seriously, shouldn't the Slashdot editors know better?

  • is here [cctec.com].

    It's funny to see a ten-megabyte logfile produced every seven minutes *SLAP* woops. It's /not/ funny seeing a ten-megabyte logfile produced every seven minutes. I wonder what they use for logfile analysis; I think it's getting more information than it's able to process.

    Edwin
  • I am an administrator for some IP space that was assigned but never routed. Several years ago, I was wondering where the hell all my bandwidth was going and found a lot of it was DNS traffic trying to resolve IPs in that space. This was very odd, considering that it wasn't routed. The queries came at about 10 per second per IP address, and there were about 80 addresses two servers were querying for, for a total of 1600 requests per second. Now, there was no DNS server running on the host that these requests were going to, so they were sent port unreachable messages.

    Evidently what was going on was this large corporation was using MY IP space internally, but they weren't making their DNS servers authoritative for it, so the DNS servers went to the Internet (and to me) for resolution. Something somewhere was configured wrong and so they retried constantly.

    I firewalled these DNS servers out, but not before I composed email to the whois contact at the big corporation telling them to fix this stuff. They ignored me (yes I made sure their SMTP sending host was not blocked). Firewalling didn't fix the problem, only kept my server from sending port unreachable messages. The queries from the big stupid corporation's network were only getting worse. I was getting really pissed off.

    So I put up a DNS server on that host, and made entries for every single IP (I was using BIND, which is too stupid to have default responses). And I had fun, with obscene and abusive DNS names for every host, and forward resolution to match (in a silly domain also routed to the same DNS server) -- and the highest possible TTL! Problem solved!
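    Filling a reverse zone with an entry per IP like that is easy to script. A hypothetical sketch (netblock and hostnames invented, and rather politer than the ones described):

```python
import ipaddress

def reverse_zone_entries(cidr: str, domain: str):
    """Yield one BIND-style PTR record line per host in the netblock."""
    net = ipaddress.ip_network(cidr)
    for ip in net.hosts():
        last_octet = str(ip).rsplit(".", 1)[-1]
        yield f"{last_octet}\tIN\tPTR\thost-{last_octet}.{domain}."

# e.g. paste the output into the matching in-addr.arpa zone file
entries = list(reverse_zone_entries("192.0.2.0/28", "example.net"))
```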

    The funny thing is that this staid corporation was now seeing all sorts of nasty names on their internal servers...BAH HA HA.

    The abuse stopped. Hopefully, someone was fired. Now we know that they will never attack me again in this way: you see, that abusive network belonged to Enron :)

    I actually let them off the hook easily. I had, at this point, control over data being returned to servers well firewalled away: servers that probably ran ancient resolver libraries with buffer overflows. High-level servers that could have been r00ted straight through the firewall.

    moral of the story: don't leave dns work to weenies. You may be surprised at the results.
  • We (uconn.edu) detected this either last year or the year before with misconfigured windows clients (typically win2k AS where someone left the DNS service running with a default configuration).
