The Internet

DOS Attacks On DNS Provider

Greedo writes "Seems like UltraDNS was hit with a denial of service attack this weekend. Since these are the guys who are supposed to be running the .ORG DNS, and in light of recent attacks on the gTLD roots, attacks against DNS servers should be treated very seriously. What kind of protection can be had? What happens when an attack like this brings down an entire TLD? Do you want to give control of an entire gTLD to one organization? Read a follow-up discussion on comp.protocols.dns.std."

  • by 10Ghz ( 453478 ) on Monday November 25, 2002 @01:18PM (#4752179)
    I mean, isn't that a bit counterproductive?

    "Yes, I brought the entire DNS-system crashing down! I'm l337! Now, all I have to do is to go online and brag about my exploits... Hmmm... There seems to be something wrong with my net-connection..."

    • by Anonymous Coward
      if they're that 1337, they'll know all their favorite webpages by IP address.
    • by greechneb ( 574646 ) on Monday November 25, 2002 @01:24PM (#4752248) Journal
      Not when you are trying to make a political statement. I don't know if anyone has claimed responsibility for this yet, or if anyone will. But it would be a great way to scare the general public. It won't necessarily be as terrifying as hijacking planes, but it can spread some fear into many people. (mainly IT types)

      But as the world becomes more dependent on the internet, expect more attacks to resemble this one. Take down the infrastructure, and watch the rest tumble without it.

      Plus you don't have to commit suicide to terrorize the public. - Of course that means no virgins for you by dying in a holy war...
      • Or even more likely, IMHO, if you were a competitor of UltraDNS.

        So the question to ask is, "who would benefit from the demise of UltraDNS?"
      • by Anonymous Coward
        My employer, apparently, has expected something like this to occur. Starting last summer, we have been modifying all of the Unix hosts on the network to hard-code the locations of the important hosts: /etc/hosts now has the mail servers, web servers, etc., for all of the local network.

        The rationale behind this is simple: the dns boxes get dumb quite quickly when they lose their upstream connection. Once this happens, the dns for everything starts to fail, and even the internal hosts start having problems communicating. By using /etc/hosts and caching nameservers on all the hosts, we can delay (if not prevent) the stupidity that comes from the upstream dns being unreachable.
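
        A minimal sketch of that fallback idea in Python (the host names and addresses below are hypothetical placeholders, not taken from the poster's setup): try the normal resolver first, and fall back to a hard-coded table, the same role /etc/hosts plays, when the lookup fails.

        import socket

        # Hypothetical hard-coded addresses for critical local hosts,
        # playing the same role as entries in /etc/hosts.
        FALLBACK_HOSTS = {
            "mail.example.internal": "192.0.2.10",
            "www.example.internal": "192.0.2.20",
        }

        def resolve(name):
            """Resolve via normal DNS; fall back to the static table if that fails."""
            try:
                return socket.gethostbyname(name)
            except socket.gaierror:
                # Upstream DNS unreachable (or name unknown): serve the static entry.
                return FALLBACK_HOSTS.get(name)

        print(resolve("mail.example.internal"))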
        • by Desert Raven ( 52125 ) on Monday November 25, 2002 @02:55PM (#4752810)
          The rationale behind this is simple: the dns boxes get dumb quite quickly when they lose their upstream connection. Once this happens, the dns for everything starts to fail, and even the internal hosts start having problems communicating.

          I'd say it's your DNS administrators that are dumb. I've been maintaining DNS systems for years, and I've never had a DNS server so much as hesitate to serve authoritative addresses, no matter what was happening to the upstream connection.
      • by Blkdeath ( 530393 ) on Monday November 25, 2002 @01:55PM (#4752468) Homepage
        But it would be a great way to scare the general public. It won't necessarily be as terrifying as hijacking planes, but it can spread some fear into many people. (mainly IT types)
        Actually, the last DoS attack on the root nameservers sucked, but it didn't frighten IT people. The only people things like this frighten are Average Joe Consumer types who don't really understand how these things work. For them, the "web" is the "Internet", and anything that affects "the web" could bring down the whole Internet (as if it's just a few computers in a lab somewhere that can be shut down like shutting off a light switch).

        The DNS system was designed for redundancy; if it can withstand a direct nuclear attack on 60% of its facilities (viz. 6-7 of the root servers), it can withstand a DoS attack. Considering the upstream providers of each of the root servers are responsive enough to throttle the traffic to a more reasonable level, and the caching, hierarchical nature of the DNS system (except for mickey-mouse systems that query the root nameservers only, with no fallback support), it would take days to notice an outage. In that time, the root servers could set up spare boxes and have the system back up and running with relatively minimal disruption.

        To truly affect the operation of "the internet" as a whole, a DDoS attack would have to be sustained for days on end.

        • Nukes and Freenet (Score:5, Insightful)

          by 0x0d0a ( 568518 ) on Monday November 25, 2002 @02:22PM (#4752632) Journal
          For them, the "web" is the "Internet", and anything that affects "the web" could bring down the whole Internet

          Just one thought -- does Freenet use DNS at all? I *think* it doesn't. Because if not, it provides an existing, easy-to-migrate-to solution in case of such a catastrophic event. Just kick over to Freenet, no DNS required.

          The DNS system...can withstand a direct nuclear attack on 60% of its facilities

          As opposed to, say, those pesky indirect nuclear attacks? :-)
          • Freenet currently uses DNS for nodes configured to do so (namely dynamic DNS types). But, with recent discussion on freenet-devl, either address resolution keys will be implemented (meaning DNS-like resolution in Freenet) or IP address discovery will be integrated into the announcement protocol, negating the need for DNS either way.

            So, bottom line is: Freenet relies on DNS some of the time right now, but will not by the 0.5.1 release which is due shortly. In the case of DNS failure, however, the current infrastructure would still work -- heck, Freenet 0.3 would still work. (Sorta...)
            • Get one of the Freenet guys (or, if an EFF guy is willing to help out again, one of them) to point out that Freenet is the *ideal* protection against terrorist attacks on the information infrastructure of the United States.

              Consider all the "security" grants that are being thrown left and right at companies. They're lapping up all those tax dollars in the form of government contracts. If Freenet can grab just one, that would fund development for a long, long time. Lots of improvements, and I'd have a hard time imagining a more worthy cause than a more robust, secure, attack-resistant, private system that makes for more efficient transfers over the network.

              The overwhelming majority of my university's CS research funding comes from the Department of Defense. Freenet couldn't snag just a few of that flood of dollars going to organizations around the country?
                • Yes, the Freenet crew is well aware that their project can and will survive the eventual massive infrastructure failure. It's a fully distributed, highly adaptive network that's not tethered to any method of communication -- there are experiments with FNP (Freenet Native Protocol) over ham radio, for instance. And, of course, you could always light up private fiber or communicate via Iridium or some other satellite network.

                Unfortunately, Freenet is currently being used by a large number of child pornographers and could also easily be used (if it's not already) by people opposed to the DoD, so they would much rather not attract attention from the government...
          • Just kick over to Freenet, no DNS required.

            Where am I gonna download a client without DNS? ;-)
      • by Idarubicin ( 579475 ) on Monday November 25, 2002 @03:37PM (#4753125) Journal
        Not when you are trying to make a political statement. I don't know if anyone has claimed responsibility for this yet, or if anyone will. But it would be a great way to scare the general public. It won't necessarily be as terrifying as hijacking planes, but it can spread some fear into many people. (mainly IT types)

        Nobody has yet claimed responsibility. Makes it sound kind of noble, doesn't it? What nobody has yet done is admitted guilt. I have always taken extreme exception to the media's convention that terrorists and criminals claim responsibility for murder. It's not a prize. Confessed to slaughter, or declared lack of conscience, or asserted no concern for fellow human beings might be more appropriate. Criminals shouldn't be allowed--or worse, invited--to claim responsibility, only admit guilt.

        • While I agree with your sentiment, "claimed responsibility" is the most accurate phrase. Law enforcement agencies routinely deal with false confessions to high-profile crimes. Someone claiming that they are responsible is not the same thing as them actually confessing to being guilty, because for all we know they AREN'T guilty.

          It's not just a semantic or legal issue; the simple truth is that 45 people can't all be guilty of a shooting, but 45 people can all claim responsibility, so that's all any reporter could honestly say.
    • by sphealey ( 2855 ) on Monday November 25, 2002 @01:26PM (#4752273)
      You are assuming that the specific attacks on the DNS servers are being carried out by kids and "young dudes" working by themselves for the thrill of it.

      Whereas these attacks, as well as some of the worms that have surfaced recently, strike me more as testing of new techniques and probing of defenses by an organized group that is working on techniques to cause widespread disruption.

      sPh

      • More to the point, we should be welcoming this kind of attack (you know what I mean). If it shows that there is a weakness in the way that a vital component of the internet works, then knowing about it early means that solutions can be fielded and tested to secure the internet against these attacks.

        I am very glad that this kind of attack is being discussed in the open rather than being hidden from public view. Much better that it is discussed now rather than after somebody attempts to render the internet useless.
      • by curtisk ( 191737 ) on Monday November 25, 2002 @02:08PM (#4752543) Homepage Journal
        well said....ppl automatically jump to the "it's just a bunch of script-kiddies" mentality....there may be a HELL of a wake-up call some day....
          • Yep, the Weekly World News [weeklyworldnews.com], home of Bat Boy and "Iraqi Submarines Prowling Lake Michigan", has a giant headline in the issue I just saw at the checkout stand: "TERRORIST PLOT TO BLOW UP INTERNET ON 1-11!" [weeklyworldnews.com]

          The subheads are:
          * Computer virus will destroy US economy!
          * The US Military will be paralyzed!
          * Electricity, food and water supplies vanish!

          Clearly, we're ignoring these attacks at our own peril, when as technical a publication as the Weekly World News has picked up the story.

          (Back to reality, I literally burst out laughing and almost dropped my Mountain Dew when I saw that headline. Blow up "The Internet". Sounds like my daughter's friends... they come over and ask if her computer "has the Internet on it". No, it doesn't, but it has *access* to the Internet. "Oh, you mean AOL?" Grrr...)
          • That's a good one!

            But I especially like this part:
            "Iraqi Submarines Prowling Lake Michigan"
            Lake Michigan is of course so thick with Coast Guard (and Chicago Fire Dept, and Milwaukee Fire Dept, etc.) helicopters and ships rescuing newbie and ocean sailors who think that [lake] == [easy sailing] that a submarine would probably be run into the bottom in a matter of minutes!

            sPh

      • Now the skript kiddies are in with the government on the Conspiracy!
      • Whereas these attacks, as well as some of the worms that have surfaced recently, strike me more as testing of new techniques and probing of defenses by an organized group that is working on techniques to cause widespread disruption.

        Frightening as it is, I would agree with you. It seems that bragging rights would be much better for taking down amazon, yahoo, msn, or some other big name company. Attacks on infrastructure components which are not widely known to the public at large do strike me as a probe to see where the vulnerabilities of the network lie.

        After this period of explosive internet growth, we need to start addressing the vulnerabilities of the network. Whether the network can still withstand a massive physical attack or not, we know it is vulnerable to network attacks. I had a friend who used to work for MIT Lincoln Labs; he told me there were at least a dozen ways to take down the internet.

        • I had a friend who used to work for MIT Lincoln Labs; he told me there were at least a dozen ways to take down the internet.

          I had a friend who worked for Dunkin' Donuts who told me the same thing.

      • Why kids, why not organized adults with financial resources?
        The answer: WHY

        Kids.. it's fun, it's destructive, it's a sense of power.. the reasons go on. I shouldn't have to explain them.. go back, I'm sure many of you can understand.

        Adults.. and I'm not talking about big kids who never grew up here... need a financial reason to do this. Could organized, intelligent hackers with financial backing do some serious damage to the internet? You better believe it. What would they have to gain? Not much. Prison. Hatred. Being labeled as terrorists, maybe killed.

        What are you going to do? Hold the Internet for ransom? I doubt it.

        That's why this stuff is chiefly done by kids, not grownups.


    • Well of course it's unproductive -- that's the hallmark of crackers, script kiddies and virus developers. These dregs of our society do these things just for the perverse pleasure of seeing how much havoc they can cause...

      These people are degenerates, delighting in the misery of others. Such are not worthy of life.
    • by 4of12 ( 97621 ) on Monday November 25, 2002 @01:33PM (#4752333) Homepage Journal

      isn't that a bit counterproductive?

      Absolutely.

      OTOH, if you were in the business of providing a spoofed name service, then this would be the first step in doing so.

      At any rate, it sure seems like access to a critical top level DNS should be filtered to a big white list of mirror machines, which could then handle general purpose inquiries.

      That, or increase the number of TLDs, but that's already an insolubly bad political problem.

      • At any rate, it sure seems like access to a critical top level DNS should be filtered to a big white list of mirror machines, which could then handle general purpose inquiries.
        Sorta like section 3.3.4 of RFC 2870 [faqs.org]?
        3.3.4 A 'hidden primary' server, which only allows access by the
        authorized secondary root servers, MAY be used.
        Besides which, a lot of the beefy top-level DNS servers are actually a bunch of identical servers behind some load balancing solution, so this makes a whole lot of sense.
        • 4of12's suggestion [slashdot.org] for whitelisting is different from the RFC2870 advice. The RFC essentially permits the machines in root-servers.org to have a hidden master, but it doesn't apply to non-rootservers, such as the DNS servers at big ISPs, which is where most people get their DNS from. In fact, it forbids root-zone transfers from non-rootserver machines, though it permits the rootservers to run an FTP mirror for outsider downloads.


          4of12's suggestion would let the rootservers run a server that's only accessible from known (and presumably important) addresses, such as the DNS servers for the big ISPs. That would take care of the most important uses of DNS, since most people get their DNS queries answered by their ISP's servers, either from cache or from recursive queries. Letting the big ISPs do zone transfers from a protected net would preserve that. (Without zone transfers, an obvious attack is for the zombies to look for bogus000001.com, bogus000002.com, etc.)


          Beyond that, DNS queries and zone transfers aren't the only way to send the information around. DNS A-record data compresses well. (Unfortunately, DNSSEC data doesn't, and it's much bulkier.) And everybody wants the same data, so multicasting can be an efficient way to transmit it (using your favorite reliable-multicast application). A back-of-the-envelope guess is that the dot-com namespace would compress to somewhere between 100-300MB, which would take 10-30 kbps to transmit in a day - and most of it has a TTL that's much longer, so you could handle it efficiently with incremental updates. Another alternative to multicast would be a peer-to-peer app that's designed for handling big files, like BitTorrent [bitconjurer.org]. (BitTorrent is designed more for static content rather than dynamic, so you'd need some file naming scheme for fetching today's version.)
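
          As a quick sanity check on that back-of-the-envelope figure, here is the same arithmetic in Python, using the 100-300MB assumption above:

          # Rough check: shipping a compressed .com zone snapshot once per day
          # at a steady rate.
          SECONDS_PER_DAY = 24 * 60 * 60          # 86400

          for size_mb in (100, 300):              # compressed zone size, per the estimate above
              bits = size_mb * 1e6 * 8            # megabytes -> bits
              kbps = bits / SECONDS_PER_DAY / 1e3
              print(f"{size_mb} MB/day ~= {kbps:.0f} kbit/s sustained")

          That comes out to roughly 9 and 28 kbit/s respectively, which matches the 10-30 kbps figure.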

      • At any rate, it sure seems like access to a critical top level DNS should be filtered to a big white list of mirror machines, which could then handle general purpose inquiries.

        Does not actually help at all. Basically there is no value to the dot unless the TLDs under it are also up. If someone can take out the root they can take out dotCOM, dotNET and probably anything else they choose.

        The major TLDs are replicated many times with very sophisticated and comprehensive setups that are considerably more robust than the various ad hoc proposals being made to replace them. Bernstein's suggestion of using USENET is a particularly clueless example. In the first place, USENET is not even reachable as a general purpose infrastructure; secondly, the architecture is exceptionally vulnerable to DoS. One compromised node could bring down the whole USENET. The only reason that people don't attack it is that it simply isn't important enough; use it to distribute the root zone and you make it a target.

        What we should really do is can ICANN and simply open up the root zone for registrations at a reasonable rate (i.e. $500, not $50,000). The dotCOM infrastructure can easily be scaled to handle the load. The registration fee would allow for up front verification of trademark claims. There could be a rational complaints procedure based on prior review, registrations in the TLD would be subject to a 12 month public comment & objection period before being activated. Failure to complain during that comment period would result in a strong presumption in favor of the registrant. Registration of a TLD would automatically block further registrations in the other TLD zones at the option of the cc operators.

    • It's not a problem (Score:5, Insightful)

      by Ted_Green ( 205549 ) on Monday November 25, 2002 @01:34PM (#4752338)
      If you're using an alternative root server.

      And in all honesty, I would say that if the "official" root servers can't protect themselves, they really have no business being root servers (TLD or otherwise) in the first place.

      • Did they fail to protect themselves? Because as with the previous DNS attacks, I was using the Internet as usual throughout the whole thing and never even noticed.

        Raising the question, how many of us actually noticed this before reading about it?

        • I don't know much about the UltraDNS stuff.. as for the other thing:

          7 of the 13 servers went down for a bit, and because of caching and redundancy this wasn't really a noticeable thing.
          It might be, however, if a million Windows boxes commenced such an attack over days.

          When it comes right down to it, I think the root operators are doing a pretty good job all things considered. (They're already looking at ways to protect themselves.)

          However, if this had been an attack on Verisign's .com zone file then I suspect a rather large number of users would have experienced some rather large problems.

          There was a lot of force behind the blow, but the punch wasn't aimed well.

          What's bothersome is what might have happened if this had been done by someone who knew what they were doing. (That's assuming it was an attack and not a warning, or a test of some sort.)

      • How exactly do you protect against an attack whose "payload" is sheer data volume? Make sure your pipe is bigger than the aggregate bandwidth available to every previously compromised host on the internet? How feasible is that? Aside from that, the attack wasn't even against a root server, it was against a DNS provider.

        maru
  • I thought ISOC was about to run the .org TLD in cooperation with Afilias? I've never heard about UltraDNS before - do you have any further links about UltraDNS managing .org?

    Thank you very much!
    • Re:ISOC? (Score:4, Informative)

      by Anonymous Coward on Monday November 25, 2002 @01:36PM (#4752347)
      Afilias uses UltraDNS for their DNS Infrastructure. It was in the proposal. Here's the link to the UltraDNS press release.

      http://www.ultradns.com/news/021028.html
  • by Streiff ( 34269 ) on Monday November 25, 2002 @01:19PM (#4752192)
    Good thing MS is killing DOS in December. It's way too violent these days.

  • by Anonymous Coward on Monday November 25, 2002 @01:20PM (#4752199)
    It's not that big of a deal, since most people's DNS requests never reach the TLD servers. Instead they're handled by a mirror at a lower point on the tree.

    But, still, we should catch these DOSers and throw them into a federal pound-me-in-the-ass prison.

    Damned arab terrorist scum! Down with Saudi Arabia!!!
    • It's not that big of a deal, since most people's DNS requests never reach the TLD servers. Instead they're handled by a mirror at a lower point on the tree.

      The most recent attack wasn't on the root nameservers, it was on UltraDNS, which is a large-scale commercial DNS hosting provider. A lot of big sites rely on their DNS service.
  • DOS Attacks On DNS Provider
    And here I thought DOS wasn't supported [slashdot.org] any more. Go fig.
  • .ORG TLD... (Score:5, Funny)

    by AyeRoxor! ( 471669 ) on Monday November 25, 2002 @01:20PM (#4752202) Journal
    Thought you would find this funny:

    In IE, I entered ORG and hit enter, just to see what would happen. Although highly unlikely, they could arrange some page there. Instead, MS search brought up a list of possible alternatives. Number one on the list?

    Mozilla.org

    Thanks, Bill :)
  • I was wondering why /. seemed a bit sluggish...
  • by fo0bar ( 261207 ) on Monday November 25, 2002 @01:21PM (#4752208)
    The ad at the top of the /. homepage was for UltraDNS as I was reading this story. Any publicity is good publicity, I guess...
  • Guardent [guardent.com] is making a lot of noise about this sort of thing. Conspiracy theorists unite!
  • Very surprising (Score:5, Informative)

    by ekrout ( 139379 ) on Monday November 25, 2002 @01:23PM (#4752240) Journal
    I have seen the UltraDNS ads here at Slashdot and thusly decided to read up on their techniques as well.

    Basically, they urge large, important Web sites to outsource their DNS needs to another company (them). Before this DOS attack on their servers, they provided near-perfect stability, security, and performance. If I recall correctly, Hotmail [ultradns.com], Forbes [ultradns.com], and Oracle [ultradns.com] have already used the services of UltraDNS.

    It's a shame that such a wonderful resource (the Internet) is so often abused by a few rowdy hackers and trolls [slashdot.org].

    Here is a whitepaper [ultradns.com] that describes their services in depth and explains the reasons for outsourcing one's DNS needs.
    • Re:Very surprising (Score:3, Insightful)

      by swb ( 14022 )
      I never quite got the whole outsourced DNS thing.

      Is it a question of just providing global geographic and network diversity for a site's nameservice, or is there something here that I'm missing?

      If I were example.com, with an office in two locations (NY and LA) with a T1 in each, and I had three NS records, ns-la.example.com, ns-ny.example.com and ns.myisp.com, what are they going to offer me that I don't already have?

      Proprietary firewall technology? OC-192s to 10 providers? Some home-brewed nameserver software more immune to hack attacks? Some kind of latency measure that replies with better A records?

      They're all nice, but they're all expensive, although maybe I'm missing out on something I should have.
      • They offer pretty much what you listed... and given the infrequent, but bloody impossible to track down, "address found, but no resource of requested type available" errors I'm getting these days from securityfocus mailing lists, even despite a spread setup like you mention, I'm starting to think hard about it. Evidently -something- isn't right with my local setup, but I'll be damned if I can find it.
      • Re:Very surprising (Score:5, Informative)

        by Johannes ( 33283 ) on Monday November 25, 2002 @02:40PM (#4752724)
        Disclaimer: I used to work at UltraDNS until a couple of months ago when I was laid off.

        The service provides a couple of advantages:

        Better latency. They use an anycast routing network which guarantees that a query to their DNS servers will be received and answered by the closest server based on the network topology. Even though there are only 2 published IPs for nameservers, there are some 16 servers scattered around the globe to answer on those IPs.

        Near real time database updates. They use an Oracle advanced replication network to get updates out to the other servers in near real time.

        Proprietary software. The only significant advantage here is that it's not BIND.

        All in all, it's about as good as DNS will get. Do you need it for your personal domain? Hardly. Do you need it for a popular domain like slashdot.org? Probably not.

        It works best for really large and really popular zones, like TLDs.

        However, it's still going to be better (albeit not as significantly) for your personal domain too.

        Anyway, bandwidth isn't really the issue with DNS. It's latency and availability.

        The problem with your example is that chances are, your DNS server in LA will be getting queries for Europe, which isn't all that ideal. Once again, is it that important? Not really.

        But it will work obviously.
  • What are you talking about? UltraDNS.com is as cheap as dirt. I would charge my clients more money to configure BIND than UltraDNS.com would charge in a year. Easy choice.
  • by Anonymous Coward on Monday November 25, 2002 @01:27PM (#4752280)
    is the following line in my hosts

    66.35.250.150 slashdot.org :)
  • by martin ( 1336 ) <<maxsec> <at> <gmail.com>> on Monday November 25, 2002 @01:28PM (#4752294) Journal

    Seems this was a distributed DDoS (DDDOS - sounds like a stammer :-), many people got this..

    http://www.merit.edu/mail.archives/nanog/msg05349.html

  • Since these are the guys who are supposed to be running the .ORG DNS, and in light of recent attacks on the gTLD roots, attacks against DNS servers should be treated very seriously.

    Should be? They are. The FBI and the Department of Homeland Security are already investigating this.
  • Progress? (Score:2, Interesting)

    I think the original concept of the web got lost somewhere. I was under the impression that the Internet itself was designed [by Al Gore :)] to not have a "control center," so that it could function even if most of it was destroyed. But now the internet has been altered into a network that relies on a few DNS servers. Why? So my bookmarks don't have to keep track of IPs? That seems silly. I am also pretty certain that my email address will cease to function without DNS servers as well. So without DNS I can neither access web pages nor email. This is somehow progress?
    • DNS Servers (Score:4, Informative)

      by sjanich ( 431789 ) on Monday November 25, 2002 @01:38PM (#4752368)
      It is more than just a few servers.

      Generally each "server" has multiple separate internet connections. The server itself is usually a set of two or more machines acting as one. The servers are distributed around the internet; they are not concentrated in one place, either geographically or network-topologically.
    • Re:Progress? (Score:3, Insightful)

      by zmalone ( 542264 )
      I realize that this is probably a troll, but if you really are clueless, I guess I'll fill you in. DNS does not replace the IP system, it expands upon it. If the DNS hierarchy were to disappear there would be no negative effect upon the internet; you would just lose the ability to use symbolic names. If you really want to remove that "weak" link, you're welcome to use IPs, and if the DNS fails, you can continue operating as normal. I personally think missing net access every once in a while is far less bothersome than memorizing IP addresses or adding them to my hosts file.
      • Re:Progress? (Score:2, Interesting)

        by Bizaff ( 443681 )
        I agree that DNS is not supposed to replace IP, but what I think registered_user was saying is that everyone's address book says person@host.name, not person@127.0.0.1. Losing the use of symbolic names IS disastrous. It won't stop you from getting where you know the IP, but how many IPs do people know off the top of their heads?

        If DNS goes away, how is that mail going to get routed? How will people browse all the other sites people only know by name? Sure, you can have an updated /etc/hosts, but I know I don't want to maintain one for every site I visit.

        Sure, you have the redundancy of secondary DNS servers.. but what if someone takes most of the root servers down, and compromises the others to start giving out the wrong IPs? OK, this is a little contrived, but I see what registered_user is getting at. We ARE awfully dependent on DNS.

        I'm jus sayin!
      • Re:Progress? (Score:2, Interesting)

        by jafiwam ( 310805 )
        Smaller web sites tend to be co-hosted on the same IP, using the HTTP Host header to specify what virtual web to use for any given request.

        So using the IP of a smaller site is likely to get a "Default" install page for the web server software, or to the hosting company's own web site. (Using a http://###.###.###.### request to an IP is one of the tricks that can be used to track down who is hosting some site you don't like, spammers or whatever.)

        The only way to visit one of those without the DNS system would be to use a hosts file on the local machine so the Host header comes into the web server correctly. DNS servers are left out of the loop entirely in that case.

        For small web sites, "no DNS" means "not on the net". (Big web sites probably have an IP to themselves, so the IP address would work just fine in a browser, but how much database-driven stuff looks at the URL to make sense of what to do...)

        DNS and IP are complementary systems for allowing data transfer. DNS has a very different function, routing meaningful traffic (not just packets, but web sites and other services) to people; it sits over the IP layer, which just cares about getting packets from one place to another.
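
        A small illustration of the Host-header point, assuming a hypothetical shared-hosting IP and site name (neither is real): connect to the IP directly and supply the Host header yourself; a hosts-file entry achieves the same thing for a normal browser.

        import http.client

        # Hypothetical values: a shared-hosting server's IP and one of the
        # virtual sites living on it.
        SERVER_IP = "192.0.2.80"
        SITE_NAME = "www.example.org"

        conn = http.client.HTTPConnection(SERVER_IP, 80, timeout=10)
        # Supplying Host ourselves selects the right virtual web on the shared IP;
        # hitting the bare IP without it typically returns the server's default page.
        conn.request("GET", "/", headers={"Host": SITE_NAME})
        resp = conn.getresponse()
        print(resp.status, resp.reason)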
  • Is it realistic? (Score:3, Interesting)

    by Itsik ( 191227 ) <demiguru-at-me.com> on Monday November 25, 2002 @01:37PM (#4752358) Homepage
    I truly question whether it is realistic to bring the entire system down. There are so many servers around the world offering a redundant service that it would be hard to actually "feel" that the root DNS server is no longer available, which gives whoever runs the affected system quite a bit of time to bring it back up.
  • by Alethes ( 533985 ) on Monday November 25, 2002 @01:37PM (#4752359)
    How badly can attacking the root DNS servers affect the Internet experience, since DNS is so decentralized? If the root server is down, that doesn't prevent the thousands of intermediate DNS servers from being able to resolve domain names for the users, right? It seems like it'd only be able to prevent the propagation of new domain names. What gives?
    • Not decentralized (Score:2, Informative)

      by meldir ( 571781 )
      DNS is decentralized in the sense that no server holds all information; servers only hold information for a certain part of the domain space. However, *no server can cache all information*, and to answer queries, these servers must ask other servers. And to know which servers are authoritative for a certain domain, you'll have to ask the root servers. This makes DNS pretty centralized in the end. And vulnerable.
    • How badly can attacking the root DNS servers affect the Internet experience since DNS is so decentralized?

      DNS isn't really that decentralized. OK, you don't need access to the root zone itself that often. It's the big TLDs like .com and .org that are the big problem. And yes, if you have a good infrastructure it will be cached somewhere upstream. However, some proportion of these will time out if the DDOS is sustained for any length of time.

      For DHCP, say, you refresh before the timeout, so there is a minimum downtime of your DHCP server before the client's lease times out altogether. AFAIK, for DNS, when the TTL expires that's it; so some sites will start dropping out of the cache as soon as authoritative DNS becomes unavailable.
  • by Anonymous Coward
    not very nice to post the link to their site. Now not only have they had to endure a DDoS ping flood attack, they'll have to deal with the /. effect!

    artaxerxes
  • by Jugalator ( 259273 ) on Monday November 25, 2002 @02:04PM (#4752520) Journal
    Look at this [internettr...report.com], especially that huge packet loss spike at 11/24...

    Seems suspicious, although that site hasn't put up any news about it like they did with the major DNS attack a couple of weeks ago.
  • Dan Bernstein (Score:4, Insightful)

    by tuxlove ( 316502 ) on Monday November 25, 2002 @02:06PM (#4752529)
    Reading that Usenet thread was ugly. Dan Bernstein has the unsurpassed ability to present (often) good ideas while being a complete prick.

    Dan, if you want people to take you more seriously, try being human once in a while. You don't need to prove just how damn intelligent you are by beating other people over the head with their own "ignorance". You might want to work on your own ignorance in the social skills department first.

    That said, transmitting the entire root zone over Usenet and other means sounds like a good suggestion. I hope you can start sounding like less of a lunatic so people will listen to the idea.
    • Re:Dan Bernstein (Score:4, Interesting)

      by SiliconEntity ( 448450 ) on Monday November 25, 2002 @03:37PM (#4753128)
      I met Bernstein briefly, and he seemed like a nice guy in person. He's relatively young, 30-ish, and soft spoken. But online he comes off as some kind of know-it-all curmudgeon.

      Personally I liked the suggestion in the Usenet thread to return expired DNS cache data when the authoritative servers are unreachable, at least as an option. 99% of the time when you can't do a host lookup, the old cached data would still be right. All the DNS purists hated the idea of using expired data, like it's unclean or something. But if it's all you've got, isn't it better to use old information than to give up on letting the net work at all?
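
      A rough sketch of that serve-stale idea in Python, purely illustrative (no real resolver works exactly this way): keep answers past their TTL and hand the expired copy back only when a fresh lookup fails.

      import socket, time

      _cache = {}   # name -> (address, expires_at)

      def lookup(name, ttl=300):
          """Prefer a fresh answer; fall back to an expired cached one if DNS fails."""
          addr, expires = _cache.get(name, (None, 0))
          if addr and time.time() < expires:
              return addr                        # still within TTL
          try:
              addr = socket.gethostbyname(name)  # ask the real resolver
              _cache[name] = (addr, time.time() + ttl)
              return addr
          except socket.gaierror:
              return addr                        # expired (or None): better stale than nothing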
    • Re:Dan Bernstein (Score:2, Insightful)

      by efflux ( 587195 )
      I'm not familiar with the person in question, but I know the attitude, and I agree whole-heartedly. It's made it so that I can't stand to use UseNet, no matter what the group. You *will* run into freaks like these, and there is no use in trying to present an argument or to extract an argument out of these people so that you can understand the issue at hand. These attitudes destroy academia and investigative thinking.

      I even ran into an individual IRL who had this genius complex as he was trying to sell me on an Open Source project he was working on. He was so unbearable I didn't want to work with him.

      To people with such complexes, I suggest you have them read Nietzsche. He has a lot to say about "the cult of the genius". Though I disagree with him on many counts and feel he suffered from the same delusions he denounced, I have to agree with his reasoning in this matter.

      He may have mentioned this in several of his writings, but in particular, I am referencing _Human, All Too Human_.
    • Heh. Bernstein is cool. Although he uses dubious (IMHO) code practices, such as having entire functions with one-character names and all variables with one-character names, and calling _exit(), his code makes small executables (probably the lack of long debug symbols, eh?) and doesn't have security holes.

      Also he's prepared to tell dicks that they are dicks - something that is unfortunately rare these days.
  • by jwdeff ( 629221 ) on Monday November 25, 2002 @02:10PM (#4752550) Homepage
    All ISPs should have access lists on their routers allowing traffic out only if the source address is within their network. Directed broadcasts should be turned off to limit smurf [smurf.com] attacks [cert.org]. This by itself would cut the problem tenfold.
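
    The rule being described is simple egress filtering; here's a small sketch of the check in Python, with a hypothetical documentation prefix standing in for a provider's real allocation:

    import ipaddress

    # Hypothetical customer prefix; a real ISP would use its own allocations.
    CUSTOMER_NET = ipaddress.ip_network("192.0.2.0/24")

    def should_forward(source_ip):
        """Egress filtering: only pass packets whose source address is ours."""
        return ipaddress.ip_address(source_ip) in CUSTOMER_NET

    print(should_forward("192.0.2.42"))    # True  - legitimate customer traffic
    print(should_forward("198.51.100.7"))  # False - spoofed source, drop it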
    • I can think of situations where someone might have a slow link for upload (e.g. 56k modem on phone line) but a completely different link for faster downloading (e.g. satellite dish).
    • It's possible that the weird x.x.0.0 addresses were a programming bug (forgot to run a loop?), but my initial guess was that it was trying to trigger the old-style directed broadcasts (remember when all-zeros was the broadcast instead of all-ones?), guessing that many people have the sense to block all-ones directed broadcast.
  • Time for a new model (Score:5, Interesting)

    by laigle ( 614390 ) on Monday November 25, 2002 @02:24PM (#4752644)
    Given these attacks, maybe it's time to shift the DNS model to something more distributed. Say a P2P network of all the DNS servers, which would feature client-side intelligent load balancing (i.e. it only queries past your ISP's DNS when it needs to). It wouldn't take a whole lot, since it only needs to be capable of a very minute series of transactions. You could throw in CRC codes and a verification system if people wanted to be extra paranoid about it.

    Of course, ultimately you have to have some sort of root server. But in a distributed model, they could be essentially insulated from DOS attacks, because they just need to get the master list out to a few systems for it to propagate all over. There could be a redundant distribution mechanism whereby the root servers send the list out through normal channels, but also send it to some randomly selected servers by phone call as a backup. At that stage hosing the root servers (or more accurately their connections, I doubt anyone is gonna ping one of those things to lockup) would not only be difficult and dangerous, but pointless. You cut off its connection via the internet, but the list still gets out and immediately spreads to so many DNS servers you couldn't possibly shut them all down, and you would have to shut down most of the world's DNS servers to have any impact on users.

    Ultimately it wouldn't change things too much, since we're already pretty insulated from these attacks. But it does have a nice "just in case" factor to prevent some megaworm or Y2K-style OS-pervasive glitch from knocking us on our butts. And it would take the wind out of the sails of a bunch of the script kiddies (and the odd genuine hacker) out there trying to crash the net, which is almost worth it in and of itself.
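
    The verification piece mentioned above could be as simple as comparing a digest of the received master list against one published out-of-band (a cryptographic hash is a safer choice than a bare CRC if tampering is the worry). A sketch in Python; the file name and the out-of-band digest are placeholders:

    import hashlib

    def zone_digest(path):
        """SHA-256 of a received zone/master-list file, to compare against a
        digest published out-of-band (signed, or fetched from several peers)."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical usage:
    # if zone_digest("root-zone-copy.txt") != published_digest:
    #     raise ValueError("refusing to install an unverified zone copy")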
    • Say a P2P network of all the DNS servers, which would feature client side intelligent load balancing (ie it only queries past your ISP's DNS when it needs to).

      Set your nameserver to forward all your requests to your ISP's DNS instead of having a "." hint zone.

      Of course, ultimately you have to have some sort of root server. But in a distributed model, they could be essentially insulated from DOS attacks, because they just need to get the master list out to a few systems for it to propagate all over.

      Isn't that what we have now?

  • by lazlo ( 15906 ) on Monday November 25, 2002 @02:29PM (#4752676) Homepage
    There is an elegant solution that seems tailor-made for this particular problem (i.e., massive-bandwidth DDOS of a small number of servers serving a stateless UDP-based service). It's called anycast, and it's being used successfully now. An excellent example of its use is the AS112 project [as112.net].

    Here's a quick overview I found: http://www.pch.net/documents/tutorials/ipv4-anycast/ipv4-anycast.ppt [pch.net]

    Now if we can just get all or most of the root-servers and gtld-servers moved to anycast, then there should be at least minor performance gains, and fairly large stability/resilience-to-DOS gains.

  • Doh! (Score:5, Funny)

    by spruce ( 454842 ) on Monday November 25, 2002 @02:54PM (#4752802) Journal
    So as the battle weary sys admins from UltraDNS finally get back home from fighting a DDOS attack....

    Phone rings.

    "Bob, the web server is under attack again, and this one's coming from all around the globe. Game over man, game over."

    Slashdot's a bitch.
  • Do you want to give control of an entire gTLD to one organization?

    Hmm.. trolling for ICANN haters? I see no particular security problem with a central authority managing a TLD, provided that their backup servers are distributed widely in both the geographical and topological senses. We shouldn't confuse this particular issue with that of whether a central authority like ICANN should have the right to control who can and cannot create new TLDs.
  • is there any information on whether the DDOS attack on UltraDNS actually affected service?
    The UltraDNS infrastructure has 16 or so machines on the same IP number. So it's harder to hit all of them. And it's not BIND, so it may be harder to bring down. (not sure it matters - the root DDOS didn't crash BIND either).
    And of course UltraDNS is typically not serving all of the secondaries for a zone.
    If anyone has real info....
