The Internet

More Info on the October 2002 DNS Attacks 232

MondoMor writes "One of the guys who invented DNS, Paul Mockapetris, has written an article at ZDNet about the October '02 DNS attacks. Quoting the article: 'Unlike most DDoS attacks, which fade away gradually, the October strike on the root servers stopped abruptly after about an hour, probably to make it harder for law enforcement to trace.' Interesting stuff."
This discussion has been archived. No new comments can be posted.

  • by Quaoar ( 614366 ) on Saturday January 11, 2003 @04:35PM (#5063333)
    First they kill 3000 people...then they deny us the Internet for a COUPLE HOURS! This time...it's PERSONAL!
    • by CAIMLAS ( 41445 ) on Saturday January 11, 2003 @08:34PM (#5064481)
      Since terrorists have some sort of political agenda, and these k1ddi3s who attacked the root servers did NOT, that makes them non-terrorists. Terrorism requires a political agenda.

      A better description would be anarchists. Anarchy is lawlessness and disorder as a result of governmental failure (in this case, to set up a system where the root servers are safe, but not particularly so).

      But then, we can't say that, can we? Anarchy is popular here on slashdot.
  • Solution? (Score:4, Funny)

    by Brain$torm ( 639876 ) on Saturday January 11, 2003 @04:37PM (#5063339) Journal
    The solution would be just to get rid of the ping command ;)
    • My university denies ICMP packets altogether. It's annoying sometimes, but I can understand why they do it.
  • by pootypeople ( 212497 ) on Saturday January 11, 2003 @04:38PM (#5063347)
    As email viruses expanded from an original concept, their authors began to adapt to the strategies used both to catch them and to deal with their creations. As a result, newer viruses have been more damaging. The October attacks showed a greater level of sophistication solely because the people behind these types of attacks are aware of what's going on and pay attention in order to make them more successful. The scary part is that the longer people like this are able to elude law enforcement, the larger their attacks will eventually become. Each one is, in essence, a trial run for the next larger attack. Watching attacks like the ones that have plagued dal.net for a long time, it's easy to see how these attacks could end up causing serious problems (beyond the minor inconvenience of not being able to get to your favorite sites) in the near future.
    • by afay ( 301708 ) on Saturday January 11, 2003 @04:53PM (#5063423)
      Actually, the article says that the root DNS attacks weren't very sophisticated at all. They used simple ping flooding and apparently stopped abruptly after 1 hour (to elude law enforcement). Fortunately, to actually have an effect on a significant portion of the internet population, the attacks would have to have continued for much longer due to caching.

      I'm really curious how "The October attacks showed a greater level of sophistication" than past attacks? As far as I can tell the attacker just had a bunch of cracked boxes with decent pipes to the internet and started a ping -f on all of them.
  • by Malicious ( 567158 ) on Saturday January 11, 2003 @04:40PM (#5063365)
    Meanwhile, thieves broke into a local jewelry store, then left.

    Unfortunately, the thieves didn't wait for law enforcement officials to show up, making it much harder to identify them.

  • Dalnet DDOS Attacks (Score:5, Interesting)

    by mickwd ( 196449 ) on Saturday January 11, 2003 @04:46PM (#5063398)
    The Dalnet IRC network has been crippled for months due to continuing DDOS attacks. Now Dalnet is based on a small number of central IRC servers (20-30 I believe) so it isn't too far removed from the core DNS infrastructure (i.e. the root DNS servers).

    Why don't Dalnet and the FBI (or whoever) get together to solve a mutual problem?

    Dalnet could get some much-needed help, and the FBI could get some much-needed experience into investigating this sort of attack. They would also be dealing with someone (or some people) who could move on to attacking bigger things.

    Also if they caught the attackers, they would get some useful publicity, some justification for an increased spend on cyber-deterrence, and the deterrent effect of having the perpetrators suitably punished - as well as putting a genuine menace behind bars.

    • by Anonymous Coward on Saturday January 11, 2003 @05:05PM (#5063492)
      It's virtually impossible to trace it back to the originator. First off, they are using slave machines: machines belonging to ordinary people who aren't aware their WinBlows system got infected with a trojan, just because they haven't paid attention to the latest security hole.

      M$ is just as much a part of the problem. With more and more cable, DSL and other "always on" connectivity available, more and more of these machines are vulnerable.

      Scanners out there can easily identify and infect 1,000 home users' machines, and these attacks come from them. The actual perpetrator is long gone. All they do is momentarily log in and "fire it off", then they immediately log out, and they are gone.

      Tracing IPs back to the attacker is just going to identify the innocent machines or owners who are totally unaware of their activity until they either power down their machines or somehow discover it.

      • "Scanners out there can easily identify and infect 1000 home user's machines, and these attacks come from them. The actual perpetrator is long gone. All they do is momentarily log in and "fire it off", then they immediately log out, and they are gone."

        But an ISP (or some body such as the FBI) may be able to identify all the packets travelling to an infected machine on its network, and perhaps trace which machine is connecting to it to co-ordinate the attacks - or at least the first machine in a chain.

        Or perhaps other means of dealing with the problem could be investigated (routing protocols, or whatever). Also, the ISPs which allow outgoing source IP addresses to be spoofed could be identified. If spoofed source IP addresses become a huge problem to significant parts of the internet, those ISPs could be asked, pressurised (or legislated against) in order to stop this - if technically feasible (sorry, but I'm no networking expert).

        OK, people may not think it worth doing just to save a single IRC network, but it's not a problem that can be ignored for ever while it gets worse and worse (due to the reasons you give in your post) and becomes a threat to more and more of the internet.
        • Oh, it's possible. But you will see more laziness on it than you could even imagine. Most even have enough wiggle room in their contracts to enforce it. A decent router can log that crap. It can look at the IP header. In fact it MUST look at it to route it.

          It is beyond me why the ISPs would even want one crap packet coming out of their network. It's costing them money. Their upstream connection costs money...

          For some interesting numbers, go take a look at MyNetWatchman [mynetwatchman.com]. These dudes even TELL the ISPs that there is something wrong. But most just get ignored.

          Truth is, most people couldn't care less that their computer is doing something wrong. They just want a bit of email and to surf a bit. Hell, most just want it to stay up long enough, and be a bit faster, considering the 300 programs they are running out of the box.

          The only way I have ever been able to explain to a person what it's about is the apartment analogy. A thief goes into an apartment building and rattles every doorknob. He finds one that opens. He then uses that apartment as a base to sneak around and rattle other doorknobs. Most people get very upset when I tell them someone is basically trying to break into their house. The next words out of their mouths are usually "Who can I report this to?" All I can tell them is no one.
      • by nautical9 ( 469723 ) on Saturday January 11, 2003 @05:50PM (#5063737) Homepage
        Although tracing back to the actual attackers can be very difficult, it can still be done with enough investigation and willpower. For an amusing tale of how a popular (although not always loved) windows security guy did just that, go here [grc.com].

        He basically got his hands on one of the "zombie" trojans the DDoS'ers use, reverse engineered it to find out how it works (and which IRC servers it talks to to receive its commands), wrote his own to connect to said server and waited until the attackers personally logged in. It really is a good read.

        • For an amusing tale of how a popular (although not always loved) windows security guy did just that, go here. ["here" linked to GRC.com article]

          I hadn't read that guy's site in a while because it's too alarmist. But I read the linked GRC article and found roughly 5-15% useful text among all of it. The IRC log was priceless; ^^boss^^ was stupid if he was surprised someone could figure out how to locate and connect to his IRC server. (I'm not necessarily dissing Gibson with that statement, though; he's alarmist but fairly knowledgeable, although he can sound fairly stupid at points, too.)

          What struck me is how much his articles read like Crocodile Hunter:

          CRIKEY!! I've been DDoS'ed by SCRIPT KIDDIES' WIN9x ZOMBIES!! Lucky for me they weren't Win2k or WinXP zombies or I'd be DEAD!!

          [Imagine the following text centered, large, bold and in a different color]

          Soon the proliferation of Win2k and WinXP will make the world a far more dangerous place to live!


          etc., etc..

          I actually enjoy Crocodile Hunter, though.
    • by Martin Blank ( 154261 ) on Saturday January 11, 2003 @05:20PM (#5063569) Homepage Journal
      From RFC 2870 (Root Name Server Operational Requirements), section 2.3:

      At any time, each server MUST be able to handle a load of
      requests for root data which is three times the measured peak of
      such requests on the most loaded server in then current normal
      conditions. This is usually expressed in requests per second.
      This is intended to ensure continued operation of root services
      should two thirds of the servers be taken out of operation,
      whether by intent, accident, or malice.


      With 13 current servers, this means that 8-9 servers can be taken out at one time and have negligible impact on the world's DNS queries, assuming that the outage is at a peak time and the servers are being hit very hard. Practically speaking, the existing root servers are probably built even more toughly, so the remaining 4-5 servers can probably handle shorter outages (such as that mentioned in the article) without significant effort, and even if brought down to 2-3 could probably handle things with some difficulty.
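The arithmetic behind those numbers can be sanity-checked in a few lines (a rough sketch only; the 3x multiplier comes from the RFC text quoted above, and the helper names are invented):

```python
# Rough check of the RFC 2870 headroom figures: each of the 13 root
# servers must handle 3x the peak load of the busiest server, so only
# about a third of them need to survive to absorb the whole query load.

TOTAL_SERVERS = 13
CAPACITY_MULTIPLIER = 3  # from RFC 2870, section 2.3

def min_servers_needed(total=TOTAL_SERVERS, multiplier=CAPACITY_MULTIPLIER):
    # If the busiest server peaks at P requests/sec, total load is at
    # most total * P and each survivor can absorb multiplier * P, so
    # ceil(total / multiplier) survivors suffice (integer ceiling below).
    return -(-total // multiplier)

def max_servers_lost(total=TOTAL_SERVERS, multiplier=CAPACITY_MULTIPLIER):
    # How many servers can be taken out while service continues.
    return total - min_servers_needed(total, multiplier)
```

By this strict reading, ceil(13/3) = 5 servers must survive, so 8 can be lost, which matches the "two thirds of the servers" language in the RFC.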

      According to root-servers.org, the existing servers are fairly concentrated, with only those in Stockholm, London, and Tokyo not in the United States. Perhaps three more, with one maybe in South Korea, one in Australia, and one in North Africa or the Middle East (Cairo would be ideal to cover both) would be a viable option? I realize that the last is probably going to be questionable for some, given the censorship agendas often in place in the area, but it would help to make further attacks a little more difficult, as well as adding a little prestige and maybe tech investment to the area. Just an idea.

      As for Dalnet, why isn't the FBI involved? (I'm not aware of current happenings on the network, as I don't use it.)
  • What outage? (Score:2, Informative)

    by Synithium ( 515777 )
    Didn't even notice the outage; none of my customers or people browsing my sites indicated that they noticed either. Given multiple providers and the way the DNS structure works, it would take an awfully long time for a large number of people to notice anything.
    • DNS info is cached and times out in about a week, so if you had updated just before the attack, you wouldn't notice for a week.
      • DNS info is cached and times out in about a week, so if you had updated just before the attack, you wouldn't notice for a week.

        Doesn't that assume that you're only visiting sites that are already cached on your DNS server?
      • We have about 800 domains, and we crank the TTL down to 15 minutes. A week is horrid. In fact I've noticed that some ISPs, like AOL, override our TTL! The machine has no trouble at all handling it, and the bandwidth is less than a mail server. Ever since we started doing it, I have to admit it's been very nice not dealing with the complaints that no one can see a change yet; it just takes 15 minutes of waiting to tell them that it's done! IE seems to have its own caching system independent of the system IP cache, too. Actually, I just made what could very well be a very incorrect presumption. Could one of you MCSEs please explain how a Windows box caches an IP? Does it at all? And does Internet Explorer do something different than the rest of the system? I've been able to ping a website that had an IP change, and in IE still pull up the old site.
        • I am not an MCSE, and wouldn't admit it if I was.

          However, I do know that the Win2k and later series OSes from Microsoft contain what is called the "DNS Client". This client has the job of doing DNS caching. (And a bunch of other stuff, I think.)

          Restarting the thing can be a quick way to do what would otherwise require a reboot.

          The Win98/ME/95 series stuff had a client too, but it couldn't be cleared without rebooting. Though I think its timeout was not as long.

          So yes, there is caching going on; one of the main reasons my first question to my clients is "When did you last reboot?"
        • I've been able to ping a website that had an IP change, and in IE still pull up the old site.

          Almost all web browsers have caches. Usually, they work correctly. Sometimes they don't.

          Tools->Internet Options->Settings->Check for newer versions of stored pages: Every visit to the page
      • You might have a cache.
        Your upstream provider almost certainly has a cache.
        His upstream providers likely have caches.
        Their upstream providers likely have caches.
        Depending on the exact path taken, a name request might be erratic as to whether (and to what) it resolved.
        It would probably take a week for killing all the root servers to take down the internet, although some breakage would be noticeable after about 24-36 hours.
        Things working off of fixed ip addresses would continue to work.
        If intermediate caching DNS servers keep used stale addresses until a fresher valid address is known, a lot of the internet would keep on going indefinitely.
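The caching behaviour described above can be sketched as a minimal TTL cache (illustrative only; class and record names are invented, and real resolvers are far more involved):

```python
import time

# Minimal sketch of a TTL-based DNS cache: entries are answered from
# cache until their TTL runs out, after which a fresh upstream lookup
# is required -- which is why a short root-server outage goes largely
# unnoticed while a week-long one would not.

class DnsCache:
    def __init__(self):
        self._entries = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self._entries[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry is None:
            return None  # cache miss: must ask an upstream server
        address, expiry = entry
        if now >= expiry:
            del self._entries[name]
            return None  # expired: must ask an upstream server
        return address
```

With a one-hour TTL, for instance, a lookup made just before an hour-long root outage never notices the outage at all.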
    • Re:What outage? (Score:2, Interesting)

      by Sendy ( 31825 )
      I assume most people don't think twice if a website isn't reachable for only an hour. Or even a day. Such short DNS outages are therefore probably not noticed.

      Long outages would change the whole thing. Imagine if we couldn't read slashdot for a whole week!
  • by deepchasm ( 522082 ) on Saturday January 11, 2003 @04:54PM (#5063432)

    The typical defense is to program routers to throw away excessive ping packets, which is called rate limiting. While this protects the server, the attack streams can still create traffic jams up to the point where they are discarded.

    Well then, isn't it logical to try to rate limit/filter as close to the source as possible? Of course this shifts responsibility...

    If all ISPs were proactive in dealing with customers' machines being used as zombies to launch attacks, then internet users as a whole would have fewer problems when they become the target of an attack.

    A few logical steps:

    • Filter out spoofed packets - the ISP has allocated the IPs to broadband users for goodness sake, it's much easier to filter packets when you know who's sent them than on the internet at large!
    • Rate limit - no, not everything, don't go annoying the hell out of legitimate users. Something that will cut in when 100 PING packets per second go to a single host would be quite sufficient.
    • Monitor for signs of trojan infection and REACT! I couldn't believe the amount of traffic I got in my web logs when Code Red was going around. How hard is it for the ISP to e-mail or ring up their customer and tell them that they're infected?

    Some ISPs may do this, I don't know, but from the articles I read about DDoS attacks it appears that most don't.
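The rate-limiting step above might be sketched as a per-destination token bucket (an illustration, not any ISP's actual implementation; the 100-packets-per-second figure follows the suggestion above, and all names are made up):

```python
# Sketch of per-destination rate limiting for echo requests: let bursts
# through, but cap sustained traffic to any single host at a
# configurable packets-per-second rate.

class TokenBucket:
    def __init__(self, rate=100.0, burst=100.0):
        self.rate = rate      # tokens (packets) added per second
        self.burst = burst    # maximum bucket size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: drop the packet

class PingLimiter:
    """One bucket per destination, so a flood aimed at one host does
    not throttle legitimate traffic to everyone else."""
    def __init__(self, rate=100.0):
        self.rate = rate
        self.buckets = {}

    def allow(self, dest, now):
        bucket = self.buckets.setdefault(dest, TokenBucket(self.rate, self.rate))
        return bucket.allow(now)
```

The per-destination keying is the point: legitimate users pinging other hosts never notice the limiter.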

    • They don't even have to call the customer... You could very easily write a script that uses some method to check for Code Red, then take that IP and see what the MAC is; using the DHCP table you should be able to say this MAC belongs to modem XYZ, which is owned by John Doe... then email the poor sap... all automated.

      I know it's possible... I'm sure they wouldn't waste time if someone was uncapping their modem.
      • Ah, but the human factor kills our success-

        "To: John Doe
        From: ABC Networks
        Subject: Your computer has a virus

        Dear John Doe, according to our records, at 01/10/2002 modem XYZ was--"

        [DELETE]
        John Doe: Damned spammers.

        You really do have to make the call to make sure it gets fixed. It used to be that most people just couldn't read well enough to understand a virus warning (well, once the Internet wasn't a snobby elitist club anymore, at least). Now there are the spam goggles everyone wears, which filter it out before they have a chance to not understand it.

        If you call them, you can do one of two things: get someone who goes "Oh, OK. I will fix it tonight." (Then you check up on them.) Or you get someone who goes "Oh my God, oh my God, what do I do? Did I hurt anything? This is horrible!" You have to send that person to a shop, but which is worse karma: sending a person to a shop where they're gonna get whacked 150 bucks, or not doing anything about it at all?
        • how about this one:

          To: Joe Luser's ISP
          From: XYZ network
          Subject: Attack Zombie detected

          Dear Admin,
          Here are a list of PCs within your IP space that we have
          detected launching DOS attacks against our network. Most likely,
          the majority of them have been infected by a skript-kiddie.

          ...
          Joe Luser's IP 2003-01-10 12:10:23 DOS detected
          ...

          thank you,
          Ops

          Joe Luser (later that night): How come my innernet don't work?
      • They dont even have to call the customer...

        [...] then email the poor sap...

        That reminds me of some Nimda hunting I did at work. My intranet web server kept getting hit from within the intranet in a different English speaking country. I reported it to the proper company groups, but it kept on happening. Finally I tried to hack into it using remote MMC management. I don't know why, but it let me in. I was able to copy a text file to the c$ share, start the scheduling service and use the at command to run notepad and display the text file on the desktop. The text file, of course, said something along the lines of "this pc is infected with the Nimda virus; please notify your network administrator or pc tech and unplug it from the network." I did that several times over 3 days. I think it took about 5 days before I finally quit getting hits from it.

        (I resisted the urge to try to remotely disinfect it since I didn't know what business function the PC served.)

        I can believe people ignoring emails, but people are so paranoid about viruses that if Notepad kept popping messages on their screens I would think they'd go running screaming to their administrator begging him to save their data. Maybe I should've made the note sound sinister instead of helpful and then they'd get help?

        That reminds me, I intended to check out why the hell I could administer a PC in a different country and find out if my PCs were as vulnerable. I'll put that on tomorrow's to do list.
    • Get in touch with MS for the rate limit on amounts of pings that can be sent. Get them to code into their OS some sort of rate limit for icmp-echo-reply packets, like you described. Also, make ISPs far, FAR more aggressive when dealing with this. Is a computer sending out Code Red/Nimda attacks? Disconnect it, write a letter to the owner, and disconnect them permanently after a few times. Same thing for ping flooding. If it happens often (testing network strain over the internet shouldn't happen often), engage the same procedure as with Code Red/Nimda-infected computers.

      • Get them to code into their OS some sort of rate limit for icmp-echo-reply packets

        And it would take about 2 hours before someone compiled and distributed a "raw" ping client for windows.
    • Egress Filtering (Score:5, Insightful)

      by sczimme ( 603413 ) on Saturday January 11, 2003 @05:30PM (#5063621)

      Implementation of simple egress filtering rules at border routers or at firewalls (regardless of who owns them) would dramatically decrease the efficacy of DDoS attacks.

      If my organization owns the A.B.C network, there is no reason why any packets bearing a source address of anything other than A.B.C.* should be permitted to leave my network.

      NAT environments can implement this by dropping packets with source addresses that do not belong to the internal network.

      Of course, for this to be effective it would have be used on a broad scale, i.e. around the world...
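As a sketch, the egress rule amounts to a single prefix-membership test on the source address (the 198.51.100.0/24 block below is just a documentation-range stand-in for "A.B.C", and the function name is invented):

```python
import ipaddress

# Sketch of the egress rule: a packet may leave the network only if its
# source address falls inside the prefix the organization actually owns.
# Anything else is presumed spoofed and dropped at the border.

OUR_PREFIX = ipaddress.ip_network("198.51.100.0/24")  # stand-in for A.B.C.*

def may_leave(source_address, prefix=OUR_PREFIX):
    """Return True if a packet with this source address should be
    forwarded upstream, False if it should be dropped as spoofed."""
    return ipaddress.ip_address(source_address) in prefix
```

A zombie on such a network could still flood, but it could no longer forge arbitrary source addresses, which makes the traffic far easier to trace.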
      • Re:Egress Filtering (Score:3, Informative)

        by umofomia ( 639418 )
        • If my organization owns the A.B.C network, there is no reason why any packets bearing a source address of anything other than A.B.C.* should be permitted to leave my network.
        Easier said than done... that may be true for smaller networks, but isn't the case for larger ISPs. The IP address structure is no longer strictly hierarchical (e.g. CIDR and multihomed networks), and peering relationships between different AS's make this extremely difficult to implement.
        • Easier said than done... that may be true for smaller networks, but isn't the case for larger ISPs.

          The idea is that for each host on the Internet, there is at least one independently administered router in front of it which performs source address validation before forwarding packets further upstream to a transit network (where address validation becomes complicated).

          However, it would take quite a long time until you saw any effect, like any other DoS mitigation tactic which does not support incremental deployment.

          ICMP Traceback is promising, though. I really hope that it's as useful as it looks.
      • If my organization owns the A.B.C network, there is no reason why any packets bearing a source address of anything other than A.B.C.* should be permitted to leave my network.

        Actually, there is at least one very good reason. If a company has two internet connections, through provider A and provider B, and wishes to do load balancing but for one reason or another cannot announce a single subnet through both providers, it can at least do outbound load balancing and change the source address on a per-packet basis, so outgoing traffic for connections initiated locally is evenly distributed across both connections. Obviously any connection that originates from the outside world (i.e. someone on the internet trying to view this company's website) has to be answered with the same IP the request originally went to as the source address (or stuff will break(tm)), so this won't work in that situation. But any request that originated on the company's network and goes out to the internet can have its outbound traffic load balanced on a per-packet basis over the multiple internet connections, even if the same block can't be announced through both providers. This, however, requires that some packets have a source address in, for instance, provider A's subnet when they go out through the circuit with provider B.

        The other option, which does not require sending packets with a source address from one provider when they go through another, is to do it on a per-connection basis rather than a per-packet basis; however, depending on your traffic, this may not work nearly as well.

        Obviously the number of people implementing something like this is small, and the benefits of anti-spoof measures are many; but for the few people doing something like the above, it sucks. However, there is an answer that will satisfy both camps.

        For the few people who do load balance in the method mentioned above, a simple ACL allowing only packets with either subnet as the source (for either line A's or B's block), and denying all other sources, will both allow them to load balance outbound traffic and protect your network (and others), since they can't spoof any address other than their block with the other provider.

        For everyone else, you can use the following command on a Cisco with CEF enabled, which drops all traffic that does not have a source address that is routed through the interface the packet was received on:

        "ip verify unicast reverse-path"
        • For everyone else, you can use the following command on a Cisco with CEF enabled, which drops all traffic that does not have a source address that is routed through the interface the packet was received on:

          "ip verify unicast reverse-path"

          The way to turn on strict reverse-path filtering on a Linux firewall is:

          for i in /proc/sys/net/ipv4/conf/*/rp_filter; do
            echo 1 > "$i"
          done

      • You could still launch an attack using a reflection SYN DDoS method. This would work by having the zombies sweep all of their net neighbors with forged-source SYN packets. (This works because that traffic stays inside the border router.) The neighbors respond with SYN/ACK packets to the forged IP address, and those SYN/ACKs pass the border router because their source IPs are valid.

        Of course, unless the zombies were smart enough to know the IP range within the border router, you'd still get a metric buttload of invalid packets at the border router. Some kind of threshold alarm might be a good idea -- but then there's the problem of locating which machine within the border is generating the packets...

        In a perfect world, the best solution would be that people didn't let their machines get 0wn3d in the first place, [Insert maniacal laughter]!

        Egress filtering is a good thing, but it's not a complete solution. (And it's a good thing that I turned back from the Insufficient-light Side of the Hack many years ago.) Here's an explanation of a reflection attack. [grc.com] (Yes, that "end of the Internet" grc. :^)

        • Don't try this at home kids. (Use someone else's home, Narf!)

          I guess that I shouldn't worry, unlike script-kiddie h4x0rs, Slashdot users are intelligent, wise .. , never do stupid things .. , never abuse the system .. oh shit

  • by Jamyang ( 605452 ) on Saturday January 11, 2003 @04:55PM (#5063441) Homepage
    How to Protect the DNS [icannwatch.org] posted to icannwatch [icannwatch.org] in October includes Karl Auerbach's [cavebear.com] DNS-in-box emergency toolkit:
    I've had this idea: A CDROM that contains all the pieces that one needs to build an emergency DNS service for one's home, company, school, or whatever..

    Apparently icannwatch's [icannwatch.org] new year resolution was to migrate [icannwatch.org] from nuke to slash.

  • TLD Question (Score:5, Interesting)

    by Farley Mullet ( 604326 ) on Saturday January 11, 2003 @05:02PM (#5063478)

    I'm not an expert, but as I understand it, DNS attacks are relatively benign, since DNS info is cached all over the place and doesn't change much anyway (this is essentially what the article says). Now, the author seems much more worried about attacks against Top Level Domains, because of the nature of the information that TLD servers hold, and he suggests a few techniques that they could use. What he doesn't say is what techniques the TLDs are using currently, and how secure they are.

    Does anyone out there on /. know?

    • Now, the author seems much more worried about attacks against Top Level Domains, because of the nature of the information that TLD servers hold, and he suggests a few techniques that they could use. What he doesn't say is what techniques the TLDs are using currently, and how secure they are.

      http://cr.yp.to/djbdns/forgery.html [cr.yp.to]
  • Hrrrmmm (Score:5, Funny)

    by Anonymous Coward on Saturday January 11, 2003 @05:10PM (#5063517)
    "...the October strike on the root servers stopped abruptly after about an hour, probably to make it harder for law enforcement to trace."

    Hrrrrmmm. That makes it look deliberate. Hrrrrmmm.
  • by evilviper ( 135110 ) on Saturday January 11, 2003 @05:33PM (#5063639) Journal
    I have a question... Why does a cache have to expire?

    Why not allow the admin to specify the maximum disk space that the cache can use up, and then only prune the records when that (possibly huge) database grows too large? In addition, DNS records should not just arbitrarily expire...

    If a record has not reached its "expire" date, the cache is just fine. If a record HAS reached its "expire", it should still remain valid UNTIL the DNS server has been able to get a valid update. Now, that would allow large DNS servers to maintain quite a bit of functionality even if all other DNS servers go down, and would do so while requiring only the most popular queries to be saved on the server (so not everyone has to become a full root DNS server).
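A sketch of the "stay valid until a refresh succeeds" policy being proposed (this illustrates the commenter's idea, not how deployed resolvers of the time behaved; all names are invented):

```python
# Sketch of a serve-stale cache: an expired record is refreshed from
# upstream when possible, but if every upstream server is unreachable
# the stale answer is still served rather than failing the lookup.

class ServeStaleCache:
    def __init__(self):
        self._entries = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl, now):
        self._entries[name] = (address, now + ttl)

    def get(self, name, now, lookup):
        """lookup(name) returns a fresh (address, ttl) tuple, or None
        if all upstream servers are unreachable."""
        entry = self._entries.get(name)
        if entry is not None and now < entry[1]:
            return entry[0]              # fresh hit
        fresh = lookup(name)
        if fresh is not None:
            address, ttl = fresh
            self.put(name, address, ttl, now)
            return address               # expired, but refresh succeeded
        if entry is not None:
            return entry[0]              # stale, but better than nothing
        return None                      # never seen and upstream is down
```

The replies below point out the downside: a poisoned or outdated record can now persist for as long as the refresh keeps failing.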
    • Any caching system must have a way to update itself or its data will decay and not keep up with changes. Companies change ISPs or hosting services all the time, so their DNS entries must be able to change in a timely manner to reflect the IP address changes. Also, when a domain name is not renewed, its DNS entries should likewise expire. There are many reasons why an out-of-date cache is bad.

      Generally there are two ways to keep caches relatively fresh: expire records based on some precondition (such as time) or have the master source send out notifications when data was changed. And DNS can do BOTH.

      First, there are three kinds of expirations in DNS, all time based, where the periods are selected by the owner of the domain. The first is when you attempt to look up a name which doesn't exist; that's called negative caching and is typically set to just an hour or two. The next is the refresh time, which indicates when an entry in a cache should be checked to see if it is still current, and is typically about half a day. And finally the time-to-live is the time after which the cache entry is forcibly thrown away, and is usually set to a couple of weeks or more.

      Finally, DNS servers can coordinate notification messages, whereby the primary name server for a domain will send a message to any secondaries whenever the data has changed. This allows dirty cache entries to be flushed out almost immediately. But DNS notifications are usually used only between coordinated DNS servers, and not all the way to your home PC.

      It should be noted, though, that most end users' operating systems do not really perform DNS caching very well, if at all... usually it is your ISP that is doing the caching. Windows users are mostly out of luck unless you are running a server or enterprise configuration. Linux can very easily run a caching nameserver if you install the package. I don't know what Macs do by default.
• The next is the refresh time, which indicates when an entry in a cache should be checked to see if it is still current, and is typically about half a day.

        This is only for DNS servers such as BIND that use AXFR to update slaves.

        Finally DNS servers can coordinate notification messages, whereby the primary name server for a domain will send a message to any secondaries whenever the data has changed.

        Modern DNS servers use better methods such as rsync over SSH or database replication, which provide real security, instant updates and more efficient network usage.
• You, as well, did not understand what I was suggesting. I would recommend that you read some of the other messages in the thread to get some idea.
    • Why does a cache have to expire?

      Because I like to actually be able to change my DNS records after they are published.

      In addition, DNS records should not just arbitrarily expire...

      They don't arbitrarily expire. They expire when the TTL for the record has been reached.

If a record HAS reached its "expire", it should still remain valid UNTIL the DNS server has been able to get a valid update.

      That would allow an attacker to blind your DNS resolver to DNS changes by keeping it from contacting a remote DNS server. And if the same attacker can poison your cache, the cache will keep the poisoned records forever.
      • For the first two, I'd just say that you, as well as many others, did not understand what I was saying...

        That would allow an attacker to blind your DNS resolver to DNS changes by keeping it from contacting a remote DNS server. And if the same attacker can poison your cache, the cache will keep the poisoned records forever.

        There are so many flaws with this logic that I'm not sure where to begin.

First of all, if an attacker has poisoned your cache, that almost always requires admin intervention anyhow.

Second, if an attacker can blind your DNS server to updates, then under the current scheme your DNS would completely fail, instead of one record being invalid. So this is not a capability attackers have, and even if they did, you would be much better off with my modifications than with the current scheme.
  • by nniillss ( 577580 ) on Saturday January 11, 2003 @05:33PM (#5063645)
    DNS caching kept most people from noticing this assault. In very rough terms, if the root servers are disrupted, only about 1 percent of the Internet should notice for every two hours the attack continues--so it would take about a week for an attack to have a full effect. In this cat-and-mouse game between the attackers and network operators, defenders count on having time to respond to an assault.
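The article's numbers are consistent with a simple back-of-the-envelope model: if cache entries' expiry times are spread roughly evenly over the week it takes for the attack to bite fully, the fraction of broken lookups grows linearly with the outage. The uniform-expiry assumption here is mine, not the article's:

```python
OUTAGE_FULL_EFFECT_HOURS = 7 * 24   # "about a week for a full effect"

def fraction_affected(hours_down):
    """Fraction of cached DNS entries that have expired (and so fail)
    after the root servers have been unreachable for hours_down hours,
    assuming expiry times are spread uniformly over one week."""
    return min(hours_down / OUTAGE_FULL_EFFECT_HOURS, 1.0)
```

Two hours of outage gives roughly 2/168, a bit over 1 percent, matching the article's rough figure.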
  • What we can do (Score:3, Insightful)

    by karmawarrior ( 311177 ) on Saturday January 11, 2003 @05:49PM (#5063722) Journal
The Internet's Achilles' heel is its awesome complexity and size. The result is that it's very easy for a group to appear, do damage, then disappear and never be traced. Worse still, the ease with which this can be done is itself an incentive: downtime of DNS, or of a Microsoft server, or of Yahoo, is seen as unimportant, easy, and untraceable, and people - for whatever reasons, be they sociopathic, vengeful, curious, or egocentric - are attracted to performing these kinds of acts.

It's difficult for any reasonable person to know where to begin solving these issues. Traditionally, nailing down machines and networks so they are more secure has been seen as the best approach, but there's little anyone can do about having bandwidth used up by unaccountable "hacked" machines, as is increasingly the modus operandi.

    Attempts to trace crackers are frequently wastes of time, and stiffer penalties for hackers are compromised by the fact that it's hard to actually catch the hackers in the first place. The situation is made worse that many of the most destructive hackers do not, themselves, set up anything beyond sets of scripts distributed to and run by suckers - so-called "script kiddies".

Given that hackers usually work by taking over other machines and coopting them into damaging clusters that can cause all manner of problems, less focus than you'd expect is put onto making machines secure in the first place. The responsibility for putting a computer on the Internet is that of a system administrator, but frequently system administrators are incompetent, and will happily leave computers hooked up to the Internet without ensuring that they're "good Internet citizens". Bugs are left unpatched, if the system administrators have even taken the trouble to discover whether there are any problems in the first place. This is, in some ways, the equivalent of leaving a loaded gun in the middle of a street: even the most pro-gun advocates would argue that such an act would be dangerously incompetent. But putting a farm of servers on the Internet, and ignoring security issues completely, has become a widespread disease.

There is a solution, and that's to make system administrators responsible for their own computers. An administrator should be assumed, by default, to be responsible for any damage caused by hardware under his or her control unless it can be shown that there's little the admin could reasonably have done to prevent their machine from being hijacked. Clearly, a server left unpatched a few days after a bug report, or a compromise that has never been publicly documented, is not the fault of the admin; but leaving a server unpatched years after a compromise has been documented and patches have been available certainly is. Unlike hackers, it is easy to discover who is responsible for a compromised computer system, so issues of accountability are not a problem here.

Couple this with suitably harsh punishments, and not only will system administrators think twice before, say, leaving IIS 4 out in the wild vulnerable to NIMDA, but hackers too - for the same reasons they avoid attacking hospital systems, etc. - will think twice about compromising someone else's system. Fines for first offenses and very minor breaches can be followed by bigger deterrents. If you were going to release a DoS attack into the wild, but knew that the result would be that many, many system administrators would be physically castrated because of your actions, would you still do it?

Of course not. But even if you were, the fact that someone has been willing to allow their system to be used to close the DNS system, or take Yahoo offline, ought to be reason enough to consider such drastic remedies. Castration may sound harsh, but compared to modern American prison conditions, it's a relatively minor penalty for the system administrator to pay, and will merely result in discomfort combined with removal from the gene pool. At the same time, such an experience will ensure that they take better care of their systems in the future, without removing someone who might have skills critical to their employer's well-being from the job market.

The assumption has always been made that incompetent system administrators deserve no blame when their systems are hijacked and used for evil. This assumption has to change, and we must be willing to force this epidemic of bad administration to be resolved. Only by securing the systems of the Internet can we achieve a secure Internet. Only by making the consequences of hacking real and brutal can we create an adequate response to the notion that hacking, per se, is not wrong and causes no damage.

    This quagmire of people considering system administrators the innocents in computer security when they are themselves the most responsible for problems and holes will not disappear by itself. Unless people are prepared to actually act, not just talk about it on Slashdot, nothing will ever get done. Apathy is not an option.

You can help by getting off your rear and writing to your congressman [house.gov] or senator [senate.gov]. Write also to Jack Valenti [mpaa.org], the CEO and chair of the MPAA, whose address and telephone number can be found at the About the MPAA page [mpaa.org]. Write too to Bill Gates [mailto], Chief of Technologies and thus in overall charge of security systems built into operating systems like Windows NT, at Microsoft. Tell them security is an important issue, and is being compromised by a failure to make those responsible for security accountable for their failures. Tell them that only by real, brutal justice meted out to those who are irresponsible on the Internet will hacking be dealt with. Tell them that you believe it is a reasonable response to hacking to ensure that administrators who fail time and time again are castrated, and that castration is a reasonable punishment that will have minimal impact on an administrator's employer while serving as a huge deterrent against hackers and against incompetence. Tell them that you appreciate the work being done to patch servers by competent administrators, but that if incompetent admins are not held accountable, you will be forced to use less secure and less intelligently designed alternatives. Let them know that SMP may make or break whether you can efficiently deploy OpenBSD on your workstations and servers. Explain the concerns you have about freedom, openness, and choice, and how poor security harms all three. Let your legislators know that this is an issue that affects YOU directly, that YOU vote, and that your vote will be influenced, indeed dependent, on their policies concerning maladministration of computer systems connected to the public Internet.

    You CAN make a difference. Don't treat voting as a right, treat it as a duty. Keep informed, keep your political representatives informed on how you feel. And, most importantly of all, vote.

  • by fermion ( 181285 ) on Saturday January 11, 2003 @05:49PM (#5063728) Homepage Journal
The October attack was a DDoS "ping" attack. The attackers broke into machines on the Internet (popularly called "zombies") and programmed them to send streams of forged packets at the 13 DNS root servers via intermediary legitimate machines.
    It seems to me that this is another call for more secure computers. If the "zombies" were not so easy to create, then such attacks would not be so easy to mount. I think security has gotten better, but there is still great room for improvements. I have some random thoughts that might help.

First, broadband providers should not sell bandwidth without a standard firewall. I do not see such a proposition as expensive: a standalone unit is quite cheap, and the cost to integrate such circuitry into a DSL or cable box should be even less. Broadband providers should stop their resistance to home networking and use bandwidth caps or other mechanisms if necessary.

Second, the default settings in web browsers must be more strict. Web browsers should not automatically accept third-party cookies or images. Web browsers should not automatically pop up new windows or redirect to third-party sites. Advertising should not be an issue; I know of no legitimate web site that requires third-party domains. For instance, /. uses "images.slashdot.org" and the New York Times uses "graphics7.nytimes.com". Of course, these default settings should be adjustable, with an appropriate message stating that web sites using such techniques are likely to be illegitimate. I know of a few sites that require all images and cookies to be accepted, but I consider those to be fraudulent.
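The strict default suggested here boils down to a host comparison at fetch time. A minimal sketch; the two-label "base domain" heuristic is a deliberate simplification (it breaks for suffixes like .co.uk, where a real browser would consult the public-suffix list):

```python
def base_domain(host):
    """Naive registrable-domain guess: the last two labels of the hostname.
    A real implementation would use the public-suffix list instead."""
    return ".".join(host.lower().split(".")[-2:])

def is_third_party(page_host, resource_host):
    """True if a resource should be blocked under a strict default that
    only allows images/cookies from the page's own base domain."""
    return base_domain(resource_host) != base_domain(page_host)
```

So images.slashdot.org stays first-party on a slashdot.org page, while an image from images.adagency.com on www.smallsite.com would be blocked by default.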

Third, email programs should by default render email as plain text. There should be a button to allow the mail to render HTML and images, and there should be a way to remember domains that will always or never render. Again, third-party domains should not render automatically. In addition, companies need to stop promoting HTML- and image-based email. Apple is particularly guilty of this; the emails they send tend to be illegible without images.

Fourth, the root must be the responsibility of the user, or a third party must accept full liability for a hack. This should be basic common sense, but apparently it is not. MS wants access to the root of all Windows machines, but I do not see MS saying they will accept all responsibility for damage. Likewise, the RIAA wants access to everyone's root, but again, are they going to pay for the time it takes to reinstall an OS? I think not. With privilege comes responsibility. Without responsibility, all you have are children playing with matches.

    • Advertising should not be an issue. I know of no legitimate web site that requires third party domains. For instance /. uses "images.slashdot.org" and the New York Times uses "graphics7.nytimes.com".
Nice idea, but what about ad-supported sites that use agencies to get advertising, rather than selling ad space directly to the advertiser? Then it makes perfect sense for www.smallsite.com to have an image on it from images.adagency.com.
I agree entirely that HTML email should be banished from the face of the net, and third-party cookies serve little or no purpose.
  • Question: (Score:4, Interesting)

    by I Am The Owl ( 531076 ) on Saturday January 11, 2003 @06:16PM (#5063853) Homepage Journal
    the October strike on the root servers stopped abruptly after about an hour, probably to make it harder for law enforcement to trace.

    Whose laws are being enforced, and upon whom?

  • by NaveWeiss ( 567082 ) on Saturday January 11, 2003 @06:33PM (#5063945) Homepage Journal
The problem with the current ICMP standards is that it's too damn easy to spoof the originating address, so you can send crap and nobody would know where it came from.

I was wondering: does IPv6 solve this problem (using some sort of digital signatures or another ingenious mechanism), or will sites still be vulnerable to script kiddies?
• Not necessarily; it depends on what you are protecting against. The advantage of ICMP or ICMPv6 (the equivalent layer in IPv6) is that they are very lightweight. There are no expensive crypto operations or other computation, so it is ideal for helping protect against DoS floods.

IPv6 can, though, provide a very secure layer (IPsec), but it comes at an expense. It is not something that you would want to use for DNS queries, where the name of the game is speed and the number of hosts involved can be in the thousands or even millions.

But for the less voluminous DNS messages, such as the zone transfers that occur between mirrors, authenticity is much more of a concern. IPsec could be very useful there, but it is probably unnecessary, as DNS already has its own security protocol built in (DNSSEC).

In general, though, IPv6 does provide many benefits over IPv4 and in some ways provides new tools to address DDoS attacks and script kiddies; but like any single technology, it is not a magic pill that makes all the ills go away.
    • The problem with the current ICMP standards are that it's too damn easy to spoof the original addresses, so you can send crap and nobody would know were it came from.

      This will unfortunately remain a problem for the same reason it'll remain a problem with email - unless all possible nodes that traffic can be routed through are known and trusted, you have to take much of your routing information on faith.
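The reason spoofing is "too damn easy" is that nothing in an IPv4 header authenticates the source field: whoever builds the packet fills it in. A minimal sketch of constructing a header with a forged source (the addresses are documentation ranges, and actually sending this would require a raw socket and root privileges):

```python
import socket
import struct

def ip_checksum(data):
    """Standard ones-complement sum used for the IPv4 header checksum."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forged_ipv4_header(fake_src, dst, payload_len=8):
    """Build a 20-byte IPv4 header claiming to come from fake_src.
    Routers forward on the destination; the source is taken on faith."""
    header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0,                      # version/IHL, type of service
        20 + payload_len,             # total length
        0, 0,                         # identification, flags/fragment
        64, socket.IPPROTO_ICMP,      # TTL, protocol
        0,                            # checksum placeholder
        socket.inet_aton(fake_src),   # forged source address
        socket.inet_aton(dst),        # real destination
    )
    cksum = ip_checksum(header)
    return header[:10] + struct.pack("!H", cksum) + header[12:]
```

Routers along the path forward based on the destination address only, which is exactly the parent's point about taking routing information on faith.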
  • by Skapare ( 16644 ) on Saturday January 11, 2003 @07:31PM (#5064225) Homepage

End users don't need root or TLD servers; they just need to have DNS queries answered. That's why, normally, they are configured to query the ISP or corporate DNS servers, which in turn do the recursive query to root, TLD, and remote DNS servers. Given that, consider the possibility of the ISP or corporate data center intercepting any queries done (as if the end user were running a recursive DNS server instead of a basic resolver) and handling them through a local cache (within the ISP or corporate data center). It won't break normal use. It won't break even if someone is running their own DNS (although they will get a cached response instead of an authoritative one). It will prevent a coordinated attack load from the network that does this.

They talk about root and TLD servers located at major points where lots of ISPs meet, which poses a potential risk of a lot of bandwidth that can hit a DNS server. So my first thought was: why not have multiple separate servers with the same IP address, each serving part of the bandwidth, much like load balancing? And then, you don't even have to have them at the exchange point; they can be in the ISP data center. They could be run as mimic authoritative servers if getting zone data is possible, or just intercept and cache.

    • Given that, consider the possibility of the ISP or corporate data center intercepting any queries done (as if the end user were running a recursive DNS server instead of a basic resolver) and handle them through a local cache (within the ISP or corporate data center). It won't break normal use.

      Wrong. I run my own local DNS resolver, dnscache [cr.yp.to]. I don't trust my ISP to manage a DNS resolver properly. What if they are running a version of BIND vulnerable to poison [cr.yp.to] or other issues [cr.yp.to]? What if I am testing DNS resolution and need to flush the cache? (I do this routinely.) They also don't need to see every DNS query I make. If they want to sniff and parse packets, fine, but no need to make it any easier on them.

      It won't break even if someone is running their own DNS (although they will get a cached response instead of an authoritative one).

      That would be possible only if they were in fact intercepting every single DNS packet and rewriting it. It would make it impossible for me to perform diagnostic queries to DNS servers. And unless they were doing some very complex packet rewriting, it would break if an authoritative server was providing different information depending on the IP address that sent the query.

      If you can't even get ISPs to perform egress filtering, why would they do something as stupid and broken as this? Egress filtering would do much more to stop these types of attacks.

Besides, how does this stop me if I am the ISP? There are plenty of vulnerable machines on much better connections than dialup or broadband.
      • What egress filtering? The kind that blocks DNS queries sent to the root or TLD servers with a source address of the actual machine doing the querying, while under control of a virus or trojan that has infected a million machines? Sure egress filtering will stop a few bad actors who are forging source addresses, such as bouncing attacks off of broadcast responders. And egress filtering is not easy to do on large high traffic routers where there are a few hundred prefixes involved, belonging to the ISP and multitudes of their customers. You think an access list that big isn't going to bring a router to its knees?
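For what it's worth, the egress filtering being discussed is conceptually just a source-prefix check at the provider's border: drop anything whose source address is not in the provider's own address space. A toy version in Python (the prefixes are invented documentation ranges):

```python
import ipaddress

# Hypothetical prefixes assigned to this ISP and its customers
OUR_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def egress_permitted(src_ip):
    """True if a packet with this source address may leave our network.
    Spoofed sources (anything not ours) get dropped at the border."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OUR_PREFIXES)
```

The parent's objection is about exactly this check: matching every outbound packet against hundreds of prefixes is what can bring a busy router to its knees.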

  • by defile ( 1059 ) on Sunday January 12, 2003 @02:25AM (#5065537) Homepage Journal

    ..if the flood is randomly generated queries from thousands of compromised hosts. There would be no way to separate flood traffic from legit traffic. A worm could do this, or a teenager with a lot of time on their hands.

    It's easier for peons to get together a smurf list to attack the roots, but a nice set of compromised hosts issuing bogus spoofed queries would be just devastating.

    The solution is not more root servers. Attackers gain compromised hosts for free, root servers must be paid for. The solution is to make some kind of massively distributed root server system.

• Very interesting. The fact that the DDoS attack stopped so suddenly would imply that the goal was not to attack -- but to test.

    Now, that could be an actual government, military operation [including our own], as part of a general preparedness effort for war: when you strike, you use a combination of surprise attacks to make your main attack more effective.

    Or it could be terrorists, running a weapons test in the same way.

    Or it could be some grad student, testing out a theory of his. It just doesn't sound like a normal cracker.
