The Internet

98% of DNS Queries at the Root Level are Unnecessary 435

LEPP writes "Scientists at the San Diego Supercomputer Center found that 98% of the DNS queries at the root level are unnecessary. This doesn't even take into account the 99.9% of web pages suck or are unnecessary anyways. This means that the remaining 2% of necessary DNS queries are probably not necessary either."
  • by Anonymous Coward on Friday January 24, 2003 @11:08AM (#5151175)
    99% of slashdot posts are unnecessary.
  • Highlight... (Score:2, Informative)

    by swordboy ( 472941 )
    About 12 percent of the queries received by the root server on Oct. 4, were for nonexistent top-level domains, such as ".elvis"

    Now there's your 2 percenter right there!
    • Re:Highlight... (Score:5, Informative)

      by Zeinfeld ( 263942 ) on Friday January 24, 2003 @11:28AM (#5151296) Homepage
      About 12 percent of the queries received by the root server on Oct. 4, were for nonexistent top-level domains, such as ".elvis"

If the authors had actually thought about how the DNS works they would realise the reason for this. A DNS server that gets a request for .com will consult the root the first time and then cache the result. So even though the server might then get a million hits in .com it won't ask the root again.

If the server tries to query for a non-existent domain it will get back a 'non-existent' response. Now it will cache that response for some time but the chances of getting a cache hit are actually pretty low.

So if you have a properly configured DNS with a bunch of web surfers that view 1 million pages in 20 TLDs and 1,000 bogus ones, they will generate 20 hits they would classify as genuine and 1,000 that were 'unnecessary'.

      That is how the system is meant to work.

      The 70% of repeated requests are likely to include outright attacks as well as misconfigured DNS systems.

      The problem dealing with these issues is that a DNS query is pretty cheap to handle, cheaper in fact than most of the proposed defenses. It is probably more expensive for a DNS server to check IPs against a blacklist than to just return the damn data...
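Zeinfeld's caching argument can be sketched in a few lines of Python. This is a toy model, not real resolver code; the `root_lookup` callback and the specific TTL values are made-up assumptions, chosen only to show why bogus TLDs keep hitting the root while .com does not.

```python
import time

class CachingResolver:
    """Toy resolver-side cache: positive answers are cached with a long TTL,
    negative (NXDOMAIN) answers with a short one, so repeated queries for
    distinct bogus TLDs keep going back to the root."""

    POSITIVE_TTL = 48 * 3600   # TLD delegations change rarely (assumed value)
    NEGATIVE_TTL = 300         # NXDOMAIN cached only briefly (assumed value)

    def __init__(self, root_lookup):
        self.root_lookup = root_lookup  # callable: tld -> delegation or None
        self.cache = {}                 # tld -> (answer, expiry)
        self.root_queries = 0

    def resolve_tld(self, tld, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(tld)
        if hit is not None and hit[1] > now:
            return hit[0]               # served from cache, root not consulted
        self.root_queries += 1
        answer = self.root_lookup(tld)  # None models an NXDOMAIN response
        ttl = self.POSITIVE_TTL if answer is not None else self.NEGATIVE_TTL
        self.cache[tld] = (answer, now + ttl)
        return answer
```

A million lookups in .com cost one root query; a thousand distinct bogus TLDs cost a thousand, which is exactly the skew the comment describes.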

      • Re:Highlight... (Score:2, Insightful)

        by Goodbyte ( 539941 )
        Finally someone who makes a relevant comment. Though I wonder how the 'search from address bar'-feature has affected the number of non-existent queries.
        • Re:Highlight... (Score:5, Informative)

          by Zeinfeld ( 263942 ) on Friday January 24, 2003 @12:01PM (#5151521) Homepage
          Though I wonder how the 'search from address bar'-feature has affected the number of non-existent queries.

          A way to tell would be to see how many of the queries were looking for mx records.

          I suspect that people using dummy email addresses like 'a@b.c' for subscriptions are another major cause of the misfires.

          The browsers doing search from the address bar probably reduces the number of misfires. A modern browser will only go to DNS if it sees something like foo.bar. If it just sees foo it will typically try foo.com and then go bang a search engine.

          Another reason I suspect spam is a major issue in the misfires is that lots of spam filters do lookup on sender addresses and those frequently point to non existent domains. Also the spam senders rarely do the most basic filtering on their lists - you can tell that since every now and again you get a spam with a full sender list at the top and you can see the broken addresses right there.
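The address-bar behaviour Zeinfeld describes can be sketched as a toy heuristic. The exact rules differ by browser and era; this is an assumed simplification, not any browser's real code.

```python
def address_bar_action(text):
    """Toy model of the described behaviour: dotted input goes to DNS;
    a bare word first tries word.com, then falls back to a search engine."""
    if "." in text:
        return ("dns", text)
    return ("try-com-then-search", text + ".com")
```

Under this model, only input that already looks like a hostname generates a DNS query, which is why the feature probably reduces rather than increases the misfires.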

      • Re:Highlight... (Score:5, Informative)

        by pde ( 28299 ) on Friday January 24, 2003 @12:35PM (#5151824) Homepage
If the authors had actually thought about how the DNS works they would realise the reason for this. A DNS server that gets a request for .com will consult the root the first time and then cache the result. So even though the server might then get a million hits in .com it won't ask the root again.

        Well, that's the theory. In practice, however, there are millions of servers out there that do not cache NXDOMAIN at all, and just keep querying, over and over and over again, for TLDs that they've already been told don't exist. Microsoft's name server has been known to do this.

        At one point, f.gtld-servers.net was seeing millions of repeated queries per hour from the same two .mil servers asking the same question and refusing to accept the NXDOMAIN. For long periods, these two servers were asking the same question multiple times per click of F's timer. That's.. ummm.. Bad. I suggest that you read the actual CAIDA paper, and the other papers on the subject that Evi Nemeth and others at CAIDA have produced. They *have* thought about how the DNS actually works in practice. You've only thought about how it would work if every implementation worked perfectly, according to your expectations.

        • Re:Highlight... (Score:3, Informative)

          by Zeinfeld ( 263942 )
          Well, that's the theory. In practice, however, there are millions of servers out there that do not cache NXDOMAIN at all,

That is hardly surprising since a lot of servers don't even cache the positive hits.

The report said 70% of the hits were repeated requests. Again this is not too surprising; the root zone caches really well. There are fewer than 200 domains after all and only 20 of those have a significant degree of activity. The TLD configurations change so infrequently that the TTL could be set at a month without inconveniencing anyone.

So the 'necessary' traffic for the root servers is negligible. Even with a million-odd DNS servers out there, each root need see no more than a few tens of thousands of hits an hour.

It makes no real difference since the roots have to be scaled to survive a sustained DDoS attack for at least as long as it takes remediation measures to kick in. Get rid of all the bozo queries and you still need the same size box because of the script kiddies.

          There are a bunch of changes that could be put in place that would reduce the DDoS problem. First we could follow the proposals of Mark Kosters and Paul Vixie to start using anycast (this looks like it is going ahead).

          Another thing we could do is to change the DNS logic so that servers keep records in their cache beyond the TTL and use those as backup if the root or TLD is unavailable. Then even a DDoS that succeeded would have only marginal effect.
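That last proposal (essentially what later standards work calls "serve-stale") could look roughly like this. A toy model: the `upstream` callback, the use of OSError for "unreachable", and the fixed one-hour TTL are all assumptions for illustration.

```python
import time

class StaleTolerantCache:
    """Sketch of the proposal above: keep entries past their TTL and fall
    back to the stale copy only when the authoritative server is unreachable."""

    TTL = 3600  # assumed fixed TTL for the sketch

    def __init__(self, upstream):
        self.upstream = upstream  # callable: name -> answer; raises OSError if down
        self.cache = {}           # name -> (answer, expiry)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(name)
        if entry is not None and entry[1] > now:
            return entry[0]                 # fresh cache hit
        try:
            answer = self.upstream(name)
        except OSError:                     # upstream down, e.g. root under DDoS
            if entry is not None:
                return entry[0]             # serve the expired copy instead
            raise
        self.cache[name] = (answer, now + self.TTL)
        return answer
```

The point of the design is that a DDoS on the root only hurts names nobody has resolved recently; everything in the cache keeps working.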

    • Fess up... (Score:5, Funny)

      by FuzzyDaddy ( 584528 ) on Friday January 24, 2003 @03:26PM (#5153114) Journal
      How many people just went and checked to see if there's an .elvis TLD?


      Actually, I've always had a theory that Microsoft coined ".msn" because they wanted to get their own top level domain.

  • by Anonymous Coward
    And they assumed the other 12 were exactly the same? Wouldn't looking at 2 at least be merited?
  • AOL (Score:5, Interesting)

    by almeida ( 98786 ) on Friday January 24, 2003 @11:11AM (#5151191)
    On a similar note, I noticed that AOL causes a lot of DNS lookups. From what I can see from my firewall logs, each TCP connection from an AOL user is handled by a separate proxy. Each proxy then does its own lookup on the host. So, for a normal sized webpage with some images or whatever, you get like 10 TCP connections for the content and 10 UDP connections for the DNS lookup. Seems kind of excessive to me.
    • Re:AOL (Score:5, Interesting)

      by cyb97 ( 520582 ) <cyb97@noxtension.com> on Friday January 24, 2003 @11:15AM (#5151218) Homepage Journal
AOL always screws up webpage statistics (which I guess can be a good thing, as the only doofuses that really really care about statistics are marketers?)...

      I can't count the number of times I've seen a massive spike in number of "unique visitors" just to look at the hosts and find *.proxy.aol.com filling the whole thing....

      • Re:AOL (Score:3, Interesting)

        by micromoog ( 206608 )
Hmmmmmmm, I wonder if this could even constitute fraud. If web publishers believe a larger number of AOL'ers are visiting their site than actually do, wouldn't they be inclined to pay more for adverts on AOL's portal?
        • Re:AOL (Score:4, Informative)

          by toast0 ( 63707 ) <slashdotinducedspam@enslaves.us> on Friday January 24, 2003 @11:41AM (#5151389) Homepage
          i doubt it.

          it is common knowledge that aolusers come through aol's proxies, and the proxy hostnames contain proxy in them, so it should be fairly obvious

          also, anybody who is running web statistics should know the following things:
          1) web statistics are inaccurate
          2) proxies screw up web statistics
          3) not all proxies are visible
          4) refer to 1 and 2

        • It's a wash (Score:2, Interesting)

          by ShaggyZet ( 74769 )
It all comes out in the end anyway. Say AOL has 100 proxies. If 10,000 AOL users visit your site, then it'll look like only 100 unique visitors. Granted this is more than the 1 unique visitor that it would look like for most proxies, but it's still less than the actual number, not more. Presumably there are significantly fewer proxies at AOL than there are users. It only really matters to small sites like yours and mine, where we're getting excited about each and every visitor, and 10 all at once makes us need a new keyboard.
        • Re:AOL (Score:2, Interesting)

          by HD Webdev ( 247266 )
          Not really. AOL isn't doing it for any Evil Intent.

          Usually, the most important data is the first page hit. WHICH PAGE/SITE REFERRED THE PERSON TO THIS WEB SITE? In most cases, where the person is connecting from is not nearly important as where they found the link.

          An ecommerce example: When showing site statistics, I advise my ecommerce clients to put their money in the referring sites that yield the highest 'bought a product' ratio.*

          Once in a while, the client will be awed by the AOL total hits statistics and want to put money there. I then explain that they will most likely increase their bandwidth use with little return and have to pay AOL for the privilege.

          A site that depends on banner ads example:

          Put money in referrer sites where the referred person viewed the most pages AND clicked the most ads per person. Accurate statistics for that are easy with PHP [php.net] scripting (or your language of choice). Bonus points for using a script that counts returning visitors and compares that to where they were originally referred from.

          * It has crossed my mind that I could be mean/funny and generate 'how many attempts it takes an AOL user to fill out a form correctly' statistics.
    • Re:AOL (Score:2, Informative)

      by Goodbyte ( 539941 )
      Maybe so, but all requests should go to a dns-server at AOL which will cache the results. So if all users make a request for a domain in the same top-domain, there should still only be one request to the root-server.
    • Re:AOL (Score:3, Interesting)

      by shoppa ( 464619 )
      First of all, it's mostly a given that AOL's name service is going to suck rocks. But the way you describe is the opposite of the problem they had a few years ago:
      • AOL would cache DNS lookups for much much longer than the expire time
      • Sometimes this cache would live on for weeks past expire time
      Maybe the current situation is an over-reaction to the bad effects caused by the previous screwups.
      • Re:AOL (Score:3, Interesting)

        by TheTomcat ( 53158 )
        AOL is not alone in this. I've seen many (largish) ISPs ignore DNS TTLs. Makes switching IPs a disaster.

        S
    • by robbo ( 4388 ) <slashdot&simra,net> on Friday January 24, 2003 @12:03PM (#5151532)
That's a local problem, between the user and AOL's DNS servers. The article is describing a different, higher-level problem between, for example, AOL's DNS servers and the root-level servers. If an AOL user's machine makes ten DNS requests for the same host, only one request should propagate past AOL's nameservers, but instead a misconfigured DNS will propagate all ten.

      I can suddenly see lots of slashdot users thinking-- oh, I should fix my firewall, I have all these DNS requests; but that's normal operation for a client workstation. Your firewall would be broken only if all your DNS queries failed, and you'd know it pretty fast if that were the case.
    • Each proxy then does its own lookup on the host.

But do all the proxies do those lookups on different DNS servers? I doubt it. You could have a large number of proxies using just a single DNS server, although you would probably use two or three for redundancy. But the redundant servers could still query each other first and only go outside if the result was missing or the other server was down. And then again, you shouldn't need that many TLD lookups almost no matter how stupidly you do it.
  • by Anonymous Coward
Real men know IPs.
    • I just decided to put the entire Internet into my hosts file.

      bbh
      • by jovlinger ( 55075 ) on Friday January 24, 2003 @11:50AM (#5151433) Homepage
        ya know, that's not impossible these days.
What with the private subnets you can't get to, and corporations buying up whole classful IP blocks, you're not going to need to map every single IP to a set of names.

Say you need to map 2**30 names. Give each name 256 bytes to list the hosts using that IP. You've just used 256GB. A lot, yes, but I'm willing to bet at least one person reading this has that much storage dedicated to MP3s.
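The back-of-the-envelope figure checks out: 2**30 names at 256 bytes each is exactly 256 GiB.

```python
names = 2 ** 30           # number of names to map (from the comment)
bytes_per_name = 256      # room for the IP plus the hosts sharing it
total = names * bytes_per_name
print(total // 2 ** 30, "GiB")   # -> 256 GiB
```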
Out of curiosity, does anyone know the limitations of a hosts file on either Linux or a commercial Unix, and how the performance would be if one had a 256GB hosts file? What search algorithm it uses, etc?

          It may very well be faster to use DNS.

          dild@tr0n
          • Wow crazy. I thought you just made that number up, but if I make a text file with "xxx.xxx.xxx.xxx" in it, find out how big that is, and multiply times the number of hosts in ipv4, I get 268.4 gigabytes. Very interesting. ipv6 is gonna keep that dream a dream.
  • .elvis? (Score:2, Funny)

    by Anonymous Coward
    That's a thought! But we'd have to create servers for .vim, .pico, and .emacs as well...

  • 98% of the DNS queries at the root level are unnecessary. [...] This means that the remaining 2% of necessary DNS queries are probably not necessary either."

    Uhh... right, eliminate 100% of the root queries, they aren't needed..

    sheeeesh..
  • Badly written (Score:2, Insightful)

    The following quote seems badly written to me...
    About 12 percent of the queries received by the root server on Oct. 4, were for nonexistent top-level domains, such as ".elvis", ".corp", and ".localhost". Registered top-level domains include country codes such as ".au" for Australia, ".jp" for Japan, or ".us" for the United States, as well as generic domains such as ".com", ".net", and ".edu". In addition, 7 percent of all the queries already contained an IP address instead of a host name, which made the job of mapping it to an IP address irrelevant.
Reading through it takes a couple of attempts to realise that they're not classing the ccTLDs and gTLDs among the 12 percent of nonexistent TLDs - they're providing them as examples of what a real domain is - yet they take more of the paragraph to do this than to explain the nonexistent TLDs.

    Just my 2p/2 worth.
  • Why... (Score:5, Insightful)

    by jascat ( 602034 ) on Friday January 24, 2003 @11:16AM (#5151222)
    is it that hard to configure a firewall to explicitly allow outgoing traffic rather than allow all? It seems that everyone thinks that the only bad traffic is the stuff coming in from the outside...
    • Re:Why... (Score:5, Informative)

      by PhxBlue ( 562201 ) on Friday January 24, 2003 @11:58AM (#5151489) Homepage Journal

Excellent point, and I hope whoever has mod points today will mod the parent up. Your PC is a sieve of information even with nothing more than a web browser and E-mail client. When you install IM applications or, gods forbid, file-sharing applications like KaZaa, the sieve becomes a fount.

      I've made a couple other posts regarding this in the past week or so, to point out that most applications don't need access to port 80, for example. E-mail doesn't need it, and IM programs certainly don't need it. ICQ uses a port in the 400 range somewhere, IIRC, for its message traffic; but it uses port 80 to report usage statistics to Mirabilis and to download banners. So does it really need port 80? Nope--you can save yourself bandwidth and gain privacy by blocking it.

      The list goes on, of course; but my biggest gain from firewalling my PC has been the freedom to restrict outgoing traffic.

  • It's no wonder these servers have so many problems - there's thirteen of them! They need a lucky #14 - a Bilbo Baggins for their horde of dwarves. That'll stop those DoS attacks and unnecessary requests right away!

  • 99.9% (Score:5, Insightful)

    by dirvish ( 574948 ) <dirvishNO@SPAMfoundnews.com> on Friday January 24, 2003 @11:16AM (#5151227) Homepage Journal
    This doesn't even take into account the 99.9% of web pages suck or are unnecessary anyways.

What standard is this based on? My website sucks and is only necessary for my own amusement, but it is similar to my favorite kind of sites on the web. I would use the web a lot less if it wasn't for those 99.9% of web sites. Most blogs, for instance, suck and are unnecessary, but at the same time the total of all the blogs is having a big impact on news outlets and the media.
  • News you can use (Score:5, Interesting)

    by El_Smack ( 267329 ) on Friday January 24, 2003 @11:20AM (#5151246)
    From the article:
    "Researchers believe that many bad requests occur because organizations have misconfigured packet filters and firewalls, security mechanisms intended to restrict certain types of network traffic. When packet filters and firewalls allow outgoing DNS queries, but block the resulting incoming responses..."
    It's nice to see a story with info I can take and use. This is actually "stuff that matters".
    Kudos to the researchers, and now I am off to check my firewall.
    • Re:News you can use (Score:2, Interesting)

      by LordNimon ( 85072 )
I have a Linksys cable modem firewall/router. How do I fix mine? It appears that pretty much every firewall allows outgoing requests but blocks incoming ones.
      • by El_Smack ( 267329 )
        Well, I was more speaking of the guys who cobbled together their own firewall using 2 NIC's and their OS and software of choice. That's what I did, and I easily could have screwed up an iptables command or a default rule and blocked incoming DNS.
With a purchased firewall, especially if you can't edit it yourself, I would have to assume (uh oh) that the manufacturer got the basics right, at least. I really don't know of a way to check those. You could try an online port scan like sygate.com offers. But your firewall is probably using a "stateful" method, which would allow DNS to come back if you initiated the request, but block a NEW request that originated outside. So it will probably say your port 53 is blocked.
    • Re:News you can use (Score:5, Informative)

      by lanner ( 107308 ) on Friday January 24, 2003 @02:17PM (#5152654)

I am crazy about my home network firewall configuration and, when it is under my authority, about the firewall rules of whatever business employs me at the time.

      An important but often left out part of a firewall's configuration is logging. Attempts to do things that should never be done should not just be dropped, they should be logged and then brought to your attention.

Some examples:

      If your local network is 192.168.2.128/29 then any outgoing packet that does not have a source within the range of 192.168.2.129 and 192.168.2.134 should be dropped AND logged. Someone on YOUR network is either stupid or trying to spoof someone!

      The same thing goes for ports and protocols that should not be outgoing on your network.

      Okay, so getting probed on TCP 80 is getting annoying now that you are logging everything that is not allowed. Fine, explicitly drop it without logging.

      Conform to RFC1918 -- don't route IP private space to or from the Internet. Route it to /dev/null or null0 AND filter it. AND if it came from YOUR network, log it. The quantity of ISPs that fail to conform to this is astounding and scary. You don't need this traffic moving around your ISP -- use GRE or MPLS tunneling instead.

      Also, conform to BCP38 ftp://ftp.rfc-editor.org/in-notes/bcp/bcp38.txt

      After tuning your firewall logging filters, you will find that when new attacks occur or something is up, you notice. Otherwise, you are blind and dumb to what your firewall is doing, which means that you are blind and dumb to what your network is doing.
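lanner's first rule can be expressed with Python's stdlib ipaddress module. This is a sketch of the check a packet filter would apply, not an actual firewall; the verdict strings are made up for illustration.

```python
import ipaddress

# The local network from the example above; usable hosts are .129-.134.
LOCAL_NET = ipaddress.ip_network("192.168.2.128/29")

def egress_verdict(src_ip):
    """Drop AND log any outgoing packet whose source isn't ours: someone
    on the local network is either misconfigured or trying to spoof."""
    if ipaddress.ip_address(src_ip) in LOCAL_NET:
        return "accept"
    return "drop-and-log"
```

The same membership test generalizes to the RFC1918 point: checking a source against 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 is one `ip_network` containment test each.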

  • Ignant (Score:5, Interesting)

    by edraven ( 45764 ) on Friday January 24, 2003 @11:21AM (#5151250)
    In addition, 7 percent of all the queries already contained an IP address instead of a host name, which made the job of mapping it to an IP address irrelevant.


    Is it just me, or is this a description of a reverse lookup? How does that qualify as unnecessary? This is a pretty common step in troubleshooting, and some software does a reverse lookup following a forward lookup to verify that the hostname it gets back is the same one it started with.

    Chuckles
    • Re:Ignant (Score:2, Insightful)

      by deepchasm ( 522082 )
      No, I assume the researchers are not that stupid.

      They mean that some software, designed to take a fully qualified domain name as input, *always* looks up the input by DNS, even if someone has typed in an IP rather than a hostname - making the lookup unnecessary.

      If it was a reverse lookup it wouldn't just contain an ip (e.g. "1.2.3.4"), it would be "4.3.2.1.in-addr.arpa", that's how reverse lookup works.
    • Re:Ignant (Score:5, Informative)

      by dachshund ( 300733 ) on Friday January 24, 2003 @11:43AM (#5151399)
      Is it just me, or is this a description of a reverse lookup? How does that qualify as unnecessary?

      I believe that reverse lookups are identified by an "inverse" status flag in the request header. One can only assume that the authors were not counting this sort of valid query, and were only focusing on the "standard" queries that contained IP addresses. Those certainly would, I think, be rather pointless.

      • Re:Ignant (Score:3, Insightful)

        by pde ( 28299 )
        Is it just me, or is this a description of a reverse lookup? How does that qualify as unnecessary?

        I believe that reverse lookups are identified by an "inverse" status flag in the request header. One can only assume that the authors were not counting this sort of valid query, and were only focusing on the "standard" queries that contained IP addresses. Those certainly would, I think, be rather pointless.



Ummm, no. "inverse" does not in any way, shape, or form identify a request for the hostname associated with an IP address.

And the lookups being described are not reverse lookups, either. A 'reverse lookup' for 1.2.3.4 is a query for the PTR RR associated with 4.3.2.1.in-addr.arpa. The queries being described are for the A RR associated with the FQDN 1.2.3.4. There is no such TLD as '4.'

      • by Agent Green ( 231202 ) on Friday January 24, 2003 @01:12PM (#5152159)
Reverse lookups work by sending a PTR request containing an IP address to a DNS server, versus an A request with a name. This snippet from a tcpdump shows a request from one of my boxen to my DNS server:

        Reverse:

        12:59:31.814847 defender.licensedaemon > gimpy.domain: 20091+ PTR? 1.65.0.199.in-addr.arpa. (41)
        12:59:31.816003 defender.1029 > arrowroot.arin.net.domain: 19500 [b2&3=0x10] [1au] PTR? 1.65.0.199.in-addr.arpa. (52)

        Forward (complete request cycle from defender to gimpy):

        13:11:54.760484 defender.globe > gimpy.domain: 47604+ A? www.gtei.net. (30)
        13:11:54.761597 gimpy.1029 > dnsauth1.sys.gtei.net.domain: 51438 A? www.gtei.net. (30)
        13:11:54.977584 dnsauth1.sys.gtei.net.domain > gimpy.1029: 51438*- 1/3/3 A 128.11.42.31 (167) (DF)
        13:11:54.978626 gimpy.domain > defender.globe: 47604 1/3/0 A 128.11.42.31 (119)

        DNS & BIND is the first book to use for more info, though.
    • Re:Ign(or)ant (Score:5, Informative)

      by anticypher ( 48312 ) <anticypher@gm a i l .com> on Friday January 24, 2003 @12:07PM (#5151565) Homepage
It's not just you: the two completely different DNS databases require different lookups, a common enough misunderstanding. Consider yourself less ignorant now :-)

To do a reverse lookup, the resolver sends a different request type, asking for a PTR resource record. The form is to put the IP address (or network address) backwards and append .in-addr.arpa to the request. All (well, ok, most) IPv4 addresses are mapped under the .in-addr.arpa domain. But these misconfigured resolvers are sending A (address) record requests with an IP address included instead of a domain name.

      If you have your own DNS server and watch your DNS traffic, you can see these two effects happening differently.

      For a forward (A or MX record) lookup:

      Local server queries root server for an A record

      Root server responds with NS record for the registry of the domain

      Local server contacts registry server for A

      Registry server responds with NS records for the domain

      Local server contacts the domain's server, which responds with an A record

      Local server answers the resolver with the A record.

      For a reverse (PTR) lookup, the resolver traverses the netblock providers:

      Local server queries the root servers with a properly constructed PTR request (z.y.x.w.in-addr.arpa.)

      Root server knows only where major net blocks are allocated, and returns the NS record of a Regional Internet Registry (RIPE, APNIC, etc)

      Local server again queries an RIR NS with the PTR

      RIR NS knows which ISPs hold which blocks, so responds with the ISP NS record

Local server again queries the ISP NS server, which either has the reverse hostname, or once again returns the NS record of the local DNS server.

      The two different types of queries follow different paths, either Name Registries or Netblock Providers. This article points out that many resolvers are broken because they allow obvious reverse lookups to pass as forward lookups, and then can't deal with the resulting error messages.

I have often seen broken resolvers repeatedly query DNS servers I manage, possibly because, as the article points out, fucked firewalls allow the requests out but block the requests from getting back to the resolver. It happens so much I just ignore it when I see it; it's not worth notifying the admins because they are usually too clueless to know how to fix the problem.

      the AC

    • Is it just me, or is this a description of a reverse lookup? How does that qualify as unnecessary? This is a pretty common step in troubleshooting, and some software does a reverse lookup following a forward lookup to verify that the hostname it gets back is the same one it started with.

      I think they're talking about a forward DNS lookup with an IP address, which is indeed retarded.

      Forward DNS = resolving foo.com -> 12.34.56.78: this works by looking for an A record for foo.com; the A record contains the IP address.

      Reverse DNS = resolving 12.34.56.78 -> foo.com: this works by translating the IP address into a name (78.56.34.12.in-addr.arpa), then looking for a PTR record for that name, which will contain a hostname (foo.com).

All domain names actually end in a period that you usually don't see and don't use, for example "foo.com." or "78.56.34.12.in-addr.arpa."; the trailing dot stands for the root. The root nameservers are technically authoritative for "."; that's the definition of a root nameserver. So, what happens if you try to look up "12.34.56.78."? The dot means that's a FQDN, so you must be trying to do a forward lookup! Think of "78" as the TLD, "56" as the second-level domain, and look up "56.78" the same way you would look up "foo.com". There's no technical reason why a TLD couldn't be a number - alphanumeric characters (and hyphens) are allowed.

      So yeah, this is dumb, but it happens more often than you might think. I was going to write a tirade about stupid registrars creating bogus glue records, but I'm not awake enough to do so coherently, so I'll spare you.
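The name translation described in the comment above is just octet reversal plus a fixed suffix, small enough to show in full:

```python
def reverse_name(ip):
    """Build the in-addr.arpa name that a resolver queries for a PTR record,
    per the scheme described above: reverse the octets, append the suffix."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"
```

A broken resolver skips this step and asks for an A record for the bare IP string, which is how "12.34.56.78." ends up at a root server as a forward query.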
  • Serious question (Score:5, Insightful)

    by Anonymous Coward on Friday January 24, 2003 @11:21AM (#5151251)
    This doesn't even take into account the 99.9% of web pages suck or are unnecessary anyways. This means that the remaining 2% of necessary DNS queries are probably not necessary either.

    I see this kind of thing all the time on /.--completely unedited, barely literate, rant-style submissions. Why don't the /. editors tone down or eliminate the rhetoric from submissions about otherwise worthy topics, or at least fix the grammar and typos?

    I know, I'm going to get blasted for saying this, but I'm convinced it's one of those "little things" that makes /. look to the rest of the world more like a bunch of know-nothing kids typing at each other than a group of technically literate activists with something of value to contribute.

    I now return you to your regularly scheduled rant...

    • There's a good reason why " /. look[s] to the rest of the world more like a bunch of know-nothing kids typing at each other than a group of technically literate activists with something of value to contribute."...

      The only contribution I make because of Slashdot is about $5000 annually to literacy organizations.
    • Last time I looked the idea was to collate interesting stories and articles from around the web and discuss them.

I'm not trying to go highbrow or anything; I really enjoy the in-jokes and the strong opinions, there's nothing wrong with it.

It's just that I don't feel a story submission should be full of personal opinion - that's up to the slashdotters to add in the comments, where it's subject to the ebb and flow of moderating - or am I missing something??
  • by jb_nizet ( 98713 ) on Friday January 24, 2003 @11:22AM (#5151261)
    About 12 percent of the queries received by the root server on Oct. 4, were for nonexistent top-level domains, such as ".elvis", ".corp", and ".localhost"

    Why don't DNS servers have a list of correct top-level domains, in order to answer directly, without going to a root server? The list is short, compared to the information the DNS server caches already, and the content of the list doesn't change so often. This list could be downloaded once in a day or so, from the DNS root servers.

    When packet filters and firewalls allow outgoing DNS queries, but block the resulting incoming responses, software on the inside of the firewall can make the same DNS queries over and over, waiting for responses that can't get through

    Why the hell does a firewall accept outgoing queries to black-listed domain names, if they are configured to block the response to these queries? This seems like a serious misconception to me.

    JB.

    • by dfn5 ( 524972 ) on Friday January 24, 2003 @11:36AM (#5151350) Journal
      Why don't DNS servers have a list of correct top-level domains, in order to answer directly, without going to a root server?

      This is actually an excellent idea, and one that people who use opennic [opennic.org] already follow. The root zone "." at OpenNIC is set up to be slaved, so my DNS server downloads a copy of the root zone, which has the information for all the top-level domains. If the root servers get DoSed, I don't care, because I don't use them anymore. Everyone should use OpenNIC. It is the Internet-friendly thing to do. :)
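The filtering idea in the comments above could look something like this minimal Python sketch. The `VALID_TLDS` set here is a tiny illustrative subset, not the real root zone, and `should_forward_to_root` is an invented helper name; a real resolver would slave the actual root zone as described.

```python
# Hypothetical sketch: refuse to bother the root servers for TLDs that
# aren't in a locally slaved copy of the root zone.
# NOTE: VALID_TLDS is a tiny illustrative subset, not the real list.
VALID_TLDS = {"com", "net", "org", "edu", "gov", "mil", "int", "arpa"}

def should_forward_to_root(qname: str) -> bool:
    """Return True only if the query name ends in a known TLD."""
    tld = qname.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld in VALID_TLDS

print(should_forward_to_root("slashdot.org"))    # True
print(should_forward_to_root("graceland.elvis")) # False
```

With a local list like this, queries for ".elvis" and friends would never leave the local name server at all.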

  • So I guess there IS such a thing as a stupid question...
  • by dachshund ( 300733 ) on Friday January 24, 2003 @11:30AM (#5151312)
    About 12 percent of the queries received by the root server on Oct. 4, were for nonexistent top-level domains, such as ".elvis", ".corp", and ".localhost".

    And that's a problem? My understanding was that dealing with this sort of thing was exactly the purpose of the root DNS servers. If every ISP's DNS server were pre-configured to recognize valid and invalid top-level domains, you could just set them up to go straight to the specific DNS servers handling those domains (.com, .net, .org, etc.), and there would be no need for a root-level system.

    The argument for allowing this kind of cracked query through to the root server is that it makes it easy to add new domains (.elvis, .corp, what have you) without forcing everyone to reconfigure their DNS boxes for each new top-level domain.

    • by aridhol ( 112307 ) <ka_lac@hotmail.com> on Friday January 24, 2003 @11:48AM (#5151425) Homepage Journal
      Actually, according to RFC 2606 (Reserved Top Level DNS Names) [rfc-editor.org], .localhost can be blocked by the local DNS, as it is an invalid name (along with .test, .example, .invalid, and .example.(com|org|net)). These are supposed to be used for testing and documentation, so if they aren't in use, they may as well be blocked.
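The RFC 2606 check mentioned above could be short-circuited locally with something like the following rough Python sketch. `is_reserved` is an invented helper name, and a real deployment would answer NXDOMAIN from the name server configuration rather than application code.

```python
# Sketch of RFC 2606 filtering: reserved names never need to leave the
# local resolver. (is_reserved is an invented name for illustration.)
RESERVED_TLDS = {"test", "example", "invalid", "localhost"}
RESERVED_SECOND_LEVEL = {"example.com", "example.net", "example.org"}

def is_reserved(qname: str) -> bool:
    """True for names RFC 2606 reserves for testing and documentation."""
    labels = qname.rstrip(".").lower().split(".")
    if labels[-1] in RESERVED_TLDS:
        return True
    return ".".join(labels[-2:]) in RESERVED_SECOND_LEVEL

print(is_reserved("foo.localhost"))   # True
print(is_reserved("www.example.com")) # True
print(is_reserved("slashdot.org"))    # False
```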
    • by alanjstr ( 131045 ) on Friday January 24, 2003 @11:54AM (#5151463) Homepage
      No, but the ISPs are supposed to query once a day or so and cache the results, so that the root server isn't the DNS server that everyone queries.
    • DNS2 (Score:4, Interesting)

      by emil ( 695 ) on Friday January 24, 2003 @12:16PM (#5151646)

      Really, we should have some sort of gnutella-like system for distributing zone files. The problem with DNS is that it was designed a LONG time ago before the more recent advances in P2P networks.

      There shouldn't be much argument at this point that we need DNS2 - the current system is vulnerable to attack.

      The problem is that, if you distribute zone files (or pieces of zone files) among a loosely-connected network, then you will need to establish trust. These zone files would have to be signed, and the certificate authority then becomes the bottleneck.

      It hurts my head.

  • Actually, go deeper than that... what really needs to happen is a redesign of the underlying core of the whole damn thing. DNS, DHCP, and routing need to be combined into a single protocol and server implementation (we already partially have that in DDNS)... but taken a step further (and I am being intentionally light on details here, since it's a huge subject), it would make the whole thing easier, especially in today's world where everyone and their brother has a web site (or other service) attached to their cable/DSL line, and they can't get a static IP, never mind getting IPs they might own routed behind that IP to the rest of the world. One protocol that could publish IP/domain name/routing for the whole shooting match through a rooted, treed, P2P system... (the root maintains order, the tree allows clients to work backwards until they find the information they are looking for or hit the root, and the P2P layer moves updates around with sequence numbers, probably MD5-hashed or something, to maintain chronology)... this is by no means the full idea, but it might be a good seed....
    • by jefftp ( 35835 ) on Friday January 24, 2003 @12:16PM (#5151647)
      The fact that DNS, a 20-year-old design, still works after being scaled several orders of magnitude beyond its original environment is proof that DNS doesn't need to be redesigned. The initial design is nothing short of genius. The extensions to the initial design (dynamic updates) build upon already solid technology.

      I run a DNS server, I've looked at DNS packets, and every time I ask the Internet to tell me who the heck slashdot.org is and it comes back with an IP address I'm amazed. My network asks strangers for help and those strangers say: Hey, try here. Bam! Slashdot.org pops up in my browser.

      You cannot "combine" DNS, DHCP, and routing into a single protocol. Hell, get three network engineers together sometime and try to get them to agree on the best interior gateway routing protocol... EIGRP, OSPF, RIP.

      Routing information is extremely different from domain name information. The two have nothing in common other than IP addresses. You have to include not only information about who your neighbors are, but also what type of links are between you and your neighbors, and how congested those links are. Now, what about your neighbors' neighbors? Oh, we'll track that too, and also keep a set of tables that show us the next two best reconfigurations should any of the links stop working. Unless you're just talking about RIP for routing.

      DHCP on the other hand is about getting clients configured for a network. They can then use DDNS to update their DNS record in a local DNS server. DHCP can do much more than just say: Here's your IP. It can also tell a client: here's where you should get your operating system from, and here's the voice over IP gateway, and here's the server where you should send your management info to, and here's the best local printer to use. Most people don't have clients that can handle that type of information, however.

      It's not just "if it's not broke, don't fix it" this is a case of "it frelling works great, keep your hands off of it or I'll kick you in the jimmy."
  • I see how the article describes the problem, especially

    "About 70 percent of all the queries were either identical, or repeat requests for addresses within the same domain."

    What I don't see is solid suggestions for improvement, except indirect suggestions to name server operators to clean up their act. Perhaps the root servers could be made smarter, or fronted by a cache, so that repeat requests get answered before the root name server itself has to handle them. Maybe the root servers should just refuse to honor the most common unnecessary queries. That might set off alarms in the lower-level DNS servers, which could get some real action across the board.
  • How about (Score:2, Interesting)

    by HD Webdev ( 247266 )
    1. Bad request received by a root server

    2. Root server notices it's of the 'non-existent top level domain' variety.

    3. Root server sends back information pointing to an IP that shows a web page with a nicer version of 'either you clicked a FrontPage-created link, you are a monkey banging a banana on the keyboard, or your ISP administrators don't have a clue'.

    Advantages: It'll embarrass ISPs. It'll cut down on the traffic to the root servers.

    Disadvantages: It'll only be noticeable with web queries.
  • by MarkGriz ( 520778 ) on Friday January 24, 2003 @11:37AM (#5151359)
    How about coming up with a DNS Moderation system.
    The root servers give say 50 karma points to each IP address issuing a query.
    If the query is unnecessary, it gets modded "-1 redundant".
    When karma hits 0, it stops responding to further queries.
    DNS eventually stops working at that site, admin pulls head out of ass and fixes the problem causing the redundant DNS queries.
  • One factor... (Score:5, Informative)

    by ZoneGray ( 168419 ) on Friday January 24, 2003 @11:43AM (#5151395) Homepage
    One factor is that I suspect people are increasingly lowering their TTLs, expires, or whatever that parameter is. Most of the manage-it-yourself DNS providers now offer an option to reduce it to a few minutes, which makes it much easier to move hosts around. And while a low setting increases DNS traffic, it rarely if ever incurs an extra cost to the domain holder.
    • That makes no sense...

      We're talking about the root nameserver here, not the server that handles .com. So if I lower the TTL on my domain, that increases the traffic to the server that handles .com, not the server that handles "."

      Basically, lowering the TTL on my domain doesn't cause DNS servers to forget which machine is authoritative for all .coms.
  • Original story... (Score:5, Informative)

    by Goodbyte ( 539941 ) on Friday January 24, 2003 @11:43AM (#5151400) Homepage
    It seems this originally came from UCSD, so for when the page gets /.'ed, here is another one: Original story [ucsd.edu], and the interesting pie chart from the original story [ucsd.edu].

    It obviously seems to be a lot of junk traffic, but the only parts we can definitely call bad requests are slices 3 and 4 of the chart. Misspellings must still go to the root, since such domains may actually exist!

    It would be nice to analyze the 70% repeated or identical queries; probably a lot of the traffic can be accounted for (or else there are a bunch of administrators out there who need a good manual on BIND).

  • Scientists at the San Diego Supercomputer Center found that 98% of the remaining (necessary) DNS queries are related to porn websites!
  • 1. Most web users (and unfortunately lots of admins) don't understand DNS at a theoretical level. They also don't understand a lot of other stuff, like security, but...

    2. The amount of time it takes to set up DNS correctly and efficiently with the existing products, especially BIND, is a lot more than it takes to just get them functioning.

    3. The research would have been more interesting if they had gone and looked at, say, 1000 random requestors who were doing things wrong, and found out why and how they were screwed up.

    4. It would be nice if local DNS servers had a list of valid top-level domains, so that they could kill requests for non-existent ones.

    THAT would be stuff that matters!

  • by FreeLinux ( 555387 ) on Friday January 24, 2003 @11:50AM (#5151434)
    I'm surprised that they did not mention massive numbers of "broken" requests from Windows 2000/XP systems. I see this all the time due to misconfigurations. Administrators often set up Windows 2000 DNS servers incorrectly, leaving Windows 2000/XP systems (workstations and servers) configured such that they constantly try dynamic DNS updates against the wrong DNS servers, even the root servers.

    Linux, too, has some issues here. Obviously misconfigured DNS servers will always be a problem, but distros like Red Hat ship IPv6 support compiled into the BIND RPM, which results in an IPv6-format (AAAA) query followed by an IPv4 (A) query for every request.
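To make the double-lookup effect concrete, here is a hedged illustration using Python's standard library as a stand-in for any dual-stack resolver client (this is not the BIND behavior itself). Restricting the address family means only IPv4 results are requested; "localhost" is used so no network round-trip is needed.

```python
import socket

# Asking only for IPv4 results means no separate IPv6-style lookup is
# made on our behalf; an unrestricted query may trigger both families.
infos = socket.getaddrinfo("localhost", 80,
                           family=socket.AF_INET,
                           type=socket.SOCK_STREAM)
for family, _, _, _, sockaddr in infos:
    print(family == socket.AF_INET, sockaddr[0])  # e.g. True 127.0.0.1
```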
  • Unnecessary Queries? (Score:3, Interesting)

    by Eskarel ( 565631 ) on Friday January 24, 2003 @11:53AM (#5151462)
    About 70 percent of all the queries were either identical, or repeat requests for addresses within the same domain. It is as if a telephone user were dialing directory assistance to get the phone numbers of certain businesses, and repeating the directory-assistance calls again and again.

    This is somewhat of an invalid metaphor both for the way DNS works and for the way computer caching works. Pretty much every local DNS server (unless my information is wrong) has some sort of caching system of varying efficiency. The problem is that, unlike humans, who are more likely to remember things that are repeated, a cache usually just consists of a series of entries which can quite easily be overwritten. Older entries will be overwritten if they aren't refreshed, or caching would never work for new, frequently accessed sites. It's quite easy to get an access pattern that evicts even the most frequently accessed entries, especially on a server with a great many users. By providing different servers for each chunk of users you can diminish this problem, but then you'll get requests from each server. DNS is an ugly system because it does an ugly job.
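The eviction effect described above is easy to demonstrate with a toy LRU cache. This is a minimal Python sketch; `TinyDNSCache` is an invented class, and real resolver caches are TTL-driven and vastly larger.

```python
from collections import OrderedDict

class TinyDNSCache:
    """Toy LRU cache: even a popular entry gets evicted when enough
    distinct names churn through a small cache."""
    def __init__(self, maxsize=3):
        self.maxsize = maxsize
        self.entries = OrderedDict()

    def get(self, name):
        if name in self.entries:
            self.entries.move_to_end(name)  # refresh recency on hit
            return self.entries[name]
        return None                          # cache miss

    def put(self, name, addr):
        self.entries[name] = addr
        self.entries.move_to_end(name)
        if len(self.entries) > self.maxsize:
            self.entries.popitem(last=False)  # evict least recently used

cache = TinyDNSCache(maxsize=3)
cache.put("popular.example", "10.0.0.1")
for i in range(3):                       # a burst of one-off lookups...
    cache.put("host%d.example" % i, "10.0.0.2")
print(cache.get("popular.example"))      # None -- evicted despite popularity
```

After three one-off lookups, the popular name is gone and the next query for it goes upstream again, which is exactly the repeat traffic the study measured.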

  • But I missed the search button in Mozilla and sent out an invalid HTTP request instead. And if I were being serious: it's nice that Mozilla tries to guess what I wanted to do and generates a bunch of other invalid addresses.
  • Let's ReCap!! (Score:2, Interesting)

    by mcoko ( 464175 )
    If the first 98% are unnecessary and the last 2% are unnecessary as well... that's 100%...

    That means that you just wished for the Internet to go away...

    or... you somehow figured out how an end user can magically come up with the IP for a hostname from thin air. Go you. You're a millionaire.
  • The Internet was shown to be a scale-free network [computerworld.com] by U. Notre Dame physicist Barabasi. That means the majority of web page requests are for only a fraction of the total web pages (the 'hubs').
    Thus the 98% of DNS queries might be needed for only a minority of connections (I am assuming that web traffic is the bulk of Internet traffic here).
  • by qix ( 22287 ) on Friday January 24, 2003 @01:15PM (#5152198)

    A DNS query for an IP address is a *BAD REQUEST*, contrary to what some of these other posters have said. Asking a root server to resolve anything in the first place is bad - they should only be asked for NS records - and in the second place, an IP address is not a valid domain name (unless ICANN has surreptitiously added 256 new top-level domains, namely the numbers 0 through 255).

    Most networks that I've seen are badly broken this way. The usual problem is that the network in question uses private address space (192.168.1.0/24, for example) but fails to install reverse DNS for these addresses, causing delays and other problems when machines try to get the name associated with their own IP address or that of a local machine connecting to them. Yes, you heard right - if you use any of the 192.168.x.x, 10.x.x.x, or 172.16-31.x.x addresses, you are broken unless you install DNS to resolve those addresses! This also goes for any IP netblock in general, although most ISPs these days are setting up dummy records for their unused IP space that'll cover their customers' allocations OK.
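For reference, the reverse-DNS query name for an IPv4 address is built by reversing the octets under in-addr.arpa. A minimal Python sketch (`ptr_name` is an invented helper name):

```python
def ptr_name(ip: str) -> str:
    """Build the in-addr.arpa name a resolver queries for a PTR record."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

print(ptr_name("192.168.1.10"))  # 10.1.168.192.in-addr.arpa
```

If nothing is authoritative for 1.168.192.in-addr.arpa, every such lookup leaks upstream, which is the breakage the parent comment describes.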
  • by Linuxathome ( 242573 ) on Friday January 24, 2003 @01:30PM (#5152319) Homepage Journal
    Let me get this straight. This is a study of DNS conducted by CAIDA at SDSC at UCSD? I need a host list for these acronyms!
  • It's the SPAM (Score:3, Insightful)

    by haapi ( 16700 ) on Friday January 24, 2003 @02:04PM (#5152546)
    I'll bet a large percentage of the queries, especially those for bogus top-level domains, are due to lookups by MTAs when receiving SPAM. Think of the numbers!

    This doesn't mean that even these queries shouldn't be handled better -- just that SPAM lookups cause a bunch of 'em.
