
98% of DNS Queries at the Root Level are Unnecessary
LEPP writes "Scientists at the San Diego Supercomputer Center found that 98% of the DNS queries at the root level are unnecessary. This doesn't even take into account the 99.9% of web pages that suck or are unnecessary anyway. This means that the remaining 2% of necessary DNS queries are probably not necessary either."
In other news (Score:5, Funny)
In related news (Score:4, Funny)
Re:In related news (Score:2, Insightful)
> 74.4% of all statistics are made up on the spot.
That's right! A complete study would probably have shown that first number to be much higher than 99%.
Heck, we're 3 for 3 so far
Re:In related news (Score:5, Funny)
Re:In related news (Score:5, Funny)
Re:In related news (Score:5, Funny)
Vatican (Score:5, Funny)
Re:In other news (Score:2)
For more about bad DNS data, look here [caida.org] to see what DNS servers get in the way of stupid data.
Re:In other news (Score:3, Funny)
Highlight... (Score:2, Informative)
Now there's your 2 percenter right there!
Re:Highlight... (Score:5, Informative)
If the authors actually thought about how the DNS works they would realise the reason for this. A DNS server that gets a request for .com will consult the root the first time and then cache the result. So even though the server might then get a million hits in .com it won't ask the root again.
If the server tries to query for a non-existent domain it will get back a 'non-existent' response. It will cache that response for some time, but the chance of getting a cache hit on it is actually pretty low.
So if you have a properly configured DNS server with a bunch of web surfers who view 1 million pages across 20 TLDs, plus 1,000 bogus names, it will generate 20 hits they would classify as genuine and 1,000 that were 'unnecessary'.
That is how the system is meant to work.
The 70% of repeated requests are likely to include outright attacks as well as misconfigured DNS systems.
The problem with dealing with these issues is that a DNS query is pretty cheap to handle, cheaper in fact than most of the proposed defenses. It is probably more expensive for a DNS server to check IPs against a blacklist than to just return the damn data...
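To put those numbers in code form (a toy sketch; the TLD and query names are made up, but the proportions match the example above):

import random

real_tlds = [f"tld{i}" for i in range(20)]         # stand-ins for com, net, org...
bogus_names = [f"bogus{i}" for i in range(1000)]    # typos, IP literals, etc. - each one unique

lookups = random.choices(real_tlds, k=1_000_000) + bogus_names

cache = set()       # TLD delegations (and negative answers) this resolver already holds
root_hits = 0
for name in lookups:
    if name not in cache:
        root_hits += 1          # one trip to a root server per unseen name
        cache.add(name)

print(root_hits)    # 20 'genuine' hits plus 1,000 one-off misses = 1,020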
Re:Highlight... (Score:2, Insightful)
Re:Highlight... (Score:5, Informative)
A way to tell would be to see how many of the queries were looking for mx records.
I suspect that people using dummy email addresses like 'a@b.c' for subscriptions are another major cause of the misfires.
The browsers doing search from the address bar probably reduces the number of misfires. A modern browser will only go to DNS if it sees something like foo.bar. If it just sees foo it will typically try foo.com and then go bang a search engine.
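Roughly, that heuristic looks like this (a sketch only -- the exact rules differ between browsers):

def handle_address_bar(text: str) -> str:
    # If it looks like a hostname or URL, resolve it; otherwise guess .com, then search.
    text = text.strip()
    if "://" in text or "." in text:
        return f"resolve {text} via DNS"
    return f"try {text}.com, else hand '{text}' to a search engine"

print(handle_address_bar("slashdot.org"))   # resolve slashdot.org via DNS
print(handle_address_bar("slashdot"))       # try slashdot.com, else search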
Another reason I suspect spam is a major issue in the misfires is that lots of spam filters do lookups on sender addresses, and those frequently point to non-existent domains. Also, the spam senders rarely do even the most basic filtering on their lists -- you can tell, since every now and again you get a spam with the full recipient list at the top and you can see the broken addresses right there.
Re:Highlight... (Score:5, Informative)
Well, that's the theory. In practice, however, there are millions of servers out there that do not cache NXDOMAIN at all, and just keep querying, over and over and over again, for TLDs that they've already been told don't exist. Microsoft's name server has been known to do this.
At one point, f.gtld-servers.net was seeing millions of repeated queries per hour from the same two .mil servers asking the same question and refusing to accept the NXDOMAIN. For long periods, these two servers were asking the same question multiple times per click of F's timer. That's.. ummm.. Bad.
I suggest that you read the actual CAIDA paper, and the other papers on the subject that Evi Nemeth and others at CAIDA have produced. They *have* thought about how the DNS actually works in practice. You've only thought about how it would work if every implementation worked perfectly, according to your expectations.
Re:Highlight... (Score:3, Informative)
That is hardly surprising, since a lot of servers don't even cache the positive hits.
The report said 70% of the hits were repeated requests. Again this is not too surprising; the root zone caches really well. There are fewer than 200 domains, after all, and only 20 of those see a significant degree of activity. The TLD configurations change so infrequently that the TTL could be set at a month without inconveniencing anyone.
So the 'necessary' traffic for the root servers is negligible. Even with a million odd DNS servers out there each root need see no more than a few tens of thousands of hits an hour.
It makes no real difference, since the roots have to be scaled to survive a sustained DDoS attack for at least as long as it takes remediation measures to kick in. Get rid of all the bozo queries and you still need the same size box, because of the script kiddies.
There are a bunch of changes that could be put in place that would reduce the DDoS problem. First we could follow the proposals of Mark Kosters and Paul Vixie to start using anycast (this looks like it is going ahead).
Another thing we could do is to change the DNS logic so that servers keep records in their cache beyond the TTL and use those as backup if the root or TLD is unavailable. Then even a DDoS that succeeded would have only marginal effect.
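A minimal sketch of that stale-as-backup idea (the TTL value and the query_upstream hook here are placeholders, not anyone's actual implementation):

import time

class CacheEntry:
    def __init__(self, rrset, ttl):
        self.rrset = rrset
        self.expires = time.time() + ttl

cache = {}   # qname -> CacheEntry, deliberately kept around even after expiry

def lookup(qname, query_upstream):
    # query_upstream(qname) asks the root/TLD servers; raises OSError on failure.
    entry = cache.get(qname)
    if entry and time.time() < entry.expires:
        return entry.rrset                            # normal cache hit
    try:
        rrset = query_upstream(qname)
        cache[qname] = CacheEntry(rrset, ttl=172800)  # e.g. a two-day TTL
        return rrset
    except OSError:
        if entry:                                     # upstream unreachable (DDoS?):
            return entry.rrset                        # fall back to the stale copy
        raise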
Fess up... (Score:5, Funny)
Actually, I've always had a theory that Microsoft coined ".msn" because they wanted to get their own top level domain.
So... they looked at one server.. (Score:2, Interesting)
AOL (Score:5, Interesting)
Re:AOL (Score:5, Interesting)
I can't count the number of times I've seen a massive spike in number of "unique visitors" just to look at the hosts and find *.proxy.aol.com filling the whole thing....
Re:AOL (Score:3, Interesting)
Re:AOL (Score:4, Informative)
It is common knowledge that AOL users come through AOL's proxies, and the proxy hostnames contain "proxy" in them, so it should be fairly obvious.
Also, anybody who is running web statistics should know the following things:
1) web statistics are inaccurate
2) proxies screw up web statistics
3) not all proxies are visible
4) refer to 1 and 2
It's a wash (Score:2, Interesting)
Re:AOL (Score:2, Interesting)
Usually, the most important data is the first page hit: WHICH PAGE/SITE REFERRED THE PERSON TO THIS WEB SITE? In most cases, where the person is connecting from is not nearly as important as where they found the link.
An ecommerce example: When showing site statistics, I advise my ecommerce clients to put their money in the referring sites that yield the highest 'bought a product' ratio.*
Once in a while, the client will be awed by the AOL total hits statistics and want to put money there. I then explain that they will most likely increase their bandwidth use with little return and have to pay AOL for the privilege.
A site that depends on banner ads example:
Put money in referrer sites where the referred person viewed the most pages AND clicked the most ads per person. Accurate statistics for that are easy with PHP [php.net] scripting (or your language of choice). Bonus points for using a script that counts returning visitors and compares that to where they were originally referred from.
* It has crossed my mind that I could be mean/funny and generate 'how many attempts it takes an AOL user to fill out a form correctly' statistics.
Re:AOL (Score:2, Informative)
Re:AOL (Score:3, Interesting)
Re:AOL (Score:3, Interesting)
S
Re:AOL (Score:5, Insightful)
As an ISP, why not respect a TTL? Because many DNS zones set their TTL values too low (5 minutes) when 24 hours would accomplish the same thing (except in rare circumstances -- if you're planning on moving, set it low a week before, do the move, then reset to a high TTL).
As a DNS administrator, it's a pain to keep changing your TTL and the ISPs don't respect it anyway.
It's useless to have a low TTL because the ISPs generally don't respect it because it's generally set too low because the ISPs don't generally respect it because....
S
That's not the same problem... (Score:5, Informative)
I can suddenly see lots of slashdot users thinking-- oh, I should fix my firewall, I have all these DNS requests; but that's normal operation for a client workstation. Your firewall would be broken only if all your DNS queries failed, and you'd know it pretty fast if that were the case.
Re:AOL (Score:2)
But do all the proxies do those lookups against different DNS servers? I doubt it. You could have a large number of proxies using just a single DNS server, although you would probably use two or three for redundancy. But the redundant servers could still query each other first and only go outside if the result was missing or the other server was down. And then again, you shouldn't need that many TLD lookups no matter how stupidly you do it.
DNS queries are for lamers (Score:2, Funny)
Re:DNS queries are for lamers (Score:3, Funny)
bbh
Re:DNS queries are for lamers (Score:4, Interesting)
What with the private subnets you can't get to, and corporations buying up whole classful IP blocks, you're not going to need to map every single IP to a set of names.
Say you need to map 2**30 names. Give each name 256 bytes to list the hosts using that IP. You've just used 256GB. A lot, yes, but I'm willing to bet at least one person reading this has that much storage dedicated to MP3s.
Re:DNS queries are for lamers (Score:2)
It may very well be faster to use DNS.
dild@tr0n
Re:DNS queries are for lamers (Score:3, Interesting)
Re:DNS queries are for lamers (Score:3, Informative)
> Say you need to map 2**30 names. Give each name 256 bytes to list the hosts using that ip
I meant to write:
Say you need to map 2**30 addresses. Give each address 256 bytes to list the host names using that ip
OK, so 256 is an arbitrary limit. However, assuming that most addresses won't use all 256 bytes, you should be able to borrow some bytes from entries that aren't using them. The whole thing was meant as a ballpark estimate. I made up the assumption that only 1 IP in 4 is named, too. Give it a few years and I won't need to make these assumptions either.
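Spelling the ballpark out with those assumptions (one address in four named, 256 bytes each):

addresses = 2**32 // 4          # assume only one IP in four is named at all
bytes_per_address = 256         # the arbitrary per-address budget from above
total = addresses * bytes_per_address
print(total, total // 2**30)    # 274877906944 bytes, i.e. 256 GiB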
Re:DNS queries are for lamers (Score:2, Funny)
Oh yeah, well... I do a daily backup of Gopher to punch cards.
.elvis? (Score:2, Funny)
Re:.elvis? (Score:2, Funny)
Re:.elvis? (Score:2, Funny)
Re:.elvis? (Score:3, Funny)
Re:.elvis? (Score:2, Funny)
Yes, but would sites in .emacs only be available to broadband users?
Good thinking, Sparky. (Score:2, Funny)
98% of the DNS queries at the root level are unnecessary. [...] This means that the remaining 2% of necessary DNS queries are probably not necessary either."
Uhh... right, eliminate 100% of the root queries, they aren't needed..
sheeeesh..
Badly written (Score:2, Insightful)
Just my 2p/2¢ worth.
Why... (Score:5, Insightful)
Re:Why... (Score:5, Informative)
Excellent point, and I hope whoever has mod points today will mod the parent up. Your PC is a sieve of information even with nothing more than a web browser and e-mail client. When you install IM applications or, gods forbid, file-sharing applications like KaZaa, the sieve becomes a fount.
I've made a couple of other posts regarding this in the past week or so, to point out that most applications don't need access to port 80, for example. E-mail doesn't need it, and IM programs certainly don't need it. ICQ uses a port around 4000, IIRC, for its message traffic; but it uses port 80 to report usage statistics to Mirabilis and to download banners. So does it really need port 80? Nope--you can save yourself bandwidth and gain privacy by blocking it.
The list goes on, of course; but my biggest gain from firewalling my PC has been the freedom to restrict outgoing traffic.
No wonder these servers have so many problems (Score:5, Funny)
It's no wonder these servers have so many problems - there's thirteen of them! They need a lucky #14 - a Bilbo Baggins for their horde of dwarves. That'll stop those DoS attacks and unnecessary requests right away!
Re:No wonder these servers have so many problems (Score:3, Funny)
It's wrong. The correct version is
"One server to BIND them all!"
99.9% (Score:5, Insightful)
What standard is this based on? My website sucks and is only necessary for my own amusement, but it is similar to my favorite kind of sites on the web. I would use the web a lot less if it weren't for those 99.9% of web sites. Most blogs, for instance, suck and are unnecessary, but at the same time the sum of all the blogs is having a big impact on news outlets and the media.
Re:99.9% (Score:2, Funny)
News you can use (Score:5, Interesting)
"Researchers believe that many bad requests occur because organizations have misconfigured packet filters and firewalls, security mechanisms intended to restrict certain types of network traffic. When packet filters and firewalls allow outgoing DNS queries, but block the resulting incoming responses..."
It's nice to see a story with info I can take and use. This is actually "stuff that matters".
Kudos to the researchers, and now I am off to check my firewall.
Re:News you can use (Score:2, Interesting)
Re:News you can use (Score:3, Insightful)
With a purchased firewall, especially if you can't edit it yourself, I would have to assume (uh oh) that the manufacturer got the basics right, at least. I really don't know of a way to check those. You could try an online port scan like the one sygate.com offers. But your firewall is probably using a "stateful" method, which would allow DNS responses to come back if you initiated the request but block a NEW request that originated outside. So it will probably say your port 53 is blocked.
Re:News you can use (Score:5, Informative)
I am crazy about my home network firewall configuration and, when it is under my authority, about the firewall rules of whatever business employs me at the time.
An important but often left out part of a firewall's configuration is logging. Attempts to do things that should never be done should not just be dropped, they should be logged and then brought to your attention.
Some examples:
If your local network is 192.168.2.128/29, then any outgoing packet that does not have a source address in the range 192.168.2.129 to 192.168.2.134 should be dropped AND logged. Someone on YOUR network is either stupid or trying to spoof someone!
The same thing goes for ports and protocols that should not be outgoing on your network.
Okay, so getting probed on TCP 80 is getting annoying now that you are logging everything that is not allowed. Fine, explicitly drop it without logging.
Conform to RFC1918 -- don't route private IP space to or from the Internet. Route it to a black hole instead.
Also, conform to BCP38 ftp://ftp.rfc-editor.org/in-notes/bcp/bcp38.txt
After tuning your firewall logging filters, you will find that when new attacks occur or something is up, you notice. Otherwise, you are blind and dumb to what your firewall is doing, which means that you are blind and dumb to what your network is doing.
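The source-address rule above is simple enough to sketch; the /29 and the hosts .129-.134 are just the example numbers from this post:

import ipaddress

LOCAL_NET = ipaddress.ip_network("192.168.2.128/29")   # usable hosts .129 - .134

def egress_ok(src: str) -> bool:
    addr = ipaddress.ip_address(src)
    if addr in LOCAL_NET and addr not in (LOCAL_NET.network_address,
                                          LOCAL_NET.broadcast_address):
        return True                                     # silently allowed
    print(f"DROP and LOG: spoofed or misconfigured source {src}")
    return False

egress_ok("192.168.2.130")   # fine
egress_ok("10.9.8.7")        # someone on YOUR network is stupid or spoofing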
Ignant (Score:5, Interesting)
Is it just me, or is this a description of a reverse lookup? How does that qualify as unnecessary? This is a pretty common step in troubleshooting, and some software does a reverse lookup following a forward lookup to verify that the hostname it gets back is the same one it started with.
Chuckles
Re:Ignant (Score:2, Insightful)
They mean that some software, designed to take a fully qualified domain name as input, *always* looks up the input by DNS, even if someone has typed in an IP rather than a hostname - making the lookup unnecessary.
If it was a reverse lookup it wouldn't just contain an ip (e.g. "1.2.3.4"), it would be "4.3.2.1.in-addr.arpa", that's how reverse lookup works.
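The fix on the application side is a one-line guard before the forward lookup -- a rough sketch, using the standard resolver call as a stand-in for whatever lookup the program actually does:

import ipaddress
import socket

def to_address(user_input: str) -> str:
    # Only hit DNS when the input is actually a name, not an IP literal.
    try:
        return str(ipaddress.ip_address(user_input))   # "1.2.3.4" -> no DNS query at all
    except ValueError:
        return socket.gethostbyname(user_input)        # a real name -> normal A lookup

print(to_address("1.2.3.4"))        # nothing leaves the box
print(to_address("slashdot.org"))   # ordinary forward lookup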
Re:Ignant (Score:5, Informative)
I believe that reverse lookups are identified by an "inverse" status flag in the request header. One can only assume that the authors were not counting this sort of valid query, and were only focusing on the "standard" queries that contained IP addresses. Those certainly would, I think, be rather pointless.
Re:Ignant (Score:3, Insightful)
I believe that reverse lookups are identified by an "inverse" status flag in the request header. One can only assume that the authors were not counting this sort of valid query, and were only focusing on the "standard" queries that contained IP addresses. Those certainly would, I think, be rather pointless.
Ummm, no. "Inverse" does not in any way, shape, or form identify a request for the hostname associated with an IP address.
And the lookups being described are not reverse lookups, either. A 'reverse lookup' for 1.2.3.4 is a query for the PTR RR associated with 4.3.2.1.in-addr.arpa. The queries being described are for the A RR associated with the "FQDN" 1.2.3.4. There is no such TLD as '4'.
Re:Ignant...you've got it all wrong. (Score:5, Informative)
Reverse:
12:59:31.814847 defender.licensedaemon > gimpy.domain: 20091+ PTR? 1.65.0.199.in-addr.arpa. (41)
12:59:31.816003 defender.1029 > arrowroot.arin.net.domain: 19500 [b2&3=0x10] [1au] PTR? 1.65.0.199.in-addr.arpa. (52)
Forward (complete request cycle from defender to gimpy):
13:11:54.760484 defender.globe > gimpy.domain: 47604+ A? www.gtei.net. (30)
13:11:54.761597 gimpy.1029 > dnsauth1.sys.gtei.net.domain: 51438 A? www.gtei.net. (30)
13:11:54.977584 dnsauth1.sys.gtei.net.domain > gimpy.1029: 51438*- 1/3/3 A 128.11.42.31 (167) (DF)
13:11:54.978626 gimpy.domain > defender.globe: 47604 1/3/0 A 128.11.42.31 (119)
DNS & BIND is the first book to use for more info, though.
Re:Ign(or)ant (Score:5, Informative)
To do a reverse lookup, the resolver sends a different request type, asking for a PTR resource record. The form is to put the IP address (or network address) backwards and append .in-addr.arpa. to it.
If you have your own DNS server and watch your DNS traffic, you can see these two effects happening differently.
For a forward (A or MX record) lookup:
Local server queries root server for an A record
Root server responds with NS record for the registry of the domain
Local server contacts registry server for A
Registry server responds with NS records for the domain
Local server contacts the domain's server, which responds with an A record
Local server answers the resolver with the A record.
For a reverse (PTR) lookup, the resolver traverses the netblock providers:
Local server queries the root servers with a properly constructed PTR request (z.y.x.w.in-addr.arpa.)
Root server knows only where major net blocks are allocated, and returns the NS record of a Regional Internet Registry (RIPE, APNIC, etc)
Local server again queries an RIR NS with the PTR
RIR NS knows which ISPs hold which blocks, so responds with the ISP NS record
Local server again queries the ISP NS server, which either has the reverse hostname, or once again returns the NS record of the local DNS server.
The two different types of queries follow different paths, through either the name registries or the netblock providers. This article points out that many resolvers are broken because they send what are obviously reverse lookups out as forward lookups, and then can't deal with the resulting error messages.
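For anyone following along at home, building a proper PTR query name from an address is just string flipping; a quick sketch:

def reverse_qname(ip: str) -> str:
    # Build the PTR query name for a dotted-quad IPv4 address.
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa."

print(reverse_qname("199.0.65.1"))   # -> 1.65.0.199.in-addr.arpa.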
I have often seen broken resolvers repeatedly query DNS servers I manage, possibly because, as the article points out, fucked firewalls allow the requests out but block the responses from getting back to the resolver. It happens so much I just ignore it when I see it; it's not worth notifying the admins because they are usually too clueless to know how to fix the problem.
the AC
Re:Ignant (Score:2)
I think they're talking about a forward DNS lookup with an IP address, which is indeed retarded.
Forward DNS = resolving foo.com -> 12.34.56.78: this works by looking for an A record for foo.com; the A record contains the IP address.
Reverse DNS = resolving 12.34.56.78 -> foo.com: this works by translating the IP address into a name (78.56.34.12.in-addr.arpa), then looking for a PTR record for that name, which will contain a hostname (foo.com).
All domain names actually end in a period, that you usually don't see and don't use, for example "foo.com." or "78.56.34.12.in-addr.arpa."; the trailing dot stands for the root. The root nameservers are technically authoritative for "."; that's the definition of a root nameserver. So, what happens if you try to look up "12.34.56.78."? The dot means that's a FQDN, so you must be trying to do a forward lookup! Think of "78" as the TLD, "56" as the second-level domain, and look up "56.78" the same way you would look up "foo.com". There's no technical reason why a TLD couldn't be a number - alphanumeric characters (and hyphens) are allowed.
So yeah, this is dumb, but it happens more often than you might think. I was going to write a tirade about stupid registrars creating bogus glue records, but I'm not awake enough to do so coherently, so I'll spare you.
Serious question (Score:5, Insightful)
I see this kind of thing all the time on /.--completely unedited, barely literate, rant-style submissions. Why don't the /. editors tone down or eliminate the rhetoric from submissions about otherwise worthy topics, or at least fix the grammar and typos?
I know, I'm going to get blasted for saying this, but I'm convinced it's one of those "little things" that makes /. look to the rest of the world more like a bunch of know-nothing kids typing at each other than a group of technically literate activists with something of value to contribute.
I now return you to your regularly scheduled rant...
Re:Serious question (Score:2)
The only contribution I make because of Slashdot is about $5000 annually to literacy organizations.
Re:Serious question (Score:3)
I'm not trying to go highbrow or anything; I really enjoy the in-jokes and the strong opinions, and there's nothing wrong with them.
It's just that I don't feel a story submission should be full of personal opinion -- that's up to the slashdotters to add in the comments, where it's subject to the ebb and flow of moderation -- or am I missing something??
Incorrect top-level domains (Score:5, Interesting)
Why don't DNS servers keep a list of the correct top-level domains, so they can answer directly without going to a root server? The list is short compared to the information the DNS server already caches, and its contents don't change very often. The list could be downloaded once a day or so from the DNS root servers.
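A hypothetical sketch of that check, with VALID_TLDS standing in for a once-a-day copy of the root zone (the set here is obviously incomplete -- the real list would include the ccTLDs too):

VALID_TLDS = {"com", "net", "org", "edu", "gov", "mil", "int", "arpa"}

def needs_root_query(qname: str) -> bool:
    tld = qname.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld in VALID_TLDS          # False -> answer NXDOMAIN locally, skip the roots

print(needs_root_query("slashdot.org."))      # True, pass it on
print(needs_root_query("workgroup.elvis."))   # False, reject locally
print(needs_root_query("12.34.56.78."))       # False, '78' is not a TLD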
When packet filters and firewalls allow outgoing DNS queries, but block the resulting incoming responses, software on the inside of the firewall can make the same DNS queries over and over, waiting for responses that can't get through
Why the hell does a firewall accept outgoing queries for black-listed domain names if it is configured to block the responses to those queries? This seems like a serious misconception to me.
JB.
Re:Incorrect top-level domains (Score:5, Informative)
This is actually an excellent idea, and one that people who use opennic [opennic.org] already follow. The root zone "." at OpenNIC is set up to be slaved, so my DNS server downloads a copy of the root zone, which has the information for all the top-level domains. If the root zones get DoSed I don't care, because I don't use them anymore. Everyone should use OpenNIC. It is the Internet-friendly thing to do. :)
OpenNIC (OffTopic) (Score:2)
As long as they don't support new.net, I'm switching over
Another Truism Smashed... (Score:2, Offtopic)
Not really "broken" queries (Score:5, Interesting)
And that's a problem? My understanding was dealing with this sort of thing was exactly the purpose of the root DNS servers. If every ISP's DNS server was pre-configured to recognize valid and invalid top-level domains, you could just set them up to go straight to the specific DNS servers handling those domains (.com, .net, .org, etc.) There would be no need for a root-level system.
The argument for allowing this kind of cracked query through to the root server is that it makes it easy to add new domains (.elvis, .corp, what have you) without forcing everyone to reconfigure their DNS boxes for each new top-level domain.
Re:Not really "broken" queries (Score:5, Informative)
Re:Not really "broken" queries (Score:4, Informative)
DNS2 (Score:4, Interesting)
Really, we should have some sort of gnutella-like system for distributing zone files. The problem with DNS is that it was designed a LONG time ago before the more recent advances in P2P networks.
There shouldn't be much argument at this point that we need DNS2 - the current system is vulnerable to attack.
The problem is that, if you distribute zone files (or pieces of zone files) among a loosely-connected network, then you will need to establish trust. These zone files would have to be signed, and the certificate authority then becomes the bottleneck.
It hurts my head.
Re:Not really "broken" queries (Score:2)
I do this all the time too. I wish I had a separate Google search box, but I'm too lazy to keep installing whatever add-on would give it to me every time I upgrade Mozilla, and besides, it would waste vertical space.
And yes, you're right, this is one of the causes of those bogus queries.
DNS Needs a redesign.... (Score:2, Interesting)
Re:DNS Needs a redesign.... (Score:5, Insightful)
I run a DNS server, I've looked at DNS packets, and every time I ask the Internet to tell me who the heck slashdot.org is and it comes back with an IP address I'm amazed. My network asks strangers for help and those strangers say: Hey, try here. Bam! Slashdot.org pops up in my browser.
You cannot "combine" DNS, DHCP, and routing all into a single protocol. Hell, get three network engineers together sometime and try to get them to agree upon the best interior gateway routing protocol... EIGRP, OSPF, RIP.
Routing information is extremely different from domain name information. The two have nothing in common other than IP addresses. You have to include not only information about who your neighbors are, but also what type of links are between you and your neighbors and how congested those links are. Now, what about your neighbors' neighbors? Oh, we'll track that too, and also keep a set of tables that show us the next two best reconfigurations should any of the links stop working. Unless you're just talking about RIP for routing.
DHCP on the other hand is about getting clients configured for a network. They can then use DDNS to update their DNS record in a local DNS server. DHCP can do much more than just say: Here's your IP. It can also tell a client: here's where you should get your operating system from, and here's the voice over IP gateway, and here's the server where you should send your management info to, and here's the best local printer to use. Most people don't have clients that can handle that type of information, however.
It's not just "if it's not broke, don't fix it" this is a case of "it frelling works great, keep your hands off of it or I'll kick you in the jimmy."
Where's the fix? (Score:2)
"About 70 percent of all the queries were either identical, or repeat requests for addresses within the same domain."
What I don't see is solid suggestions for improvement, except for indirect suggestions that name server operators clean up their act. Perhaps the root servers could be made smarter, or fronted by caches, so that repeat requests get answered before a root name server proper has to handle them. Maybe the root servers should just refuse to honor the most common unnecessary queries. That might set off alarms in the lower-level DNS servers, which could get some real action across the board.
How about (Score:2, Interesting)
1. A broken DNS server sends a root server a query for a name under a non-existent top-level domain.
2. Root server notices it's of the 'non-existent top-level domain' variety.
3. Root server sends back information pointing to an IP that serves a web page with a nicer version of 'either you clicked a FrontPage-created link, you are a monkey banging a banana on the keyboard, or your ISP's administrators don't have a clue'.
Advantages: It'll embarrass ISPs. It'll cut down on the traffic to the root servers.
Disadvantages: It'll only be noticeable with web queries.
DNS Moderation (Score:5, Funny)
The root servers give say 50 karma points to each IP address issuing a query.
If the query is unnecessary, it gets modded "-1 redundant".
When karma hits 0, it stops responding to further queries.
DNS eventually stops working at that site, admin pulls head out of ass and fixes the problem causing the redundant DNS queries.
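Joking aside, that's more or less per-client rate limiting. A toy version of the karma scheme exactly as described:

from collections import defaultdict

karma = defaultdict(lambda: 50)    # every querying IP starts with 50 points
seen = defaultdict(set)            # questions already answered, per client IP

def answer(client_ip: str, qname: str) -> bool:
    if karma[client_ip] <= 0:
        return False                     # karma gone: stop responding entirely
    if qname in seen[client_ip]:
        karma[client_ip] -= 1            # modded '-1, Redundant'
    seen[client_ip].add(qname)
    return True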
One factor... (Score:5, Informative)
Re:One factor... (Score:2, Insightful)
We're talking about the root nameservers here, not the server that handles your own domain.
Basically, me lowering the TTL on my domain doesn't cause DNS servers to forget which machines are authoritative for all the TLDs.
Original story... (Score:5, Informative)
It obviously seems to be a lot of junk traffic, but the only parts we can definitely call bad requests are parts 3 and 4 from the chart. Bad spellings must still go to the root, since such domains might actually exist!
It would be nice to analyze the 70% repeated or identical queries; probably a lot of that traffic can be accounted for (or else there are a bunch of administrators out there who need a good manual on BIND).
In other news ... (Score:2, Funny)
Let's go a level deeper (Score:2, Insightful)
2. The amount of time it takes to set up DNS correctly and efficiently with the existing products, especially BIND, is a lot more than it takes to just get them functioning.
3. The research would have been more interesting if they had gone and looked at, say, 1,000 random requestors who were doing things wrong and found out why and how they were screwed up.
4. It would be nice if local DNS servers had a list of valid top-level domains so that they could kill requests to non-existent ones.
THAT would be stuff that matters!
Another great source for broken DNS. (Score:5, Interesting)
Linux, too, has some issues here. Obviously misconfigured DNS servers will always be a problem, but distros like Red Hat ship IPv6 support compiled into the BIND RPM, and this results in an IPv6 (AAAA) query followed by an IPv4 (A) query for every request.
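You can see the doubled lookups from the client side too: any code that calls getaddrinfo with an unspecified address family asks for both record types (a rough illustration -- what actually reaches the wire still depends on the local resolver configuration):

import socket

# AF_UNSPEC means 'give me both AAAA (IPv6) and A (IPv4) answers', which is
# where the doubled query load comes from on an IPv6-enabled stack.
for family, _, _, _, sockaddr in socket.getaddrinfo("slashdot.org", 80,
                                                    socket.AF_UNSPEC,
                                                    socket.SOCK_STREAM):
    print(family, sockaddr)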
Unnecessary Queries? (Score:3, Interesting)
This is somewhat of an invalid metaphor for both the way DNS works and the way computer caching works. Pretty much every local DNS server (unless my information is wrong) has some sort of caching system of varying efficiency. The problem is that, unlike humans, who are more likely to remember things if they are repeated, a cache usually just consists of a series of entries that can quite easily be overwritten; older entries will be evicted if they aren't refreshed, or caching would never work for new, frequently accessed sites. It's quite easy to get an access pattern that pushes even the most frequently accessed entries out of the cache, especially on a server with a great many users. By providing different servers for each chunk of users you can diminish this problem, but then you'll get requests from each server. DNS is an ugly system because it does an ugly job.
I tried to google! (Score:2)
Let's ReCap!! (Score:2, Interesting)
That means that you just explained and wished for the Internet to go away...
or... you somehow figured out how an end user can magically come up with the IP for a host name out of thin air. Go you. You're a millionaire.
But is this really a problem ? (Score:2, Interesting)
Thus the 98% DNS Queries might be needed for only a minority of connections (I am assuming that Web Traffic is the bulk of Internet Traffic here).
Wait a minute, you don't understand the article (Score:4, Informative)
A DNS query for an IP address is a *BAD REQUEST*, contrary to what some of these other posters have said. Asking a root server to resolve anything in the first place is bad -- they should only be asked for NS records -- and in the second place, an IP address is not a valid domain name (unless ICANN has surreptitiously added 256 new top-level domains, namely the numbers 0 through 255).
Most networks that I've seen are badly broken this way. The usual problem is that the network in question may use private address space (192.168.1.0/24 for example) but fail to install reverse DNS for those addresses, causing delays and other problems when machines try to get the name associated with their IP address or that of a local machine connecting to them. Yes, you heard right -- if you use any of the 192.168.x.x, 10.x.x.x, or 172.16-31.x.x addresses, you are broken unless you install DNS to resolve those addresses! This also goes for any IP netblock in general, although most ISPs these days are setting up dummy records for their unused IP space that will cover their customers' allocations OK.
Acronym soup (Score:3, Funny)
It's the SPAM (Score:3, Insightful)
This doesn't mean that even these queries shouldn't be handled better -- just that SPAM lookups cause a bunch of 'em.
Re:The real root of the problem... (Score:4, Interesting)
The problem with DNS (and SMTP) is that they are protocols developed during a time when everyone on the Internet was operating in a cooperative mode. Now that there is a proliferation of spam and DoS attacks, these old protocols break down, because they were not developed with security in mind.
DNS will not go away. But the protocol will probably change at some point.
Re:The real root of the problem... (Score:2)
Oh well. The avalanche has started; it is too late for the pebbles to vote.
Re:The real root of the problem... (Score:5, Funny)
Huh? IPv6 a cure for DNS? (Score:5, Funny)
Huh?
Maybe I've been asleep at the wheel when it comes to all of the advantages of IPv6, but how on earth does it alleviate the need for a functioning DNS service?
Do you imagine that it will somehow be easier for people to remember IP addresses that are 128 bits in length than it is to remember them in their current 32 bit dotted decimal form?
I guess these will be what we have to look forward to in your DNS-free world of the future:
Riiiight.
Re:Huh? IPv6 a cure for DNS? (Score:3, Funny)
3ffe:10d7::dead:beef:cafe:babe
and
2001:234d
what exciting words and numbers can you come up with 1234567890ABCDEF!
Re:!!! Incoming News Flash !!! (Score:2, Troll)
But the remaining 0.01% of monkeys need DNS to communicate using the Infinite Monkey Protocol Suite [ietf.org]
Re:Oh, give me a break... (Score:3, Insightful)
You aren't. What you described is how it should happen. But once you have the NS for dot-org, which you received in the reply to your first query for slashdot.org, you need not go back to the roots with any dot-org query name until the NS records expire from your cache.
If your nameserver repeatedly hits the roots for dot-org names, even after it has received the dot-org NS records, that is broken and those queries are unnecessary. Your nameserver should be hitting the dot-org servers, not the roots.
Re:Oh, give me a break... (Score:3, Informative)
This is no longer the case, and I can't paste an example because of Slashdot's LAME lameness filter. But run this command:
dig @a.root-servers.net slashdot.org mx
You will get back a delegation to the servers for org, not the MX record itself.
This is pathetic and typical of /. (Score:4, Informative)
There is nothing malformed about a query for a name under an unknown TLD.
The elegant part of the design was to define the protocol to look up unknown TLDs and unrecognized TLDs at the roots. It didn't anticipate a few million monkeys typing search terms into browser address lines.
The fault for the excess lookups lies with the application programmers.
Re:Bad analysis logic? (Score:3, Informative)
DNS servers need to hit the root servers once for each TLD; until that record expires, they should from then on be going straight to the .com servers or whatever. That's how they're counting 'redundant' requests: a server asking 'where's the .net server?' when it just asked that question ten minutes ago.
The people doing the study neither know nor care whose IP address was making the request, just that the same record was requested more than once from the same IP within a certain amount of time.
Granted, firewalls and whatnot may screw this up, but I have to suggest that multiple people behind a single firewall should not be running multiple DNS servers anyway, that's just a waste of bandwidth and pretty stupid.