Securing DNS From The Roots Up
jeffy124 writes: "This article at ComputerWorld tells the story of how ICANN would like to replace the root DNS systems with secured servers. Lars-Johan Liman, one of the root operators, spoke about the concept at ICANN's annual meeting today. He discussed how the world's current redundant DNS system is vulnerable to DDoS attacks and to yet-to-be-discovered root holes in BIND that could ultimately undermine the entire Internet by taking away the name-to-IP mappings that just about everyone relies on."
News flash! (Score:5, Funny)
Some people just never get the news....
Re:News flash! (Score:2)
Remember the hole in BIND from the beginning of this year? Big as a truck? If I recall correctly it was a TSIG-related buffer overflow that made it possible to run code at the same privilege as BIND (often root)...
I never understood why BIND would run as root -- it will drop privileges once it attaches to port 53. Is this just another case of a lazy admin?
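For reference, the bind-then-drop pattern being described looks roughly like this (a minimal Python sketch, not BIND's actual code; the "nobody" account is just an illustrative choice -- only binding the privileged port needs root):

    import os
    import pwd
    import socket

    # Bind the privileged port while the process is still root...
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))

    # ...then drop to an unprivileged account before touching any
    # untrusted data. The group must be dropped first, because after
    # setuid() we would no longer be allowed to change it.
    nobody = pwd.getpwnam("nobody")
    os.setgid(nobody.pw_gid)
    os.setuid(nobody.pw_uid)

    # From here on, an exploited parser runs as "nobody", not root.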
My entire world is running amok (Score:4, Funny)
Re:My entire world is running amok (Score:2)
Um, that was just as funny as the parent comment, which was moderated up. Oh well.
And once you lock those down... (Score:3, Insightful)
Why still running on BIND? (Score:5, Interesting)
Re:Why still running on BIND? (Score:2)
There probably aren't a lot developed, and you don't hear about many of the ones that are developed, because it's not a very sexy application like, say, Napster.
Re:Why still running on BIND? (Score:5, Interesting)
Re:Why still running on BIND? (Score:5, Informative)
Read his license [cr.yp.to] and see for yourself.
Re:Why still running on BIND? (Score:2, Informative)
"You can distribute your patches for other people to use."
"You may distribute a precompiled package if installing your package produces exactly the same files, in exactly the same locations, that a user would obtain by installing one of my packages listed above"
"You may distribute exact copies of any of the following packages"
It seems DJB is A-OK with the raw source or raw binaries being on the CD. He's also OK with patches and patch distribution.
So here's how to "get around" his license if you are a distributor. (This isn't all that bad, except that distributors generally have better things to deal with than "attitude" -- which DJB appears to have against software distributors, and which is why his software is doomed to fail in the market, even if it won't fail in the computer):
- djbdns
[ ] Install virgin djbdns binary
[ ] Install virgin djbdns source
- patches
[ ] Install binary patch for djbdns for correct operation with your platform
[ ] Install source patch for djbdns for correct operation with your platform
Problem solved. He's happy, you're happy, I'm happy. We're all a happy family. Well, maybe DJB isn't happy, but then again maybe he should get back to coding more good software rather than writing software licenses (which it appears he isn't particularly good at... I'm no legal expert and I saw that gaping hole a mile away).
Can't we just all get along?
[IANAL. If you wanted legal advice, you looked at the wrong post.]
Re:Why still running on BIND? (Score:2)
I thought so too once, but then I watched him bang his head against the wall on one of the OpenBSD mailing lists.
Repeatedly.
Someone who hits their head that often surely has a few problems...
djbdns has NO license (Score:2, Informative)
The only appearance of the word license on that page is in a quote from a RedHat employee, not DJB. It would seem impossible to me to grant a license without specifically stating that you are granting a license.
The inability to change pathnames is a bunch of hooey. I've seen packages included with a major distribution that could have been modified to use paths that make more sense, but have been packaged with the author's defaults instead.
xjosh
DJBDNS doesn't obey many RFCs, not OSS either (Score:5, Insightful)
You can't do zone transfers using djbdns, for one thing. DJB thinks that zone transfers are evil and has his own method for doing the task (rsync over ssh, I believe), but whether they're evil or not is beside the point. Like it or not, zone transfers are part of the core DNS protocols, and any proper successor to BIND must implement them all. A standards war with the IETF is not something I want bundled with a name server I deploy. If Bernstein truly thinks his way is better, let him write an RFC describing his idiosyncratic methods and get the IETF to ratify it as a core standard. The way he operates reminds me more of how Microsoft handles standards than anything else.
Besides, djbdns is deficient in a far more important way (for me, and I hope for a lot of people here on Slashdot): it's actually proprietary software with a limited license for gratis use. It's not Free Software or even Open Source, not by any reasonable definition of the term. There is no license distributed with his programs, and absent a license you have NO RIGHT to share, study, or change Bernstein's code!
axfrdns, part of djbdns, does zone transfers (Score:3, Informative)
http://cr.yp.to/djbdns/axfrdns.html
This supports outgoing transfers. Incoming transfers are a possible security risk (NO authentication happens in most cases, other than IP address checking, IIRC), making this a prudent decision, IETF or no.
BBK
Re:DJBDNS doesn't obey many RFCs, not OSS either (Score:2)
You can't do zone transfers using djbdns for one thing. DJB thinks that zone transfers are evil, and has his own method for doing the task (rsync over ssh I believe)...
The point he's trying to make there is that there is already an easy, secure way to transfer data over the Internet. Rather than inventing a new protocol (which may have bugs), why not just use existing tools that are probably already on your system. (I use rsync and ssh all the time for other tasks.)
The "DJB Way" is to make small modular programs, which is very much in the original Unix tradition. The power of this comes in the ability to quickly create new applications out of exisiting components.
Re:DJBDNS doesn't obey many RFCs, not OSS either (Score:2)
But even BIND deviates in many places from a standard that was largely written to describe its implementation. See http://cr.yp.to/djbdns/notes.html [cr.yp.to] for some examples.
There is no license along with his programs, and absent a license you have NO RIGHT to share, study, or change Bernstein's code!
You didn't supply a license with your post, so may I presume your attorneys will be contacting me for quoting part of your silly diatribe?
Re:Why still running on BIND? (Score:2)
Except that djbdns is in no way a drop-in replacement for BIND.
Re:Why still running on BIND? (Score:2)
Re:Why still running on BIND? (Score:2, Informative)
Yes, there are independently coded closed-source solutions that perform better and are presumably better all around (Nominum [nominum.com] has written a program that they use as the basis for their Global Name Service, and it contains no code from BIND).
However, these are closed-source implementations, and the folks who operate the various root servers do so on a volunteer basis and are not interested in just handing everything over to some company that operates a "black box" -- regardless of who that company is.
Indeed, it's the root server operators that have really spared us from the worst of the damage that ICANN has tried to inflict upon us. For the stupidest things, the root server operators simply said "Not only NO, but HELL NO!", and ICANN was forced to back down.
Re:Why still running on BIND? (Score:2)
Well, if you've been working on your own, maybe you could release it as open source?
So, why does everyone continue to use it? Or better question, why hasn't someone written a better alternative?
Sounds like you're considering writing one. What's stopping you? :-)
Re:Why still running on BIND? (Score:2, Informative)
Lots of people have:
DJB DNS [cr.yp.to]
Custom DNS [sourceforge.net]
MaraDNS [maradns.org]
Posadis [sourceforge.net] (though I've no experience with it yet)
The list goes on and on... hit Freshmeat.net [freshmeat.net] for some possibilities.
Re:Why still running on BIND? (Score:2, Interesting)
(As a side note, that feature is part of the reason I've been researching DNS. And yes, my personal project does include SQL backed lookups. No it is not ready for anyone to look at.)
Re:Why still running on BIND? (Score:2)
Re:Why still running on BIND? (Score:3, Interesting)
The official BIND 4.x and 8.x trees are full of bad code that will probably yield serious bugs down the road. OpenBSD's audited BIND is based on 4.x, and as such lacks some important features that you really need. Theo and the gang have said that they have no intention of auditing the 8.x code, because it's so sloppy, and because they don't feel it adds enough to the 4.x tree.
I'm paranoid about the MySQL enabled version of BIND, because I'm not confident that it would be patched quickly in the event that an exploit was discovered.
I've been meaning to try to tie djbdns into a web application that could modify zone data -- the command line programs seem like they could be called easily by php, although I haven't actually installed it yet, so I could be all wet. If you don't do many modifications, it might be a reasonable compromise between a full blown SQL system and something more lightweight and secure.
A fully featured SQL database could probably keep itself in sync with djbdns with triggers, so long as you weren't pounding on the data and changing it frequently. That would eliminate SQL bottlenecks on the serving side, although I have no idea how it would stand up under heavy zone editing.
I wish the government would throw a couple of million bucks at this problem, and open source all the results. It would make the world a much better place.
Re:Why still running on BIND? (Score:2)
Like Dynamic Updates? (See RFC2136 - http://www.rfc-editor.org/cgi-bin/rfcdoctype.pl?l
I'm using dynamic updates with Bind 9.1. Works great.
Re:Why still running on BIND? (Score:3, Informative)
What on earth for? SQL is a general purpose query language designed to maximize flexibility over performance. SQL lets you do all sorts of complex nested subqueries and joins which simply aren't needed for DNS, so why have the overhead? It all comes down to using the right tool for the job. And in this case, a fast non-SQL database (such as Berkeley DB, for example) is far more suited to the job. Too many people equate the term "database" with "SQL", when that's just one of the options. Often it's the right choice, but sometimes it isn't, and this is one of those times.
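To make the distinction concrete, here is a minimal sketch using Python's dbm module as a stand-in for Berkeley DB (the record key format is invented): a DNS answer is a flat key-to-value lookup, which is exactly what such libraries do, with no query language in sight.

    import dbm

    # Store and fetch a record with no SQL machinery at all.
    with dbm.open("records.db", "c") as db:
        db["www.example.com/A"] = "192.0.2.10"   # illustrative record

    with dbm.open("records.db", "r") as db:
        print(db["www.example.com/A"].decode())  # -> 192.0.2.10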
Re:Why still running on BIND? (Score:2)
Because it makes maintenance and understanding what is going on simple.
I would like to see a nice SQL backed system.
Most of SQL is utterly irrelevant to DNS management. Effectively you'd end up with an inefficient, heavyweight system.
Re:Why still running on BIND? (Score:3, Interesting)
I've had to write a program that validates that all our hosts are still resolving properly, because every few weeks it drops a bunch of records. The administrators of the DNS have no idea why it happens; all they can do is manually re-add the addresses through the GUI system.
Re:Why still running on BIND? (Score:2)
I've actually tried loading up BIND with hundreds of thousands of domains and records. If you have enough RAM, it doesn't take a mighty machine to answer large numbers of requests. That is not what is broken with it. At present, the strategy is to refresh the information in memory on request by an administrator. Otherwise, don't check, serve requests. This is as fast and efficient as you can get. SQL would break that.
And it only takes a couple of hours to write perl code that extracts info from a SQL database, dumps it into flat files, and kicks the name server. It's a common naive mistake to think that putting everything into a database is a good idea.
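That dump-and-kick job amounts to something like the following sketch (Python instead of perl; sqlite3 stands in for the real database, and the table, file name, and reload command are all illustrative):

    import sqlite3
    import subprocess

    conn = sqlite3.connect("dns.db")
    rows = conn.execute("SELECT name, ip FROM hosts ORDER BY name")

    # Dump the records into a flat file the name server can load.
    with open("db.example.com", "w") as zone:
        for name, ip in rows:
            zone.write("%s\tIN\tA\t%s\n" % (name, ip))

    # Kick the name server so it picks up the new file (BIND 8's ndc
    # shown here; a SIGHUP to named accomplishes the same thing).
    subprocess.run(["ndc", "reload"], check=False)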
Re:Why still running on BIND? (Score:3, Informative)
Indeed, if you want to support TCP under tinydns, you have to configure an optional program called "axfrdns", which was intended to handle zone transfers, but also happens to share the same database as tinydns, and can handle generic TCP queries.
The problem is that if you make a mistake and munge the database and then rsync or rcp that to the backup servers, you're totally hosed. Contrariwise, if you use the standard zone transfer mechanism, then the zone transfer should fail if the master is munged, and the slaves should keep a good working copy for a while and give you time to notice that the master is munged and needs to be fixed.
While this is not the recommended mode of configuration, some sites don't have the luxury of having separate authoritative-only and caching/recursive-only server(s), and need to mix them both on one machine (or set of machines). With the BIND 9 "view" mechanism, this is relatively easy to do. With djbdns, this is impossible.
Unfortunately, as time goes on, more and more people are doing things like IPv6 and IPSec-based VPNs, or simply care about being able to cryptographically prove that their servers are handing out the only correct information and that clients can verify this fact (think: electronic banking). These kinds of features are going to become ever more commonplace.
Note that, with the advent of BIND 9, you can create a caching-only server that will validate cryptographically signed records, and all clients can benefit even if they do not themselves implement any of the new DNSSEC features.
So, let's look at some of these bugs:
Benchmarks published by Rick Jones [hp.com] have clearly shown that BIND can scale up to at least 12,000 DNS queries per second, and there is every indication that BIND 9.2 will be able to go considerably higher. The best benchmarks available for tinydns indicate that it can handle at least 500 queries per second, but that is the highest number reported. Other people on the bind-users [isc.org] mailing list have indicated that they have performed their own (as yet unpublished) benchmarks of tinydns, and that it had notable performance problems that BIND did not suffer.
The best published benchmarks from the author for dnscache report a query handling rate of less than one million records over a 4.5 hour period of time, which works out to an average of less than sixty-two queries per second. Even if you look at numbers of queries per CPU second, the best numbers they can provide are 13.7 million queries over a four week period of time with 128 minutes of CPU time used (an average of slightly less than 1784 queries per CPU second).
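For the curious, those per-second figures fall straight out of the published totals; a quick back-of-the-envelope check:

    # dnscache benchmark arithmetic from the numbers quoted above
    print(1_000_000 / (4.5 * 3600))    # ~61.7 queries per second
    print(13_700_000 / (128 * 60))     # ~1784 queries per CPU second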
Compare this with the requirement from RFC 2010 "Operational Criteria for Root Name Servers" [faqs.org] (since obsoleted by RFC 2870 "Root Name Server Operational Requirements" [faqs.org]) that the machine and software in question be able to handle at least 2000 queries per second, and be scalable to levels higher than that. Indeed, recent reports have indicated that a.root-servers.net (considered by many to be the "primary" root nameserver) is currently handling around 12,000 DNS queries per second at peak.
Preliminary benchmarks published on the bind-users mailing list have indicated that, on the same hardware, there is little or no performance benefit to using dnscache as opposed to BIND 9.1.2, and when these tests are re-run with BIND 9.2, I'm sure that it will come out even faster.
For example, he makes a big point of tinydns being better than BIND because it still answers queries while the process is starting up. While previous versions of BIND would not answer queries during startup, this is no longer a problem with BIND 9.
Dan also makes a great deal of the fact that the djbdns tools run as a user other than root, and in chroot() environments. While the "monolithic setuid root" situation was an issue with older versions of BIND, even more recent releases of BIND 8 could easily be run as a non-privileged user in a chroot() environment, and this is the preferred method of running BIND 9.
Contrariwise, one of the legitimate big complaints about older versions of BIND is that they implemented zone transfers in a separate program. If the database was large, then the fork()/exec() overhead was large, and the system could seriously thrash itself to death as it copied all those pages (for systems without copy-on-write), only to immediately throw them away again when it fired up the named-xfer program. With BIND 9, this problem is solved by having a separate thread inside the server handling zone transfers, and no fork()/exec() is done. However, tinydns/axfrdns goes back to the fork()/exec() model that was so greatly despised.
Suffice it to say that there is absolutely nothing that djbdns does that I believe can't be done at least as well (or considerably better) with BIND, and there are no security benefits it provides that cannot be provided at least as well (or much better) by a proper installation of a modern version of BIND.
I believe in the "security through diversity" scheme as much as anyone, but I'd take root nameservers running a program written in Bourne shell over djbdns. Hell, I'd rather fall back to using HOSTS.TXT than use djbdns.
Unfortunately, the other alternative of DENTS [dents.org] is also unsuitable for use as a production nameserver.
Show me something that is sufficiently better than BIND (and open source), and I'm sure that everyone will quickly gravitate towards it. Until then, BIND is the best we've got.
Re:Why still running on BIND? (Score:2, Informative)
By default, tinydns does not hand out referrals to questions it is asked about zones it does not control. I believe that this violates the spirit of the RFCs, if not the letter.
Please indicate where you think this breaks the RFCs.
By default, tinydns does not support the use of TCP at all. This most definitely violates the spirit of the RFCs, as well as the letter (if a DNS query via UDP results in truncation, you're supposed to re-do the query using TCP instead).
Truncation only happens when the reply doesn't fit in 512 bytes. As the administrator of a DNS server, you're in control of the data you publish. If you want to be stupid and publish data that doesn't fit in 512 bytes, you have the possibility to do so. It's just FUD to say that djbdns doesn't support TCP; the nice thing is that if you don't need TCP service, you don't even have to install it.
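For anyone who hasn't poked at the wire format: the fallback the RFCs prescribe hinges on a single header bit. A rough sketch of the client side (the server address is invented, and no real resolver is implemented quite this minimally):

    import socket
    import struct

    def dns_query(name):
        # Minimal A-record query: header + question section (RFC 1035).
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode()
                         for p in name.split("."))
        return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

    query = dns_query("www.example.com")
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(query, ("192.0.2.1", 53))
    reply, _ = udp.recvfrom(512)          # the classic 512-byte UDP limit

    if reply[2] & 0x02:                   # TC ("truncated") bit set?
        # Re-ask over TCP, which prefixes each message with its length.
        tcp = socket.create_connection(("192.0.2.1", 53))
        tcp.sendall(struct.pack(">H", len(query)) + query)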
The problem is that if you make a mistake and munge the database and then rsync or rcp that to the backup servers, you're totally hosed.
This is entirely untrue, and shows that you've never even used djbdns. If you make a mistake in your data, you'll get a nice notification of that fact and nothing will stop working, including slave DNS servers. When you correct your mistake, the new data is instantly used and replicated to your slaves. Unlike BIND, which won't load a zone that has errors in it -- which means BIND will stop publishing data about that zone.
Without a patch from a third party, tinydns does not listen to more than one IP address.
Or you could just run multiple instances of tinydns, which costs almost nothing, since multiple tinydns instances can use the same data.cdb file. And even several running tinydns processes consume far fewer resources than a single BIND.
Without a third-party patch, tinydns does not support standard SRV records.
Entirely untrue. tinydns supports all (even not-yet-defined!) types of records. Unlike BIND, which barfs when an unknown record type is fed to it via a zone transfer.
...reality he is the one that gets to define what he accepts as a "bug", and has repeatedly demonstrated a tendency to openly refuse...
When a dispute about the bounty (which is for security holes, not bugs in general) occurs, it will be reported on his website. Since there isn't any dispute mentioned, you didn't even try to report one, did you?
Lots of people are running one or more programs of the djbdns suite and are really happy with them. If you want to use another program, that's perfectly fine, of course. It's not fine when you start talking nonsense about a product that you've never even used.
Re:Why still running on BIND? (Score:2, Informative)
See here [cr.yp.to]. kthx.
register.com's nameservers (Score:4, Interesting)
Is there anyone here knowledgeable about this who can comment on a few things?
I'd love to see (more closely) another implementation of the DNS system other than the 3 or so commonly found.
Re:register.com's nameservers (Score:3, Interesting)
Just my $.02,
davidu
Re:EveryDNS.Net looks great - mod parent up! (Score:2)
Actually, David Weekly and the California Community Colocation Project are hosting one of my new servers [everydns.net] at their space in Hurricane Electric's colo. So at the moment I'm good, but it's always nice to get a sponsor, especially at the rate I'm growing.
thanks for caring,
-davidu
Re:register.com's nameservers (Score:3, Interesting)
I would probably look. I've leafed through the source to BIND to see how they do certain things. I doubt I would commit to fixing X bugs, but if I found any, I would submit patches. I have in the past to other people's work.
Does it really matter one way or another, or are you just throwing out buzzwords with SQL and backend?
Any chance of licensing it out even without the source?
Yeah, it would be neat if it would. I've been looking for a good nameserver that holds its zone information in my database. I haven't found a good one for the hooks in BIND 9 yet, and I would be willing to ditch it if I found a decent replacement. I am also curious why they wrote their own, and I can see this being one of the reasons.
Since it's going to be used on root servers, I hope not.
I didn't realize they ran it on any of the root servers. I just figured they ran it on some of the authoritative servers that they run. Those are the same servers that I update through their web pages rather regularly. Actually, I didn't even realize that register.com ran *any* of the root servers. Which ones do they run?
No need to comment on the last two.
Yeah, trolls aren't known for their stamina.
Homogeneity is bad (Score:2, Insightful)
We need another DNS server with the (relative) standards compliance and scalability of BIND, so that we could have some other server software running on some of the root servers. Unfortunately, all of the alternatives I know of don't scale to that volume of transactions, aren't nearly as proven as BIND, and many have standards compliance issues worse than BIND's.
Re:Homogeneity is bad (Score:2, Interesting)
Re:Homogeneity is bad (Score:3, Insightful)
DNS? Ha! (Score:5, Funny)
Re:DNS? Ha! (Score:2, Funny)
Real men don't need no wussy ethernet cards either. A voltmeter and a battery are all you need.
Re:DNS? Ha! (Score:2)
Re:DNS? Ha! (Score:2)
Of course they use base 10, just without this wimpy business of splitting addresses into four octets. Real men have no need for such things: http://3277650428 [3277650428]
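That style of URL works because a dotted quad is just a 32-bit number printed in four pieces; collapsing it back is a one-line unpack:

    import socket
    import struct

    def to_decimal(dotted):
        # Interpret the four octets as one big-endian 32-bit integer.
        return struct.unpack(">I", socket.inet_aton(dotted))[0]

    print(to_decimal("195.92.249.252"))   # -> 3277650428, as above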
Re:DNS? Ha! (Score:2, Funny)
Re:DNS? Ha! (Score:2)
DNS? We don't need no steenkin' DNS!
djbdns and opennic (Score:5, Interesting)
Also, OpenNIC [unrated.net] is an ICANN-independent root system... why not just use them instead of ICANN?
Re:djbdns and opennic (Score:2)
Re:djbdns and opennic (Score:2)
BBK
Re:djbdns and opennic (Score:3, Interesting)
DDOS network (Score:2, Informative)
To: bugtraq@securityfocus.com
Subject: Fwd: Possible DDOS network being built through ssh1 crc compromised hosts
I am making this notification to assist in determining whether other folks have been affected by this attack.

An associate's home NAT gateway linux box was hacked by what I am guessing was the ssh1 crc bug (ssh1 was the only exposed service). This machine looks to have been compromised on Nov 2nd at 1:15pm PST; I won't know for certain until I obtain his hard disk later today. The box was running redhat 6.2, reasonably patched except for the fact that he was still running ssh1.

It appears that someone may be building up a network of (potentially) DDOS hosts. I have done some quick research and found no matches for the signatures I have been able to identify so far.

Using the Chkrootkit (www.chkrootkit.org) utilities did not identify a known trojan pack, so if this isn't identified in the wild, I'm already referring to it as the LIMPninja.

It also appears that this particular host was used as a central host for other LIMPninja zombies. I also haven't been able to determine the command structure that the remote bots act upon.

The following is by no means complete, even after a full examination of the drive has been completed, as there was never any file integrity baseline completed (a shame).

The attack appears to be scripted, as all changes happened within a minute, except for the IRC server, which was not installed until 2 days later (and manually). When I found this particular irc net there were over 120 hosts all communicating via IRC. This host was found to be running an unrealircd daemon listening at port 6669.

All other compromised hosts were joining this irc network (ircd.hola.mx holad) on channel #kujikiri with a channel key of 'ninehandscutting'. All bots joined as the nick ninjaXXXX, where XXXX is some (RANDOM?) selection of 4 upper case letters.

Several ports were listening:

3879 term (this port had an ipchains rule blocking all external traffic - placed by the attacker's script)
6669 ircd
9706 term
42121 inetd-spawned in.telnetd

Logs were wiped, and I couldn't find a wiping utility, so I'm thinking a simple rm or unlink was used; I'm hoping to find more details when the disk is in hand. File modifications that were made follow (not necessarily a complete analysis yet):

Clearly trojaned binaries (probably others):

/bin/ps
/bin/netstat
/bin/ls (this ls binary was hiding several things, including directory structures named ...)
/usr/local/bin/sshd1 (the file was just several hundred bytes larger than previously)

Binary file/directory additions:

/usr/bin/bin/u/ - an entire directory structure containing the ircd server source
/usr/bin/share/mysqld (looks like some type of irc spoofing proxy)
/bin/klogd (almost looks like an ftp proxy)
/bin/term (a bindshell of some sort)
/usr/sbin/init.d was added and is exactly the same file size as term

System configuration files that were modified/added:

/etc/hosts.allow - made specific allowances for the ...
/etc/passwd - two new accounts were added with the same password (des hashes - NOT MD5)
/etc/shadow - the added accounts were lpd 1212:1212 and admin 0:0
/etc/inetd.conf - 200+ lines of whitespace added, and then the single telnet entry
/etc/services - modified for telnet to start on port 42121
/etc/resolv.conf - a new nameserver was added...
/etc/psdevtab - haven't examined closely yet
/etc/rc.sysinit - a line was added to start the trojan/backdoor
/etc/rc.local - after much whitespace, the following lines were added at the bottom of the file:

killall -9 rpc.statd
killall -9 gdm
killall -9 gpm
killall -9 lpd
term
klogd
"/usr/bin/share/mysqld"

-----

This should assist other ppl who have had similar attacks...
Re:DDOS network (Score:2)
General Problems (Score:2, Insightful)
Then again, maybe I don't notice the times it DOES work like it's supposed to.
Re:General Problems (Score:2)
Also, many people (including Microsoft) know that they should have more than one authoritative server, but they don't understand the issues of redundancy, so they site them all together. In many cases you will still find places with every nameserver on the same IP network.
A killer app for MULTICAST? (Score:2, Interesting)
The main problem is that all the second-level servers have fixed pointers (usually hard-coded, I believe, in text files) to the root servers.
Assuming some form of robust authentication could be worked out, this could be a killer app for IP multicast: if a root server goes down, then once the replacement comes back up, its IP gets instantly disseminated to all second-level (or maybe even further down) nameservers around the world, rather than through manual notification (or however it works now), so that downtime would be minimal.
Sound viable?
Re:A killer app for MULTICAST? (Score:2)
No, because that isn't how it works.
First level and most second level nameservers don't do recursive queries: you can't ask them about anything not in their zones. You can't ask a second level DNS for the IP of a first level DNS (and it doesn't need to know that).
Almost all DNS servers that do support recursive queries (e.g. the one your ISP lets you use) have a database with the IPs of the root servers. Most DNS servers people run at home have that database too (mine does).
You'd be multicasting those IPs to a whole lot more machines than just the second level servers, which don't need them anyway.
This doesn't mean it won't work: most routers on the net already are connected to a multicast net. It could work that way for DNS servers too.
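A hedged sketch of what a listener for such multicast root-hint updates could look like (the group address and port are invented, and as both posters note, a real system would have to verify signatures before trusting anything it hears):

    import socket
    import struct

    GROUP, PORT = "239.192.0.53", 5353    # illustrative group/port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Subscribe this socket to the multicast group on all interfaces.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        update, sender = sock.recvfrom(1024)
        # An unauthenticated update is worse than none: verify first.
        print(sender, update)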
No problem! (Score:2, Funny)
DNS is inherently flawed... (Score:4, Insightful)
I've given this a little thought over the years. There are a few fundamental issues with the centralized DNS system.
I've tried kicking around a few replacement ideas, like a peer-to-peer exchange system carrying certificates that act a little like resource search records.
The FreeNet project actually gives a good model for how to distribute and search for these 'domain certificates'.
I'd like to see a system that you essentially 'anonymously' submit namespace entries to. Conflicts are resolved based on context. If a dozen people want "money.domain", fine. If you try to browse to it without any context, you have to choose which one you want based on other information in the certificates (full name, location, nickname, etc.), and once you've chosen, that context sticks. URLs would need to be extended to also carry this context, which would probably need to be a cryptographic signature to prevent abuse.
It constantly amazes me that people are willing to pay $50 to 'own' a record in a database. The domain land grab was just stupid... in virtual space, you can always just make more land.
DNS will obviously persist for decades (simply because of the financial and general mindspace investment in 'dots'), but hopefully as only one of a plethora of address resolution systems. Name resolution needs to be a pool, not a tree.
"For as long as the DNS system exists, the Internet will never be free" - Morpheus, while very Drunk
Re:DNS is inherently flawed... (Score:3, Insightful)
Re:DNS is inherently flawed... (Score:2, Interesting)
I suggested the FreeNet system as a good conceptual base because of one P2P property which would be beneficial... the more a file is used, the wider it's replicated.
DNS has a big advantage over other P2P systems in that the 'files' it's trading are very small. As people have been mentioning, it's possible to download the whole DNS tree to a beefy laptop, uncompressed.
Yes, if it's a really uncommon site that no-one has ever been to before, then the initial discovery might take whole minutes. Woo.
DNS is slowly being broken by commercial interests. Everyone knows it. Anything this vital to the internet is worth big money, and if it's centralized, that invariably leads to a power elite, which eventually takes the path of self-interest...
To make a highly emotional analogy, the current DNS system is like an RIAA or MPAA in its infancy. We now have the chance to turn off from that branch of time, that terrible future history, where it's illegal to host nonauthoritative records and Seattle has been nuked by the nameless.
Re:DNS is inherently flawed... (Score:2)
If you for some reason overwrite an actual DNS entry, say by naming eBay www.yahoo.com, then the difference is made by just using the http:// header, or, in the case of this app, dns://www.yahoo.com. This way, users can organize the Internet the way it works for them. If they want to send URLs to someone, it's handled as a separate sort of context, and basically the inserted URL gets the actual DNS name reversed into it, so if I send you an AIM message linking you to Troll Land, it shows up as http://slashdot.org.
Re:DNS is inherently flawed... (Score:2)
Interesting idea though.
Re:DNS is inherently flawed... (Score:2, Informative)
http://nms.lcs.mit.edu/papers/dns-imw2001.html [mit.edu]
-Patrick
Re:DNS is inherently flawed... (Score:2)
The implication is that either the root servers account for all of the speed in the DNS system, and the caching doesn't make too much of a difference, or the implication is that a freenet/gnutella/whatever style distributed system would work just fine. I'm having trouble with the last sentence of the abstract: What's a "dynamic, low-TTL A-record binding"?
Re:DNS is inherently flawed... (Score:3, Insightful)
.info and
Web addresses should be memorable names. "yahoo" is easier to remember than "www.yahoo.org". And with www.*.com names, all people really remember is "yahoo". The rest quickly becomes standard.
For humans, "yahoo", "cnet" and "amazon" are all top-level domains.
Instead of creating new TLDs that are mostly duplicates of existing ones, we should be restricting domain ownership, so that no legal person can own more than one domain. That should prevent people and companies from spamming DNS, so that good names remain available.
Re:DNS is inherently flawed... (Score:2)
I think not. A friend of mine, age 18, runs an e-mail based contest site (20,000+ subscribers), his dad's law office site, and a general web production company. To demand that his contest or his web production sites be relegated to a lengthy URL is plain foolish, and no one will ever agree to it. Ever. Should Microsoft combine msn.com, microsoft.com and hotmail.com into one domain? VA Linux combine Slashdot, valinux.com, SourceForge, NewsForge and AnimeFu? (or whatever, I forget who owns what) Of course not! That would be a hassle to users and admins alike. A just plain silly notion, unrealistic and noxious to everyone involved with the Internet.
Re:DNS is inherently flawed... (Score:2)
Re:DNS is inherently flawed... (Score:2)
Rather than a hierarchical naming system where you end up with names like joes-bakery.com, a particular Joe's Bakery would register with its address, company name, products, services, trademarks, etc. The first time you want to find Joe's Bakery, you need to provide enough information to identify the business. After that, the IP address can be locally cached with the registry information and "joes" could be a shortcut if it is a unique match.
If you want the same shortcuts for a group of people, you should be able to have a secondary cache on a common server.
There does not need to be a single global registry. There could be different registries with different performance characteristics and different accuracy of information. This way anybody can register anybody, and the market and personal preferences can decide rather than ICANN selling monopolies.
I'm with you on that (Score:2)
The biggest hurdle to implementing such a system is the learning curve for the cryptographic APIs for the languages I'd want to use. There is not a wealth of information on such APIs to begin with. The next biggest hurdle, of course, is that if it were developed inside the US, it'd probably be considered an act of terrorism to ship it outside the US.
Re:DNS is inherently flawed... (Score:2)
Access-lists and firewalls (Score:2)
Starting to back it up. (Score:5, Funny)
I am downloading as we speak all the DNS records in the planet into my
I encourage others to do the same.
No need for that... (Score:2)
WARNING! (Score:2)
It has to be said (Score:2, Funny)
Ok, how long before someone at ICANN suggests that the servers should maintain domain to ip mappings in static files. Something like a file called hosts and that could be stored in
Sorry, I'm just in a sarcastic mood given the fact that they actually use bind. Does anyone find that a little scary?
I know it's been brought up here on
What? The root NSes run BIND? (Score:2)
That's news to me. I always thought Network Solutions or whoever runs the other root name servers had their own proprietary and more robust and scalable DNS software.
Search Engine DNS? (Score:2, Interesting)
Not a complete solution, but it would be enough to keep the net going if DNS went down.
ICANN does something useful (Score:3, Interesting)
Is it my imagination, or is ICANN actually working on getting their job done, rather than on horribly complex politics (more complex than needed to solve the problem) or trademark/legal craziness? There's some background at the page of the ICANN DNS Root committee [icann.org].
Now, I'm pretty skeptical that a closed source DNS server from Register.com is going to be a big part of the solution, but even that I don't really mind so much. Having a few alternatives is good if for no other reason than helping to keep BIND from stagnating.
The article didn't talk much about DNSsec [nlnetlabs.nl] (or this older page [toad.com]) which has got to be part of the solution (to try to give the 10 second summary, when a client makes a DNS query and gets a response, it is kind of tricky to ensure that the response is really from the correct server, and DNSsec uses crypto to solve this and other problems).
9 days of DNS hell (Score:2, Interesting)
You know what to do... (Score:2)
$ ping www.slashdot.org
PING slashdot.org (64.28.67.150): 56 data bytes
That's 64.28.67.150!! Start memorizing now!!!
No wonder I couldn't reach slashdot! (Score:2)
Seriously, though, that works well when you've got one box sitting out there, but a lot of services install round robin DNS with multiple servers for load balancing. Try "dig yahoo.com" or lycos or google, for example. Socks3 here at work consists of about 9 servers, only three of which seem to work with any reliability.
Want to solve all the BIND security problems??? (Score:5, Funny)
Change the BIND license to make it much more restrictive, then sit back as the OpenBSD developers build their own simpler, better, more stable, and much more secure, replacement.
SSH.
IPF.
BIND?
Obviously just another chokehold (Score:2, Interesting)
This is the difference between hackers and bureaucrats in a nutshell: centralized control over resilient sophistication. God damn each of those bimbo sellout engineers for their shortsightedness. If I had one ounce of say, one chance at effecting or affecting the logical and liberty-enhancing solution I mention above, I could consider my life more or less complete. (And I'm a card-carrying member of ICANN at-large, dammit, and so much closer to such a goal than the bulk of you all!!) This is surely going to be the doom of the net as we know it.
How long before the governing body (ICANN) of such a rigid and authoritarian system becomes a mere appendage of one of the big players (IBM, AOL, MSFT)? That ICANN is already rotten with corruption is apparent to almost everyone, but what I am asking is how long before even the lip service is discarded? I am aghast at the thought of the monopoly on basic existence that moves like this threaten.
This is a call to arms. Anyone involved with open DNS or P2P should reply to this thread or email me at this address [mailto] to discuss superseding such insidious and freedom-wrecking evil as presented in the parent story.
Thank you.
Security, reliability and the like (Score:4, Interesting)
The actual root servers are only queried for the top-level domains, and while they have rather massive databases, the types of queries they get are limited.
Now, I'm going to assume that given all the money collected for domains, there somewhere exists a nice pot of money available for running root DNS servers. If there isn't then something is seriously wrong with the administration of DNS.
Segmentation of the actual root servers from the world by utilizing a front-end dns cache that would rewrite the actual DNS queries would solve a lot of problems.
First, rewriting queries would allow an amazing amount of sanity checking to be done on the query itself and should prevent exploiting the back-end root servers directly (see the sketch after these points).
Second, as front-end dns caches can be extremely simple and require almost no configuration, the OS installation can be absolutely minimal, excluding even shells. You could go as far as to use an OS that allows you to revoke system privileges such as certain syscalls (fork, exec, open, etc. aren't all that necessary once everything is running) and even make the caching DNS server run as init (though you must have something to bring up the network interfaces.)
Physical segmentation is obviously important as well, so a private backbone strung between all core root servers and a separate interface on each front-end cache to access them would help quite a bit.
Of course then comes the issue of DoS attacks which again should be rather easy to solve considering what we are talking about. Just buy a lot of front-end cache systems. You would think given how important root servers are and how much money domain revenues generate, buying a thousand or even ten thousand machines and sticking them in every major network access point wouldn't be all that big of a deal.
Now you still have to deal with the fact that most DNS servers still have a static list of root server IPs. Thankfully, the simple DNS queries that hit root servers can be done with a single UDP packet request and response (until you have to work up the hierarchy), making them prime targets for one of the many clustering solutions out there, from simple IP-sharing virtual servers to routing protocol tricks.
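To make the first point concrete, the front end amounts to a dumb UDP relay that refuses anything malformed before the back end ever sees it. A rough sketch (addresses invented, checks deliberately minimal, and no timeouts or retries, which a real deployment would need):

    import socket

    BACKEND = ("10.0.0.1", 53)     # private back-end root server

    front = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    front.bind(("0.0.0.0", 53))

    while True:
        query, client = front.recvfrom(512)
        # Sanity checks: full header present, QR bit clear (it must
        # be a query, not a response), exactly one question.
        if (len(query) < 12 or query[2] & 0x80
                or query[4:6] != b"\x00\x01"):
            continue               # silently drop anything malformed
        back = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        back.sendto(query, BACKEND)
        reply, _ = back.recvfrom(512)
        front.sendto(reply, client)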
Of course, I may be oversimplifying the problem.
Re:Security, reliability and the like (Score:2, Insightful)
I'm not sure that most people posting to this and other articles understand why dns is the way it is.
The whole business about the "security" flaws is twofold:
1. people don't patch their servers because they don't stay on top of things.
2. most dns servers are not locked down properly against zone transfers (especially those of you using AT&T's, Worldcom's and other large telcos' dns), which allows attackers to find out what you've got.
DNS is a distributed database with a small lookup latency - this is very different from Oracle, LDAP and other structures. DNS is redundant and is designed to tolerate broken branches (this goes back to America's cold war days - even though bind is not that old!). The network, the data, and the redundancy ARE segmented - have you ever noticed that the root servers never came down, even during a massive virus? Most dns outages come from your local ISP's caching dns, which could be running an old version of bind (a single-threaded mess).
Don't rely on the security of DNS (Score:2)
In addition, DNS database entries are occasionally wrong (although the servers are operating correctly), due to maintenance errors or social engineering attacks. Security on the root servers, or even DNSSEC, does not address this problem at all.
So the best solution is not to base any authentication on DNS names at all. (Then there's hardly any need for DNSSEC either.) Of course, quite a few Internet users rely heavily on the non-existent DNS security. They fetch mail using unencrypted POP3, use HTTP-based mail solutions, and so on, and if someone is able to redirect their connections as a consequence of DNS spoofing, he can obtain their passwords pretty easily. But reasonably secure solutions (e.g. TLS and server certificates) already exist.
/etc/hosts!!! (Score:4, Funny)
Then, subscribe to a mailing list that sends daily changes, so that you can keep your
Ehm... yeah. You first have to secure mail to do this.
Re:/etc/hosts!!! (Score:2)
Well, it worked for FidoNet. The FidoNet nodelist was essentially a huge /etc/hosts file with compressed diffs sent out once a week. Fortunately FidoNet started to shrink (due to the rise of the TCP/IP internet) just as the nodelist was starting to get really unmanageable.
Re:/etc/hosts!!! (Score:2)
You may joke, but on a small scale, it works. I have all of our production servers set up to use local hosts files. They don't need to know about anything outside of our production network, which is small enough and static enough that we simply don't need DNS, so we don't use it. There is no DNS traffic on our production network, and we're not vulnerable to DNS security flaws. On the rare occasions when we need to make changes, a simple script copies the new hosts file to each server with scp.
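For scale, the entire "push" machinery in a setup like that can be a few lines (hostnames invented; the parent's actual script may well differ):

    import subprocess

    SERVERS = ["web1", "web2", "db1"]

    # Copy the master hosts file to every production box over ssh.
    for host in SERVERS:
        subprocess.run(
            ["scp", "/etc/hosts.master", host + ":/etc/hosts"],
            check=True,
        )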
Re:/etc/hosts!!! (Score:3, Funny)
Actually, using secure mail would get tiresome, don't you think? What is needed is a mail user agent that will simply take the incoming mail, run it as root, and modify/add whatever files are necessary without admin or user intervention. Now THAT would be a time-saver, huh?
pathetic article (Score:2)
He went on to argue that "most security holes are due to buggy software. All the cryptography in the world is not going to change the buggy software problem."
In my experience, most security holes are caused by careless or ignorant users. Even if you take all the bugs out of all the software, there are still going to be security holes. It's like the locked doors at work: secure entrances are pointless if you hold the door open for the guy behind you (and you don't know the guy behind you).
djbdns isn't really the answer (Score:2, Interesting)
The biggest problems with DNS on the internet have NOTHING to do with the software used. The protocol itself is quite insecure - and what's worse is that this isn't news!
One thing that certainly needs to change is this silly concept of recursive resolvers; they change the responses, and thus it's next to impossible to determine which is the "real" resolver.
Thanks to sequence prediction, and because DNS servers/clients don't have any "other" protection, it's quite trivial to smash or alter someone's dns tables (during a zone transfer), or redirect users someplace else (when doing recursion).
What we need is a cryptographic method of "signing" requests and responses. Root nameservers should maintain keys in addition to NS RRs, and what bind calls "root hints" should contain the keys of the root nameservers. This way, we can digitally sign responses so that their authenticity can be verified. Moreover, if packet space is limited (even though most queries should have a hundred or three bytes free), we could always just store a hash of the signature. But that's getting too far into implementation.
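The sign-and-verify idea is easy to demonstrate with a modern library (this is NOT the actual DNSSEC wire format, which carries signatures in dedicated resource records; it's just an illustration of the principle, with an invented record as the payload):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    key = Ed25519PrivateKey.generate()
    pub = key.public_key()       # what the "root hints" would carry

    response = b"www.example.com. 3600 IN A 192.0.2.10"
    signature = key.sign(response)

    # Raises InvalidSignature if the response was tampered with.
    pub.verify(signature, response)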
The basic drift is that we need something BETTER than dns... not just new software, but a new design...
And besides, by implementing crypto into the name services, we'll finally be able to keep the French off the internet.
(For those of you lacking any kind of crypto-political background: the French aren't allowed ANY cryptography... and you thought US export control was bad!)
Oh well (Score:2)
I like that they mentioned that diversity results in more security (at the very end of the article). This is one of the major problems with Microsoft products: they only make two operating systems, so when a bug is found in one or both of them, the whole world goes down from some email script virus that a child can write. Among the Linux and BSD alternatives, there are differences between the distributions and even between installations, resulting in big headaches for would-be virus writers. (Sure, this also results in headaches for developers, but who said that making software is easy? Yeah, developing is allegedly "easy" under Microsoft platforms, potentially saving your business big dollars in R&D, but that money gets thrown away on the inevitable repairs necessary after some k1ddy in Congo or something manages to deploy a virus.)
So, like the article says, diversity improves security. In my opinion, each site should choose the best system for the job and configure it to do that job well. If you end up with 10 different platforms and operating systems, so be it.
Oh well.
Oh yeah, so what I was trying to say is that not only the operating system, but also the software running on it, should be diverse and come from as many different sources as possible. I would even say that if you run several machines that perform the same job, perform that job with different software on the several machines. This way, when one gets cracked, the others continue to work (at least for a while).
Re:Target for terrorism (Score:2, Informative)
The Locations are:
Moffett Field CA
Woodside CA
Marina Del Rey CA (2)
Herndon VA (3)
College Park MD
Vienna VA
Aberdeen MD
London
Stockholm
Keio
Re:routing != DNS (Score:3, Informative)
Untrue; DNS is, like www.whitehouse.gov, under permanent attack. The article is based on a number of assumptions that are not true of all the root servers.
Steve Bellovin is somewhat inaccurate in his statement about BIND. While it is true that most of the root servers run code that originated in BIND, most of the heavy lifting is done by a few servers that sit on the fattest pipes and run a stripped code base. The code paths of that code base have at this point been as near to completely tested as anything gets.
The real problem is that most of the root servers are still maintained by the ad-hoc volunteer network of the 1980s Internet. As a result many of the 'root servers' are hosted on drinking straws rather than pipes.
There are 13 servers, however, and all of them would have to go down to take out the Internet. Even then the effect would take some time to be felt. The root servers only manage the top-level domains. These tend not to change very often, so the TTL on the root records can be made very long without causing operational difficulty.
A much more serious problem would be if someone brought up a fake root server. DNS does not provide authentication.
Rather than obsess about the code base problem, ICANN should be either deploying BIND or telling the IETF the characteristics of the security protocol it really needs.
Re:routing != DNS (Score:2)
Bullshit. See RFC 2870 [isi.edu].
DNS does not provide authentication. [...] ICANN should be [...] telling the IETF the characteristics of the security protocol it really needs.
They are. It's called DNSSEC, and there's tremendous work and buzz going on about it throughout the IETF.
Re:routing != DNS (Score:2)
Re:routing != DNS (Score:2)
Look at the members of the ICANN board, look at the membership of the IESG and IAB over the past 10 years. Oh and look up Steve Bellovin's research interests.
I had meant to type 'DNSSEC' in the original in place of BIND. DNSSEC may not be the answer; if it were the answer, maybe it would have taken less than ten years to deploy after the RFCs were written.
The main problem is that DNSSEC turned into a design for a general-purpose PKI. As a result lots of features were added, such as the NXT record, that make deployment sticky. Also, the nature of DNS lookups has changed drastically. TTLs are often measured in minutes rather than days as they once were. This means that the DNSSEC method of signing each RR individually carries a very high overhead. The servers can do the signing offline; not so hot for the clients, though, which can have ten or more signatures to verify for a single lookup.
Re:routing != DNS (Score:2)
That is not what I said. I said that ICANN should specify the requirements they have for a DNS security infrastructure to secure the DNS and tell the IESG what they are.
The IETF has designed some good security protocols. It has also produced a lot of bad ones. They are not bad because they are insecure; they are bad because they don't meet the real requirements, PEM and MOSS being prime examples. IPSEC and DNSSEC both have clear usability problems that mean they are not being used in practice for the purpose they were originally advertised as solving.
Reminds me of this "classic" prose... (Score:2, Interesting)
So who could do it? The IETF and the ACM come to mind. There are probably a few others.
Note that you don't have to switch all at once, you can still fall back to legacy ICANN domains if the new domain system doesn't find a match.
My "ultimate" domain name scheme would allow anything as a
Re:DNS Solution (Score:2)