The Internet
Securing DNS From The Roots Up

jeffy124 writes: "This article at ComputerWorld tells the story of how ICANN would like to replace the root DNS systems with secured servers. Lars-Johan Liman, one of the root operators, spoke about the concept at ICANN's annual meeting today. He discussed how the world's current redundant DNS system is vulnerable to DDoS attacks and yet-to-be-discovered root-level holes in BIND that could ultimately undermine the entire Internet by taking away the name-to-IP mappings that just about everyone relies on."
  • News flash! (Score:5, Funny)

    by Teancom ( 13486 ) <david@gnuc[ ]ulting.com ['ons' in gap]> on Wednesday November 14, 2001 @01:36AM (#2561779) Homepage
    BIND may be vulnerable to security exploits! Sendmail may *not* be as secure as qmail! Walking through Harlem with $100 bills hanging out of your pockets isn't smart! Sky is blue!

    Some people just never get the news....
  • by Tsar ( 536185 ) on Wednesday November 14, 2001 @01:37AM (#2561786) Homepage Journal
    The Internet is depending on unsecured servers for DNS? Now how am I going to sleep at night? Next you'll be telling me the earth isn't sitting snugly atop a giant turtle! Is nothing certain any more?
  • by dave-fu ( 86011 ) on Wednesday November 14, 2001 @01:44AM (#2561800) Homepage Journal
    ...then malicious intruders will just go after the core routers, saturate lines, do things of that nature. Not that locking down DNS is a bad thing, but you can't defend everything all the time.
  • by kc8apf ( 89233 ) <kc8apf@[ ]apf.net ['kc8' in gap]> on Wednesday November 14, 2001 @01:44AM (#2561801) Homepage
    I have yet to find a good reason why everyone uses BIND. I've been working on my own DNS server just for kicks. The protocol itself is trivial. It can be handled easily, and yet if you look at BIND's source code you can't tell what is going on at all. So, why does everyone continue to use it? Or better question, why hasn't someone written a better alternative?
    • Some people are running djbdns. It may be interesting for you to look at if you haven't already. There's also one from Microsoft.

      There probably aren't a lot of alternatives out there, and you don't hear about the ones that do exist because it's not a very sexy application, like, say, Napster.
    • by fanatic ( 86657 ) on Wednesday November 14, 2001 @01:56AM (#2561834)
      Already available is djbdns [cr.yp.to], written by D. J. Bernstein with security as a design goal. In fact, he offers rewards [cr.yp.to] to anyone who can find a vulnerability.
      • by DaSyonic ( 238637 ) <DaSyonic@ya[ ].com ['hoo' in gap]> on Wednesday November 14, 2001 @02:26AM (#2561933) Homepage
        djbdns, and other stuff written by him (including qmail), is all under a restrictive license. He essentially prevents any vendor/distribution from releasing it, since any vendor would need to make minor changes, but a vendor can't even change the pathnames of certain files... that's not acceptable.

        Read his license [cr.yp.to] and see for yourself.

        • It seems like it's think-inside-the-box day (why is it I've said this for the first two times in my life today?)...

          "You can distribute your patches for other people to use."
          "You may distribute a precompiled package if installing your package produces exactly the same files, in exactly the same locations, that a user would obtain by installing one of my packages listed above"
          "You may distribute exact copies of any of the following packages"

          It seems DJB is A-OK with the raw source or raw binaries being on the CD. He's also OK with patches and patch distribution.

          So here's how to "get around" his license if you are a distributor. (This isn't all that bad except for distributors, who generally have better things to deal with than "attitude" -- which it appears DJB has against software distributors, and which is why his software is doomed to fail in the market, even if it won't fail on the computer.)

          - djbdns
          [ ] Install virgin djbdns binary
          [ ] Install virgin djbdns source
          - patches
          [ ] Install binary patch for djbdns for correct operation with your platform
          [ ] Install source patch for djbdns for correct operation with your platform

          Problem solved. He's happy, you're happy, I'm happy. We're all a happy family. Well, maybe DJB isn't happy, but then again maybe he should get back to coding more good software rather than writing software licenses (which it appears he isn't particularly good at... I'm no legal expert and I saw that gaping hole a mile away).

          Can't we just all get along? :-) Other than his dislike for distributors (does he roll his own?) DJB seems like a cool guy.

          [IANAL. If you wanted legal advice, you looked at the wrong post.]
          • Can't we just all get along? :-) Other than his dislike for distributors (does he roll his own?) DJB seems like a cool guy.

            I thought so too once, but then I watched him bang his head against the wall on one of the OpenBSD mailing lists.

            Repeatedly.

            Someone who hits their head that often surely has a few problems...
        • by xjosh ( 181149 )
          djbdns (and qmail, etc) are NOT under a restrictive license as you like to argue. In fact, they are under no license. DJB simply doesn't believe that software licenses are valid, so he doesn't grant one. His "license" page that you refer to simply reiterates his right of first distribution, as well as waiving some of his right to first distribution under certain circumstances. Read this [cr.yp.to] for more background on why DJB doesn't issue licenses.

          The only appearance of the word license on that page is in a quote from a RedHat employee, not DJB. It would seem impossible to me to grant a license without specifically stating that you are granting a license.

          The inability to change pathnames is a bunch of hooey. I've seen packages included with a major distribution that could have been modified to use paths that make more sense, but have been packaged with the author's defaults instead.

          xjosh
      • by dido ( 9125 ) <dido@imper[ ].ph ['ium' in gap]> on Wednesday November 14, 2001 @02:52AM (#2561986)

        You can't do zone transfers using djbdns for one thing. DJB thinks that zone transfers are evil, and has his own method for doing the task (rsync over ssh I believe), but whether they're evil or not is beside the point. Like it or not, zone transfers are a part of the core DNS protocols and any proper successor to BIND must implement them all. Starting a standards war with the IETF is not something I want to have along with a name server I deploy. Let Bernstein write an RFC for publication describing his idiosyncratic methods and get the IETF to ratify it as a core standard if he wants, if he truly thinks his way is the better way. The way he operates reminds me more of the way Microsoft handles standards than anything else.

        Besides, djbdns is also deficient in a far more important way (for me and to a lot of people here on Slashdot anyhow, I hope): it's actually proprietary software with a limited license for gratis use. It's not Free Software or even Open Source, not by any reasonable definition of the term. There is no license along with his programs, and absent a license you have NO RIGHT to share, study, or change Bernstein's code!

        • check here:
          http://cr.yp.to/djbdns/axfrdns.html

          This supports outgoing transfers. Incoming transfers are a possible security risk (NO authentication happens in most cases, other than IP-address checking, IIRC), making this a prudent decision, IETF or no.

          BBK
        • You can't do zone transfers using djbdns for one thing. DJB thinks that zone transfers are evil, and has his own method for doing the task (rsync over ssh I believe)...

          The point he's trying to make there is that there is already an easy, secure way to transfer data over the Internet. Rather than inventing a new protocol (which may have bugs), why not just use existing tools that are probably already on your system. (I use rsync and ssh all the time for other tasks.)

          The "DJB Way" is to make small modular programs, which is very much in the original Unix tradition. The power of this comes in the ability to quickly create new applications out of existing components.

        • Not a fair criticism, since the standard was written to BIND's implementation, not the other way around. BIND's zone file format is in the RFCs, as part of the standard. Any DNS server that uses a SQL backend is non-compliant on this count alone.

          But even BIND deviates from the standard written to its implementation in many places. See http://cr.yp.to/djbdns/notes.html [cr.yp.to] for some examples.

          There is no license along with his programs, and absent a license you have NO RIGHT to share, study, or change Bernstein's code!

          You didn't supply a license with your post, so may I presume your attorneys will be contacting me for quoting part of your silly diatribe?
      • Already available is djbdns [cr.yp.to], written by D. J. Bernstein with security as a design goal.

        Except that djbdns is in no way a drop-in replacement for BIND.
    • I'd assumed that by now, major DNS servers were implemented as front ends to some real database, just to simplify updating.
    • Post your code. Let people rip it apart. BIND has gotten where it is today because it is still the best open-source solution for the job.

      Yes, there are independently coded closed-source solutions which perform better and presumably are better all around (Nominum [nominum.com] has written a program that they use as the basis for their Global Name Service, which does not contain any code from BIND).

      However, these are closed-source implementations, and the folks who operate the various root servers are doing so on a volunteer basis, and are not interested in just handing everything over to some company who operates a "black box" -- regardless of who that company is.

      Indeed, it's the root server operators that have really spared us from the worst of the damage that ICANN has tried to inflict upon us. For the stupidest things, the root server operators simply said "Not only NO, but HELL NO!", and ICANN was forced to back down.

    • I have yet to find a good reason why everyone uses BIND. I've been working on my own DNS server just for kicks. The protocol itself is trivial. It can be handled easily, and yet if you look at BIND's source code you can't tell what is going on at all.

      Well, if you've been working on your own, maybe you could release it as open source?

      So, why does everyone continue to use it? Or better question, why hasn't someone written a better alternative?

      Sounds like you're considering writing one. What's stopping you? :-)

    • why hasn't someone written a better alternative?

      Lots of people have:

      DJB DNS [cr.yp.to]

      Custom DNS [sourceforge.net]

      MaraDNS [maradns.org]

      Posadis [sourceforge.net] (though I've no experience with it yet)

      The list goes on and on... hit Freshmeat.net [freshmeat.net] for some possibilities.
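The grandparent's claim that the DNS wire format itself is simple is easy to demonstrate. Here's a minimal, illustrative sketch in Python (not production resolver code; the query ID and hostname are arbitrary) that builds a query packet and unpacks the fixed 12-byte header defined by RFC 1035:

```python
import struct

def build_query(name, qid=0x1234):
    """Build a minimal DNS query for an A record (RFC 1035 layout)."""
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, rest zero.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # QTYPE 1 = A, QCLASS 1 = IN.
    return header + qname + struct.pack(">HH", 1, 1)

def parse_header(packet):
    """Unpack the fixed 12-byte header of a DNS packet."""
    qid, flags, qd, an, ns, ar = struct.unpack(">HHHHHH", packet[:12])
    return {"id": qid, "flags": flags, "qdcount": qd,
            "ancount": an, "nscount": ns, "arcount": ar}
```

Sending such a packet to UDP port 53 of a resolver and running parse_header on the reply is most of a stub resolver already; name compression in answers is the only genuinely fiddly part.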

  • by po_boy ( 69692 ) on Wednesday November 14, 2001 @01:44AM (#2561802) Homepage
    from the article:

    Vendors at the conference offered their own security solutions. Register.com Inc. in New York, for example, has created its own proprietary DNS software. The company continues to deploy BIND as well as its own software because diversity improves security, said Jordyn Buchanan, who worked on the team that developed the system.

    Is there anyone here knowledgeable about this who can comment on a few things?
    • Can I get the source to that in any way?
    • Does it use a SQL database backend?
    • Any chance of licensing it out even without the source?
    • Does it support dynamic updates?
    • Anything else cool about it?
    • Are you hiring?

    I'd love to see (more closely) another implementation of the DNS system other than the 3 or so commonly found.

    • I can't comment on register.com but at
      EveryDNS.Net [everydns.net] we found BIND to be too much of a risk to run on our servers. In the long run, DJBDNS [cr.yp.to] has proven not only to be secure but also far easier to set up and administer, as well as to write parsers for.
      Just my $.02,
      davidu
  • Homogeniety is bad (Score:2, Insightful)

    by Anonymous Coward
    Does it strike anyone else as a bad thing that all of the root nameservers, and for that matter almost all important nameservers, run BIND? Ergo, a serious security bug can be used to take out all of the root nameservers.

    We need another DNS server that has the (relative) standard compliance and scalability so that we could have some other server software running on some of the root servers. Unfortunately, all of the alternatives I know of don't scale to that volume of transactions, aren't nearly as proven as BIND, and many of them have standards compliance issues worse than BIND.
    • by Jeffv323 ( 317436 )
      Traditionally, domain hijackings happen when attackers block access to a legitimate DNS server and replace it with their own. This incident was different because it was a data attack rather than a hardware attack: by altering data in key DNS tables, the attackers redirected users just as successfully as they could have by standing up a rogue DNS server.
    • I understand that at least one of the root servers is running an alternative DNS implementation produced as commercial licensed software by Nominum (who also produced and maintain the Bind 9.x implementation under contract to the ISC).
  • DNS? Ha! (Score:5, Funny)

    by tang ( 179356 ) on Wednesday November 14, 2001 @01:46AM (#2561808)
    Real men surf the net using ip addresses. (And NOT in base 10)
  • djbdns and opennic (Score:5, Interesting)

    by SuperDuG ( 134989 ) <<be> <at> <eclec.tk>> on Wednesday November 14, 2001 @01:46AM (#2561809) Homepage Journal
    djbdns [cr.yp.to] states "I offer $500 to the first person to publicly report a verifiable security hole in the latest version of djbdns." ... and no one has claimed the $500 yet.

    Also, OpenNIC [unrated.net] is an ICANN-independent root system ... why not just use them instead of ICANN?

  • DDOS network (Score:2, Informative)

    by metlin ( 258108 )
    I know this is slightly offtopic, but this was on the bugtraq mailing list, and I thought people here might find it interesting:

    To: bugtraq@securityfocus.com
    Subject: Fwd: Possible DDOS network being built through ssh1 crc compromised hosts

    I am making this notification to assist in determining whether other
    folks have been affected by this attack.

    An associate's home NAT gateway linux box was hacked by what I am
    guessing was the ssh1 crc bug (ssh1 was the only exposed service).
    This
    machine looks to have been compromised on Nov 2nd at 1:15pm PST, I
    won't know for certain until I obtain his hard disk later today, and
    provided that /var logging is recoverable. This machine was running
    redhat 6.2, reasonably patched except for the fact that he was still
    running ssh1.

    It appears that someone may be building up a network of (potentially)
    DDOS hosts. I have done some quick research and found no matches for
    the signatures I have been able to identify so far.

    Using the Chkrootkit (www.chkrootkit.org) utilities did not identify
    a known trojan pack, so if this isn't identified in the wild, I'm
    already referring to it as the LIMPninja.

    It also appears that this particular host was used as a central host
    for other LIMPninja zombies. Also, I haven't been able to determine
    the command structure that the remote bots act upon.

    The following is by no means complete, even after a full examination
    of the drive has been completed, as there was never any file
    integrity base line completed(a shame).

    The attack appears to be scripted as all changes happened within a
    minute, except for the IRC server which was not installed until 2
    days later (and manually). When I found this particular irc net
    there were over 120 hosts all communicating via IRC. This host was
    found to be running an unrealircd daemon from /usr/bin/bin/u/src/ircd
    listening at port 6669.

    All other compromised hosts were joining this irc network
    (ircd.hola.mx holad) on channel #kujikiri with a channel key of
    'ninehandscutting'. All bots joined as the nick ninjaXXXX where XXXX
    is some RANDOM? selection of 4 upper case letters.

    Several ports were listening
    3879 term (this port had an ipchains rule blocking all external
    traffic - placed by the attacker's script)
    6669 ircd
    9706 term
    42121 inetd spawned in.telnetd

    Logs were wiped, and couldn't find a wiping utility so I'm thinking a
    simple rm or unlink was used, so I'm hoping to find more details when
    the disk is in hand. File modifications that were made follow:(not
    necessarily a complete analysis yet)

    clearly Trojaned binaries (probably others)
    /bin/ps
    /bin/netstat
    /bin/ls (this ls binary was hiding several things, directory
    structures named /u/, mysqld klogd ...)
    /usr/local/bin/sshd1 (the file was just several hundred bytes larger
    than previously)

    Binary file/directory additions
    /usr/bin/bin/u/ An entire directory structure containing the ircd
    server source
    /usr/bin/share/mysqld (looks like some type of irc spoofing proxy)
    /bin/klogd (almost looks like an ftp proxy)
    /bin/term (A bindshell of some sort)
    /usr/sbin/init.d was added and is exactly the same file size as term

    System configuration files that were modified/added
    /etc/hosts.allow made specific allowances for the .dk domain, as well
    as .cais.net .cais.com
    /etc/passwd two new accounts were added with the same password (des
    hashes -NOT MD5)
    /etc/shadow The added accounts were lpd 1212:1212, and admin 0:0
    /etc/inetd.conf 200+ lines of whitespace added, and then the single
    telnet entry
    /etc/services was modified for telnet to start on port 42121
    /etc/resolv.conf a new nameserver was added...
    /etc/psdevtab haven't examined closely yet
    /etc/rc.sysinit a line was added to start the /usr/sbin/init.d
    trojan/backdoor
    /etc/rc.local after much whitespace was added.... following lines at
    the bottom of the rc.local file

    killall -9 rpc.statd
    killall -9 gdm
    killall -9 gpm
    killall -9 lpd
    term
    klogd
    "/usr/bin/share/mysqld"
    /sbin/ipchains -I input -p tcp -d 0/0 3879 -j DENY

    -----
    This should assist other ppl who have had similar attacks...
    • Actually, the packet filters on my linux box have reported several instances of an apparent DNS DDoS - many packets addressed to port 53 from several different IP addresses over a relatively short period of time. The first instance I have of this is Oct 15, but that's as far back as my logs go. Seems a little odd, though, since I don't run any DNS and haven't for months.
  • General Problems (Score:2, Insightful)

    by OzJimbob ( 129746 )
    There seem to be some pretty big problems in how the whole DNS system works in the first place; for a system with a fairly high degree of built-in redundancy, I've often found websites where ONE of their DNS servers has gone down and I can't access the site. The other DNS server somehow isn't queried, other caching DNS servers along the chain aren't queried, and the lookup fails. The IP address I'm looking for is, in theory, sitting in a thousand caches all over the net, but it's not fetched? The loss of Microsoft's DNS a few months back is a good (although not particularly worrying) example.

    Then again, maybe I don't notice the times it DOES work like it's supposed to.

  • It's not difficult to get a nameserver back up and running, and the volume of data maintained by the root servers is nothing in quantity compared to, for instance, .com.

    The main problem is that all the second-level servers have fixed pointers (usually hard-coded, I believe, in text files) to the root servers.

    Assuming some form of robust authentication could be worked out, this could be a killer app for IP multicast: if a root server goes down, then once the replacement comes back up, its IP gets instantly disseminated to all second-level (or maybe even further down) nameservers around the world, rather than by manual notification (or however it works now), so that downtime would be minimal.

    Sound viable?
    • No, because that isn't how it works.

      First-level and most second-level nameservers don't do recursive queries: you can't ask them about anything not in their zones. You can't ask a second-level DNS server for the IP of a first-level DNS server (and it doesn't need to know that).

      Almost all DNS servers that do support recursive queries (e.g. the one your ISP lets you use) have a database with the IPs of the root servers. Most DNS servers people run at home have that database (mine does).

      You'd be multicasting those IPs to a whole lot more machines than just the second level servers, which don't need them anyway.

      This doesn't mean it won't work: most routers on the net already are connected to a multicast net. It could work that way for DNS servers too.
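To make the parent's point concrete: the only addresses a recursive resolver needs compiled in are the root servers' (e.g. a.root-servers.net at 198.41.0.4); everything else is discovered by following referrals down the tree. A small illustrative sketch of the sequence of zones such a resolver walks through:

```python
# The resolver's built-in "database": just root server addresses,
# like BIND's named.root hints file. (One real root entry shown.)
ROOT_HINTS = {"a.root-servers.net": "198.41.0.4"}

def referral_chain(name):
    """Zones an iterative resolver asks about, root first.

    e.g. "www.slashdot.org" -> ".", "org.", "slashdot.org.", ...
    Each step yields a referral to the nameservers for the next zone.
    """
    labels = name.rstrip(".").split(".")
    chain = ["."]
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]) + ".")
    return chain
```

This is why second-level servers never need the root IPs pushed to them: only the machines doing recursive resolution keep the hints, and they learn everything below the root by referral.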

  • As long as no one opens their mouth about possible security leaks, we'll be safe.
  • by Jeremy Lee ( 9313 ) on Wednesday November 14, 2001 @02:16AM (#2561886) Homepage
    Don't get me wrong. It's a great system, it's worked for a very long time, it does its basic job admirably. My main issues with it are its centralization and increasing politicization.

    I've given this a little thought over the years. There's a few fundamental issues with the centralized DNS system.

    I've tried kicking around a few replacement ideas, like a peer-to-peer exchange system carrying certificates that act a little like resource search records.

    The FreeNet project actually gives a good model for how to distribute and search for these 'domain certificates'.

    I'd like to see a system that you essentially 'anonymously' submit namespace entries to. Conflicts are resolved based on context. If a dozen people want "money.domain", fine. If you try to browse to it without any context, you have to choose which one you want based on other information in the certificates (full name, location, nickname, etc.), and once you've chosen, that context sticks. URLs would need to be extended to also carry this context, which would probably need to be a cryptographic signature to prevent abuse.

    It constantly amazes me that people are willing to pay $50 to 'own' a record in a database. The domain land grab was just stupid... in virtual space, you can always just make more land. As .info proves.

    DNS will obviously persist for decades, (simply because of the financial and general mindspace investment in 'dots') but hopefully as only one of a plethora of address resolution systems. Name resolution needs to be a pool, not a tree.

    "For as long as the DNS system exists, the Internet will never be free" - Morpheus, while very Drunk
    • I'm sorry, I want DNS to work instantly. FreeNet gives a great model for how to solve this problem if it were OK for DNS to take between 3 seconds and 3 minutes to resolve. 3 seconds is too long. Centralization is necessary. Redundancy is good, but it should still be centralized. If anyone can tell me how a decentralized DNS system would allow fast lookups of uncommon names, then I'll change my mind.
      • DNS doesn't work instantly. Never has, never will. And with the profusion of names, it will just get slower. It's only the local caching which seems to make DNS fast. Throwing away that caching would just be stupid, so the only difference between any two schemes comes down to the time for "first discovery".

        I suggested the FreeNet system as a good conceptual base because of one P2P property which would be beneficial... the more a file is used, the more widely it's replicated.

        DNS has a big advantage over other P2P systems that the 'files' it's trading are very small. As people have been mentioning, it's possible to download the whole DNS tree to a beefy laptop, uncompressed.

        Yes, if it's a really uncommon site that no-one has ever been to before, then the initial discovery might take whole minutes. Woo.

        DNS is slowly being broken by commercial interests. Everyone knows it. Anything this vital to the Internet is worth big money, and if it's centralized, that invariably leads to a power elite, which eventually takes the path of self-interest...

        To make a highly emotional analogy, the current DNS system is like an RIAA or MPAA in its infancy. We now have the chance to turn off from that branch of time, that terrible future history, where it's illegal to host nonauthoritative records and Seattle has been nuked by the nameless. :^)
        • This is actually an idea I've been tossing around in my head for some time and might reserve to work on for a senior project or something. Instead of using P2P to distribute name changes or something, why don't we just have user-specific locally-cached DNS tables? You can continue to use DNS for new systems and places you haven't been to, but basically, it works by indexing metadata searches and allows you to assign your own names to things. For instance, let's say I already knew Slashdot's address. I can go to http://slashdot.org, and then assign it the name "Troll Land". From now on, I can just type in "Troll Land" and it'll open up Slashdot instead. Any URLs that direct to Slashdot.org will have Troll Land replaced on the fly (On the client end - Server-side code might be a bit difficult to work about, but then they're pushing the content to you anyway, so what's it to the client?).

          If you for some reason overwrite an actual DNS entry, by calling eBay www.yahoo.com, then the difference is made by just using the http:// header, or, in the case of this app, dns://www.yahoo.com. This way, users can organize the Internet the way it works for them. If they want to send URLs to someone, it's handled as a separate sort of context, and basically the inserted URL gets the actual DNS name reversed into it, so if I send you an AIM message linking you to Troll Land, it shows up as http://slashdot.org.
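The parent's user-local naming layer could be prototyped as a thin shim in front of ordinary DNS. A sketch (the alias table and fallback function are hypothetical, not an existing API):

```python
import socket

# Hypothetical per-user alias table: names the user assigned locally.
ALIASES = {"Troll Land": "slashdot.org"}

def personal_resolve(name, dns_lookup=socket.gethostbyname):
    """Check the user's alias table first; fall through to real DNS
    for anything the user hasn't renamed."""
    target = ALIASES.get(name, name)
    return dns_lookup(target)
```

Here personal_resolve("Troll Land") would look up slashdot.org's address, while unrecognized names go straight to DNS; rewriting shared URLs back to the real name is the same table applied in reverse.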
        • DNS doesn't work instantly. Never has, never will. And with the profusion of names, it will just get slower. It's only the local caching which seems to make DNS fast.
          We thought that. But it might not be as true as you think.

          http://nms.lcs.mit.edu/papers/dns-imw2001.html [mit.edu]

          -Patrick

          • I'm *trying* to figure out what that paper means, and I think I'm getting it...

            The implication is that either the root servers account for all of the speed in the DNS system, and the caching doesn't make too much of a difference, or the implication is that a freenet/gnutella/whatever style distributed system would work just fine. I'm having trouble with the last sentence of the abstract:
            These results suggest that the performance of DNS is not as dependent on aggressive caching as is commonly believed, and that the widespread use of dynamic, low-TTL A-record bindings should not degrade DNS performance.
            What's a "dynamic, low-TTL A-record binding"?
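A "dynamic, low-TTL A-record binding" is a name-to-address record whose time-to-live is deliberately short, so caches discard it quickly and the operator can repoint the name on the fly (load balancing, failover). Resolver caches honor the TTL roughly like this toy sketch (illustrative only, with an injectable clock):

```python
import time

class TTLCache:
    """Cache that honors per-record TTLs, the way a resolver's cache does."""
    def __init__(self, clock=time.time):
        self.clock = clock
        self.store = {}            # name -> (address, expiry time)

    def put(self, name, address, ttl):
        self.store[name] = (address, self.clock() + ttl)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if self.clock() >= expiry:  # TTL elapsed: must re-query upstream
            del self.store[name]
            return None
        return address
```

The lower the TTL, the more often get() misses and a fresh query goes out, which is why the paper's finding that low TTLs don't hurt much is surprising.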
    • .Info proves nothing. Our company just registered some .biz and .info domains, and I've advised against using them for anything important.

      .info and .biz basically turn into blackmail, of the form: "What if someone typed in your domain name, and they didn't get your site? It could happen to you, if they type in acme.biz and someone else has registered it. So pay us money and it won't happen." The domains themselves are fairly worthless, because you get funny looks if you use a .info or .biz tld.

      Web addresses should be memorable names. "yahoo" is easier to remember than "www.yahoo.com". And with www.*.com names, all people really remember is "yahoo". The rest quickly becomes standard.

      For humans, "yahoo", "cnet" and "amazon" are all top-level domains. .info just makes things harder to remember. And a lot of the .info names are the same as their .com equivalents.

      Instead of creating new tlds that are mostly duplicates of existing tlds, we should be restricting domain ownership, so no legal person can own more than one domain. That should prevent people and companies from spamming DNS, so that good names remain available.
      • Instead of creating new tlds that are mostly duplicates of existing tlds, we should be restricting domain ownership, so no legal person can own more than one domain. That should prevent people and companies from spamming DNS, so that good names remain available.

        I think not. A friend of mine, age 18, runs an e-mail-based contest site (20,000+ subscribers), his dad's law office site, and a general web production company. To demand that his contest or his web production sites be relegated to a lengthy URL is plain foolish, and no one will ever agree to it. Ever. Should Microsoft combine msn.com, microsoft.com and hotmail.com into one domain? VA Linux combine Slashdot, valinux.com, SourceForge, NewsForge and AnimeFu? (Or whatever; I forget who owns what.) Of course not! That would be a hassle to users and admins alike. A just plain silly notion, unrealistic and noxious to everyone involved with the Internet.

    • I think you need a global registration system, something analogous to the Yellow Pages, plus a localized system of shortcuts, something analogous to a Rolodex.

      Rather than a hierarchical naming system where you end up with names like joes-bakery.com, a particular Joe's Bakery would register with its address, company name, products, services, trademarks, etc. The first time you want to find Joe's Bakery, you need to provide enough information to identify the business. After that, the IP address can be locally cached with the registry information and "joes" could be a shortcut if it is a unique match.

      If you want the same shortcuts for a group of people, you should be able to have a secondary cache on a common server.

      There does not need to be a single global registry. There could be different registries with different performance characteristics and different accuracy of information. This way anybody can register anybody, and the market and personal preferences can decide rather than ICANN selling monopolies.

    • I've been kicking around the idea for a decentralized system based on cryptographic keys and web-of-trust. If a name doesn't resolve in explicitly or implicitly trusted records, it would fall back to BIND and finally to untrusted records.

      The biggest hurdle to implementing such a system is the learning curve for the cryptographic APIs of the languages I'd want to use. There is not a wealth of information on such APIs to begin with. The next biggest hurdle, of course, is that if it were developed inside the US, it'd probably be considered an act of terrorism to ship it outside the US.

    • Anything with conflicting name resolutions fails the "hassle-free email" test. I just want to send an email to an existing email address, I do NOT want to be hassled with trying to figure out some "context". Just think of all the existing code this would break!
  • Oh how much the world would be a better place if these technologies were implemented!
  • by miguel ( 7116 ) on Wednesday November 14, 2001 @02:24AM (#2561908) Homepage
    This time I will be prepared.

    I am downloading as we speak all the DNS records in the planet into my /etc/hosts file so I can be immune to the attacks

    I encourage others to do the same.
  • ICANN would like to replace the root DNS systems with secured servers.

    Ok, how long before someone at ICANN suggests that the servers should maintain domain-to-IP mappings in static files? Something like a file called hosts that could be stored in /etc. Then a patent would be granted for "a static internet address to domain name mapping system" and "a static domain name to internet address system".

    Sorry, I'm just in a sarcastic mood given the fact that they actually use bind. Does anyone find that a little scary?

    I know it's been brought up here on /. before, but there are many people who run their own DNS roots, underground dns if you will. Anyone have any links?
  • That's news to me. I always thought Network Solutions or whoever runs the other root name servers had their own proprietary and more robust and scalable DNS software.

  • Just an idea I had been mulling over. If the major search engines recorded the static IP addresses of the sites they indexed, then all we would need is the static IP addresses of the search engines loaded in our browser or hosts file.


    Not a complete solution, but it would be enough to keep the net going if DNS went down.

  • by kingdon ( 220100 ) on Wednesday November 14, 2001 @03:52AM (#2562177) Homepage

    Is it my imagination, or is ICANN actually working on getting their job done rather than engaging in horribly complex politics (more complex than needed to solve the problem) or trademark/legal craziness? There's some background at the page of the ICANN DNS Root committee [icann.org].

    Now, I'm pretty skeptical that a closed source DNS server from Register.com is going to be a big part of the solution, but even that I don't really mind so much. Having a few alternatives is good if for no other reason than helping to keep BIND from stagnating.

    The article didn't talk much about DNSsec [nlnetlabs.nl] (or this older page [toad.com]) which has got to be part of the solution (to try to give the 10 second summary, when a client makes a DNS query and gets a response, it is kind of tricky to ensure that the response is really from the correct server, and DNSsec uses crypto to solve this and other problems).

  • 9 days of DNS hell (Score:2, Interesting)

    by Jeffv323 ( 317436 )
    This [sans.org] is something I was just looking at... very interesting, shows what techniques have been used to hijack domains.

  • $ ping www.slashdot.org
    PING slashdot.org (64.28.67.150): 56 data bytes


    That's 64.28.67.150!! Start memorizing now!!!
    • With you pinging it! Stop that!

      Seriously, though, that works well when you've got one box sitting out there, but a lot of services install round robin DNS with multiple servers for load balancing. Try "dig yahoo.com" or lycos or google, for example. Socks3 here at work consists of about 9 servers, only three of which seem to work with any reliability.
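The round-robin behaviour described above is easy to see from any client. A minimal Python sketch, assuming a hypothetical `all_addresses` helper; "localhost" is queried so it works without network access, but try yahoo.com or google.com to see several records come back:

```python
import socket

def all_addresses(name):
    # getaddrinfo returns one tuple per (family, socktype) pair per
    # address; collapse them to the distinct IPs behind the name.
    infos = socket.getaddrinfo(name, 80, type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Round-robin services return several A records here; localhost is
# used only so the sketch runs offline.
print(all_addresses("localhost"))
```

Memorizing one IP from the list, as the grandparent suggests, would get you only one of the balanced servers.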

  • by evilviper ( 135110 ) on Wednesday November 14, 2001 @04:16AM (#2562234) Journal
    The answer is simple, just ask the author of IPF how he did it...

    Change the BIND license to make it much more restrictive, then sit back as the OpenBSD developers build their own simpler, better, more stable, and much more secure, replacement.

    SSH.
    IPF.
    BIND?
  • This is clearly just a ploy to establish iron-fisted control over the internet. What is more likely to be for the best in the long run is an extensible, completely open holographic DNS schema distributed across each client. That the problem of DNS and the fecundity of P2P have not been mated seems to me absurd.

    This is the difference between hackers and bureaucrats in a nutshell. Centralized control over resilient sophistication. God damn each of those bimbo sellout engineers for their short-sightedness. If I had one ounce of say, one chance at effecting or affecting the logical and liberty-enhancing solution I mention above, I could consider my life more or less complete. (And I'm a card-carrying member of ICANN at-large, dammit, and so much closer to such a goal than the bulk of you all!!) This is surely going to be the doom of the net as we know it.

    How long before the governing body (ICANN) of such a rigid and authoritarian system becomes a mere appendage of one of the big players (IBM, AOL, MSFT)? That ICANN is already rotten with corruption is apparent to almost everyone, but what I am asking is how long before even the lip service is discarded? I am aghast at the thought of a monopoly on basic existence that moves such as this threaten.

    This is a call to arms. Anyone involved with open DNS or P2P should reply to this thread or email me at this address [mailto] to discuss superseding such insidious and freedom-wrecking evil as presented in the parent story.

    Thank you.
  • by Jordy ( 440 ) <.jordan. .at. .snocap.com.> on Wednesday November 14, 2001 @05:22AM (#2562397) Homepage
    Reading this article, I have to start wondering if maybe I'm misunderstanding the problem.

    The actual root servers are only queried for the top-level domains, and while they have rather massive databases, the types of queries they get are limited.

    Now, I'm going to assume that given all the money collected for domains, there somewhere exists a nice pot of money available for running root DNS servers. If there isn't then something is seriously wrong with the administration of DNS.

    Segmentation of the actual root servers from the world by utilizing a front-end dns cache that would rewrite the actual DNS queries would solve a lot of problems.

    First, rewriting queries would allow an amazing amount of sanity checking to be done on the query itself and should prevent exploiting the back-end root servers directly.

    Second, as front-end dns caches can be extremely simple and require almost no configuration, the OS installation can be absolutely minimal, excluding even shells. You could go as far as to use an OS that allowed you to revoke system privileges such as certain syscalls (fork, exec, open, etc aren't all that necessary once everything is running) and even make the caching DNS server run as init (though you must have something to bring up networking interfaces.)

    Physical segmentation is obviously important as well, so a private backbone strung between all core root servers and a separate interface on each front-end cache to access them would help quite a bit.

    Of course then comes the issue of DoS attacks which again should be rather easy to solve considering what we are talking about. Just buy a lot of front-end cache systems. You would think given how important root servers are and how much money domain revenues generate, buying a thousand or even ten thousand machines and sticking them in every major network access point wouldn't be all that big of a deal.

    Now you still have to deal with the fact that most DNS servers still have a static list of root server IPs. Thankfully, the simple DNS queries that hit root servers can be done with a single UDP packet request and response (until you have to work up the hierarchy) making them prime targets for one of the many clustering solutions out there from simple IP sharing virtual servers to routing protocol tricks.

    Of course, I may be oversimplifying the problem.
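The query-rewriting and sanity-checking idea above can be sketched in a few lines. A hedged Python example, not the poster's actual design: validate the fixed 12-byte DNS header (RFC 1035 layout) before a query is forwarded to a back-end root server; the `sane_query` name is an assumption:

```python
import struct

def sane_query(packet):
    # Sanity-check a raw DNS query before it reaches the back-end
    # root servers. A real front-end would also parse the question
    # section; this sketch stops at the fixed 12-byte header.
    if len(packet) < 12:
        return False
    ident, flags, qd, an, ns, ar = struct.unpack("!6H", packet[:12])
    if flags & 0x8000:                 # QR bit set: a response, not a query
        return False
    if (flags >> 11) & 0xF != 0:       # only the standard-query opcode
        return False
    if qd != 1 or an != 0 or ns != 0:  # one question, no answers/authority
        return False                   # (additional left alone: EDNS uses it)
    return True

# A well-formed header: random ID, RD flag set, one question.
good = struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 0)
print(sane_query(good))          # → True
print(sane_query(b"\x00" * 4))   # truncated packet → False
```

Anything that fails such checks never touches the back end, which is the point of the segmentation.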
    • Your comment was worth reading and is better than the others earlier in the thread (djbdns is trying to make cash on people's misunderstanding - and especially goes against the "open source" thing)

      I'm not sure if most people posting to this and other articles understand why dns is the way it is.

      The whole business about the "security" flaws is twofold:
      1. people don't patch their servers because they don't stay on top of things.
      2. most dns servers are not locked down properly (especially those of you using at&t's, worldcom's and other large telcos' dns) against zone transfers, which allow hackers to find out what you've got.

      DNS is a distributed database with a small lookup latency - this is very different from Oracle, LDAP and other structures. DNS is redundant and is designed to tolerate broken branches (this goes back to America's cold war days - even though bind is not that old!). The network, the data, and the redundancy ARE segmented - have you ever noticed that the root servers never came down, even for a massive virus? Most dns outages come from your local ISP's caching dns, which could be running an old version of bind (a single-threaded mess).
  • Common implementations of DNS and even the protocol itself have quite a number of flaws which make DNS spoofing rather easy. DNS spoofing is targeted at the clients, and the root servers have nothing to do with it, so you can't solve this problem at the root servers. DNSSEC won't solve it completely either because no one expects clients to move to DNSSEC anytime soon (you don't install full resolvers on clients, either).

    In addition, DNS database entries are occasionally wrong (although the servers are operating correctly), due to maintenance errors or social engineering attacks. Security on the root servers or even DNSSEC does not address this problem at all.

    So the best solution is not to base any authentication on DNS names at all. (Then there's hardly any need for DNSSEC either.) Of course, quite a few Internet users rely heavily on the non-existent DNS security. They fetch mail using unencrypted POP3, use HTTP-based mail solutions, and so on, and if someone is able to redirect their connections as a consequence of DNS spoofing, he can obtain their passwords pretty easily. But reasonably secure solutions (e.g. TLS and server certificates) already exist.
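As a concrete illustration of that last point: a TLS client with certificate and hostname verification enabled fails its handshake when DNS spoofing redirects the connection, instead of handing a password to the impostor. A minimal Python sketch of such a client context:

```python
import ssl

# A default client context verifies the server certificate against the
# trusted CA set and checks the certificate's name against the hostname,
# so a spoofed DNS answer pointing at an impostor server fails the
# handshake rather than silently capturing a POP3 or webmail password.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # → True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
```

The authentication here rests on the certificate, not on the DNS answer, which is exactly the parent's recommendation.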
  • by chrysalis ( 50680 ) on Wednesday November 14, 2001 @07:21AM (#2562676) Homepage
    100 GB hard disks are cheap nowadays, and almost all OSes support >2 GB files. So securing the DNS from the roots up is simple: have a local /etc/hosts file with all existing hosts.
    Then, subscribe to a mailing list that sends daily changes, so that you can keep your /etc/hosts file up to date.
    Ehm... yeah. You first have to secure mail to do this.

    • have a local /etc/hosts file with all existing hosts.

      Well, it worked for FidoNet. The FidoNet nodelist was essentially a huge /etc/hosts file with compressed diffs sent out once a week. Fortunately FidoNet started to shrink (due to the rise of the TCP/IP internet) just as the nodelist was starting to get really unmanageable.

    • have a local /etc/hosts file with all existing hosts


      You may joke, but on a small scale, it works. I have all of our production servers set up to use local hosts files. They don't need to know about anything outside of our production network, which is small enough and static enough that we simply don't need DNS, so we don't use it. There is no DNS traffic on our production network, and we're not vulnerable to DNS security flaws. On the rare occasions when we need to make changes, a simple script copies the new hosts file to each server with scp.
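A small-network setup like the one described above is trivial to script. The inventory and the scp push below are assumptions standing in for the poster's real network; the point is only that a hosts file is easy to generate from a single source of record:

```python
# Hypothetical production inventory; not the poster's actual hosts.
INVENTORY = {
    "10.0.0.10": "web1",
    "10.0.0.11": "web2",
    "10.0.0.20": "db1",
}

def render_hosts(mapping):
    # Emit an /etc/hosts file: loopback first, then one line per host.
    lines = ["127.0.0.1\tlocalhost"]
    for ip, name in sorted(mapping.items()):
        lines.append(f"{ip}\t{name}")
    return "\n".join(lines) + "\n"

print(render_hosts(INVENTORY))
# The copy step the poster mentions would then be roughly:
#   for h in web1 web2 db1; do scp hosts $h:/etc/hosts; done
```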

    • You first have to secure mail to do this.

      Actually, using secure mail would get tiresome, don't you think? What is needed is a mail user agent that will simply take the incoming mail, run it as root, and modify/add whatever files are necessary without admin or user intervention. Now THAT would be a time-saver, huh?
  • Poorly edited, poorly written. What was his conclusion anyway? Maybe I'm looking for too many technical details, but ending with "diversity improves security" implies that the solution is simply to replace *some* BIND servers with other servers. Yeah, that should work. Duh.

    He went on to argue that "most security holes are due to buggy software. All the cryptography in the world is not going to change the buggy software problem."

    In my experience, most security holes are caused by careless or ignorant users. Even if you take all the bugs out of all the software, there are still going to be security holes. It's like the locked doors at work: secure entrances are pointless if you hold the door open for the guy behind you (and you don't know the guy behind you).
  • it's quite funny actually. DJB gained so much by creating qmail that when he released djbdns, users blindly flocked to it, expecting it to be void of security holes.

    the biggest problems with DNS on the internet have NOTHING to do with the software used. the protocol itself is quite insecure- and what's worse is that this isn't news!

    one thing that certainly needs to change is this silly concept of recursive-resolvers; they change the responses, and thus it's next to impossible to determine which is the "Real" resolver.

    thanks to sequence prediction, and because DNS servers/clients don't have any "other" protection, it's quite trivial to smash or alter someone's dns tables (during a zone transfer), or redirect users someplace else (when doing recursion).

    what we need is a cryptographic method of "signing" requests. root nameservers should maintain keys in addition to NS rrs. And what bind calls "root hints" should contain the keys of the root nameservers. this way, we can digitally sign responses so that their authenticity can be verified. moreover, if packet-space is limited (even though most queries should have a hundred bytes or three free) we could always just store a hash of the signature. but that's getting too far into implementation.

    the basic point is that we need something BETTER than dns... not just new software, but a new design...

    and plus, by implementing crypto into the name services, we'll be able to finally keep the french off the internet.

    (for those of you lacking any kind of crypto-political background: the french aren't allowed ANY cryptography.... and you thought US export control was bad!)
  • I like that they mentioned that diversity results in more security (at the very end of the article). This is one of the major problems with Microsoft products: they only make two operating systems, so when a bug is found in one or both of them, the whole world goes down from some email script virus that a child can write. Under the alternative Linux and BSDs, there are differences between the distributions and even between installations, resulting in big headaches for would-be virus writers. (Sure, this also results in headaches for developers, but who said that making software is easy? Yeah, developing is allegedly "easy" under Microsoft platforms, potentially saving your business big dollars in R&D, but that money gets thrown away on the inevitable repairs necessary after some k1ddy in Congo or something manages to deploy a virus.)

    So, like the article says, diversity improves security. In my opinion, each site should choose the best system for the job and configure it to do that job well. If you end up with 10 different platforms and operating systems, so be it.

    Oh well.

    Oh yeah, so what I was trying to say is that not only the operating system, but also the software running on it, should be diverse and come from as many different sources as possible. I would even say that if you run several machines that perform the same job, perform that job with different software on the several machines. This way, when one gets cracked, the others continue to work (at least for a while).
