
Root-server switches from BIND to NSD 264

A Sorry End writes "It appears that one of the 13 root-servers, the core of DNS name resolution, has moved away from BIND to NSD as of Wednesday, February 19th, 2003, which is a Good Thing. Since the 26th of October 1990, all root-servers have been running BIND. According to this message, this change was designed to increase the diversity of software in the root name server system, the lack of which is widely considered to be a potential vulnerability. The NSD software has been designed from scratch specifically as an authoritative name server. It has no design commonalities with BIND, the currently prevalent DNS implementation. In addition, NSD provides a significant increase in the performance reserve of k.root-servers.net. NSD was developed at NLnet Labs in cooperation with RIPE."
This discussion has been archived. No new comments can be posted.

  • Hehe (Score:2, Funny)

    by zapfie ( 560589 )
    Well.. I guess they're not in a BIND anymore!

    ...god, that was the worst joke ever. Someone shoot me.
    • Re:Hehe (Score:3, Funny)

      by DonkeyJimmy ( 599788 )
      ...god, that was the worst joke ever. Someone shoot me.

      I would shoot you, but I can't find you because your name isn't resolving for some reason.
    • Re:Hehe (Score:3, Funny)

      by teamhasnoi ( 554944 )
      However, the admins of the new servers seem to have NSD cold.

      I may have taken that bullet for you...

  • Diversity? (Score:3, Funny)

    by Anonymous Coward on Tuesday February 25, 2003 @01:04PM (#5379629)
    Man, my company would be majorly pissed off. We don't want diversity, we want conformity!! All systems should be running one OS for ease of administration and that OS should be Windows2000. Thankfully I'm offsite and use Linux. ;-)
  • So how secure is it? (Score:5, Interesting)

    by modemboy ( 233342 ) on Tuesday February 25, 2003 @01:06PM (#5379645)
    Anyone familiar with NSD care to comment on how secure it is? Are we diversifying just for the sake of diversifying or is it as secure as BIND?
    • by Anonymous Coward on Tuesday February 25, 2003 @01:08PM (#5379669)
      Because BIND is a rock of stability.
    • by johnnyb ( 4816 ) <jonathan@bartlettpublishing.com> on Tuesday February 25, 2003 @01:29PM (#5379850) Homepage
      Diversifying for the sake of diversifying is still useful. If person A finds a flaw in one of the two systems, the rest are still functioning. This requires an attacker to have exploits for all systems, not just one. The diversity itself is a barrier.
      • As has been said in other posts, it depends on the deployment and on the attack's purpose.

        If the attack's purpose is a DoS, software diversity helps prevent your whole system being killed by a single exploit. But if the purpose is to crack a machine on your network to run some trojan and/or spyware, software diversity only means that the attacker has more chances to find a hole.

        Now, it would be different if they diversified the CPU, since most of the exploit code around is platform-dependent: keeping some Alphas alive to run some of the root DNS servers would be wise from a security POV (although maybe not from other POVs).

        Thinking of it, it would be nice if compilers could generate (randomly) different - but working - binary code from the same sources. You would have a single source to scrutinize for security holes, but generating different binaries on different critical machines would limit the risk of monoculture.

        • " Thinking of it, it would be nice if compilers could generate (randomly) different - but working - binary code from the same sources."

          Actually, you can through compiler switches.

          However, this doesn't help any. It may mean that someone may need to write multiple versions of the same exploit, but the exploit will be there in all versions.

          Even in the case you mention - cracking a machine - you still have only 1 machine cracked versus the whole bunch. At least afterwards you would have at least 1 known-good copy of the data.
    • by b!arg ( 622192 )
      Diversifying for the sake of diversifying is good in this sense. Let's think back to 7th grade biology and natural selection. If the genetic makeup of all animals of a species were the same, then one disease could kill off the entire species. But diversity in the genetic code makes it possible for the species to survive that traumatic event. Some animals will get killed off, but others that have a way to fight it won't. Now replace "genetic code" with "OS", "disease" with "hacker" and "species" with "network" and you've got yourself a concept.
  • Open Source (Score:5, Informative)

    by Greedo ( 304385 ) on Tuesday February 25, 2003 @01:06PM (#5379648) Homepage Journal
    From their site:
    NSD is an authoritative-only, high performance, simple and open source name server.
    But further down:
    The betas and releases of NSD are distributed under freeware BSD license, however we require the alpha testers to:
    • test the software within reasonable timeframe
    • provide NLnet Labs with feedback and bug reports in a timely manner
    • not disclose the source to any third party without NLnet Labs concent [sic]
    • destroy obsolete versions of the alpha code on request
    So, I'm wondering, when this comes out of beta, will it still be open source? Running diverse software on the roots is probably a Good Thing, but security through obscurity isn't, so I hope they aren't trading one kind of vulnerability for another.
    • Re:Open Source (Score:5, Informative)

      by gmuslera ( 3436 ) on Tuesday February 25, 2003 @01:13PM (#5379713) Homepage Journal
      You said that... "betas AND releases" are under BSD license.
    • Re:Open Source (Score:2, Interesting)

      by dainkenkind ( 562928 )
      Although it has been beaten to death here at Slashdot, and although it may be an unpopular ideology here, open source does not necessarily equal fewer bugs, and the converse is also true. What is definitely good, though, is diversity, which will make taking down all of the authoritative name servers with a single exploit much more difficult. The fact that it is not at all based on BIND code will hopefully mean that it isn't vulnerable to any of the same attacks, allowing us to resolve names all day long without worrying about having to use those nasty binary octets to connect to our favorite pr0n hosting servers.
    • Re:Open Source (Score:4, Insightful)

      by arvindn ( 542080 ) on Tuesday February 25, 2003 @01:30PM (#5379862) Homepage Journal
      (slightly OT)

      You brought up a very good point. "Open source" which you're not allowed to disclose.

      Which is why RMS asks us to distinguish between Free as in freedom and open source as in buzzword compatible.

      IMHO, what the license restriction probably shows is that deep down, they really believe that fewer eyes on the source == better. They probably want to achieve some kind of "middle ground" (shared source, anyone?) with bug fixers getting to look at the code but not "hackers".

      Not to be too harsh, just bitter that they follow the letter but not the spirit of open source.

      • I 'spect that they're trying to ensure that any bugs in the alpha version don't get a chance to live on in stale copies or in somebody else's code. Makes sense that they would want as few bugs as possible with their name on them.
    • Re:Open Source (Score:5, Informative)

      by copterdoc ( 95272 ) on Tuesday February 25, 2003 @01:37PM (#5379916)
      You can find the source here:
      http://www.nlnetlabs.nl/nsd/index.html
    • Re:Open Source (Score:5, Informative)

      by Anonymous Coward on Tuesday February 25, 2003 @01:48PM (#5379992)
      nsd will remain open source.

      Daniel Karrenberg
      daniel.karrenberg@ripe.net
    • Re:Open Source (Score:5, Insightful)

      by babbage ( 61057 ) <cdeversNO@SPAMcis.usouthal.edu> on Tuesday February 25, 2003 @03:46PM (#5381058) Homepage Journal
      Running diverse software on the roots is probably a Good Thing, but security through obscurity isn't

      Man this is such a false meme, where did it get started? Obscurity by itself is questionable security, but as a component of a multi-layered security strategy it's perfectly reasonable.

      • Security by obscurity is your world-readable /etc/passwd file, with the password data either hashed (obscured) or moved to the shadow file (also obscured). (And if your shadow password file isn't world readable, that's just more obscurity.)

      • Security by obscurity is the fact that most people don't have the names & addresses of the personnel running the US military's nuclear weapons systems so that these people can't be blackmailed. Maybe these people can be trusted not to betray their country under torture and such, but keeping their identities non-public -- an obscurity measure -- is important too.

      • Security by obscurity is Dick Cheney's "undisclosed location" (*cough* [boston.com] Greenbrier [krusch.com] Resort [gettingit.com], White [atomictourist.com] Sulphur [wunderground.com] Springs [unitedcountry.com], West [google.com] Virginia [google.com])

      • Security by obscurity is restricting access to your company's co-location facility, so that untrusted people can't get physical access to your equipment.

      In short, in a broad sense, "security by obscurity" is a lot of good ideas, when you think about it. Any of these ideas can be an Achilles heel, but the solution there is not to cut off the heel altogether, but to wear sensible shoes when going out in the wilderness :)

      To get back to the original topic, obscurity is a perfectly good tactic for the people running these DNS servers as part of their overall strategy for protecting the system. It's perfectly reasonable for certain aspects of their systems, processes, etc to be kept on a need to know basis. Sure, there is a benefit to keeping software source open as a security measure, though the benefit of doing that is debatable (and no, I'm not going to be the one to debate it -- I agree that it's generally a good idea but can understand some of the objections). But in this case, where the software is a black box to the outside world, and it's explicitly *not* meant for general DNS use (it's meant for authoritative servers only!) I don't see any particular harm in keeping their doors locked down pretty well.

      Not that they're doing that in the first place. As another reply noted, you yourself write that both the betas & releases will be available under a BSD-style license :-)

      But moreover, your objections are, I think, misplaced -- as are those of most people who blindly parrot the "obscurity is bad" meme. Think about what you're saying -- it really doesn't hold up to scrutiny.

      • In your argument you've considerably broadened the definitions of both security and obscurity beyond how they are generally used in this context. Amongst other things, according to your logic any sort of encryption at all is 'security through obscurity' because technically the information is obscured. That is clearly absurd and not the intent of the phrase.

        Note that I happen to agree that in some cases obscurity can be a reasonable additional layer of security, but the essence of your point is, dare I say it, obscured by your changes in definition.
        • What can I say, it doesn't take much to reduce an already absurd slogan to more obvious absurdity :-)

          My real point, which I think you're well aware of, is that pat, convenient slogans like this are often too simplistic, and there's a danger that by taking them to heart you're taking away the wrong lesson.

          By broadening the definition, my hope is that people will think a little more about parroting things like this and consider that, in this case, security by obscurity *isn't* per se a bad thing. It has a place, a role, and proper ways to apply it. Having it as the first & only line of defence usually isn't one of the proper ways, but as part of a balanced security plan it can fit in very effectively.

  • So previously, a vulnerability in one piece of software would allow the whole system to crash or otherwise be compromised. Now a vulnerability in one of two pieces of software will allow part of the system to be compromised. If the only risk were lack of service, this would be a good thing. However, the risk also includes providing malicious service. I could see some people wanting to redirect all DNS queries so that the result would point to some site of questionable virtue. Doing so may have just been made easier.
    • by cmburns69 ( 169686 ) on Tuesday February 25, 2003 @01:13PM (#5379718) Homepage Journal
      But previously if you learned of a BIND vulnerability, you could hijack ALL of the root servers, redirecting 100% of requests to your site. Now, if there is a single vulnerability in either system the hijacking could only affect a portion of the system, not the entire internet.

      An online Starcraft RPG? Only at [netnexus.com]
      • But previously if you learned of a BIND vulnerability, you could hijack ALL of the root servers, redirecting 100% of requests to your site.

        I'm not sure that is exactly the attack everyone is concerned about. More likely they'd point the firehose at, you know, someone else's site... I'll leave it to the trolls to suggest which sites.
    • Doing so may have just been made easier.

      How does going to a different DNS server make this easier now? Are you saying that NSD somehow makes this easier? Is this a known vulnerability in NSD?
      • by cduffy ( 652 ) <charles+slashdot@dyfis.net> on Tuesday February 25, 2003 @01:29PM (#5379848)
        The argument isn't specific to NSD, but rather general to all cases where a wider array of software is deployed to avoid a monoculture for security reasons: In places where previously BIND needed to be compromised to effect an attack, now either BIND *or* NSD may be compromised. That's not to say that such an attack is necessarily easy with NSD, or that it has known vulnerabilities -- simply that *if* a vulnerability is discovered in one of *two* packages, this can be translated into such a larger attack (rather than there being only a single point of vulnerability).

        Running a wider array of software on the root nameservers is still almost certainly a Good Thing, and decreases the probability that all of the servers will be prone to any given vulnerability -- but also increases the probability that a vulnerability will be found such that some subset of the servers is prone.
        • Running a wider array of software on the root nameservers is still almost certainly a Good Thing, and decreases the probability that all of the servers will be prone to any given vulnerability -- but also increases the probability that a vulnerability will be found such that some subset of the servers is prone.

          Unless you can argue that there is a finite number of potential vulnerabilities, and that the number is sufficiently small that they are well within the capacity of attackers to exhaustively exploit, I'm not sure how well your logic bears out. This multiplicative math only works with known/finite probabilities.

          It could well be argued that there is a near-infinite supply of potential vulnerabilities, and that X cracking effort yields N holes. In this scenario the overall chance of any single compromise doesn't increase with the diversity of products. But the susceptibility of the entire system to any given attack does decrease. That makes this change a win.
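          The tradeoff the posters above are weighing can be put in toy numbers. A minimal sketch, assuming a purely hypothetical probability p that a given package contains an exploitable flaw, and that unrelated codebases never share the same flaw:

```python
# Toy model of software monoculture vs. diversity. Assumes each
# package independently contains an exploitable flaw with hypothetical
# probability p, and that distinct codebases never share a flaw.

def p_total_outage(p: float, n_packages: int) -> float:
    """Chance that a SINGLE flaw exposes every server at once.
    That requires all servers to run the same code, i.e. a monoculture."""
    return p if n_packages == 1 else 0.0

def p_partial_outage(p: float, n_packages: int) -> float:
    """Chance that at least one package is flawed, i.e. that SOME
    subset of the servers is exposed."""
    return 1 - (1 - p) ** n_packages

p = 0.1  # hypothetical per-package flaw probability
print(p_total_outage(p, 1), p_partial_outage(p, 1))  # monoculture
print(p_total_outage(p, 2), p_partial_outage(p, 2))  # two packages
```

          With p = 0.1, moving from one package to two drops the chance of a total outage from 0.1 to 0, while the chance that some subset is exposed rises from 0.1 to about 0.19 -- exactly the tradeoff described above.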

          • I am inclined to argue that there is a finite number of vulnerabilities, and that the number of them which can be reasonably and profitably located and exploited is particularly limited.

            Simply put: The set of possible inputs to a block of code (excluding cases where input lengths are unbounded -- something DNS software should never allow) is finite; hence, there is a finite set of correctly-handled cases, a finite set of error cases which are gracefully handled, and a finite set of error cases which are incorrectly handled. That last set can indeed be located, and if the expense is warranted can even be eliminated and proven to be so.

            As for the argument that the supply of potential vulnerabilities is near-infinite, I am disinclined to believe this so long as certain reasonable assumptions regarding the operating environment can be made. If you care to put forth an argument, however, by all means feel free to do so.
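            The finiteness claim can be demonstrated in miniature. The buggy parse_label_length below is a made-up stand-in (not real DNS code): because its input is a single octet, the entire input space can be enumerated and every mishandled case located by brute force:

```python
def parse_label_length(octet: int) -> str:
    """Toy classifier for a DNS-style length octet (deliberately buggy)."""
    if octet <= 64:       # BUG: the upper bound should be 63
        return "length"
    if octet >= 192:      # top two bits set: compression pointer
        return "pointer"
    return "error"        # 64..191 is reserved

def spec(octet: int) -> str:
    """What a correct implementation would return."""
    if octet <= 63:
        return "length"
    if octet >= 192:
        return "pointer"
    return "error"

# The input space is finite, so every incorrectly handled case
# can be found exhaustively -- no probabilistic argument needed.
mishandled = [b for b in range(256) if parse_label_length(b) != spec(b)]
print(mishandled)  # -> [64]
```

            Real parsers take longer inputs, so the enumeration is bigger, but the set of mishandled cases remains finite as long as input lengths are bounded.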
  • by hpulley ( 587866 ) <hpulley4&yahoo,com> on Tuesday February 25, 2003 @01:07PM (#5379656) Homepage

    Having no diversity means you are ripe for an epidemic.

    • Sounds like current world banana crops, which are genetically identical for all intents and purposes (since they are reproduced asexually). Already there is fear of a disease that will wipe out world banana production. Fun stuff!

      Yes, this is OT, and I like it!
  • by GeorgeK ( 642310 ) on Tuesday February 25, 2003 @01:08PM (#5379665) Homepage

    As of last year, Verisign has been running ATLAS, instead of BIND, for DNS. See the story here [nwfusion.com].

    • by Greedo ( 304385 ) on Tuesday February 25, 2003 @01:19PM (#5379770) Homepage Journal
      VeriSign is replacing an open source software package called Berkeley Internet Name Domain (BIND) with its own proprietary technology. Dubbed ATLAS, for Advanced Transaction Look-up and Signaling, VeriSign's proprietary software will be installed in its 13 DNS server sites around the globe this summer and will go into production mode in the fall.

      Well, I guess one of those 13 server sites (I assume they mean the roots) isn't running ATLAS now, is it?

      And again with the proprietary software! Verisign has a bad enough reputation already. Now they expect us to trust the security of their closed software ... great.
      • by Matty_ ( 74368 ) on Tuesday February 25, 2003 @02:03PM (#5380140)
        The server which is getting the new software is k.root-servers.net, which is managed by RIPE in London, UK. It handles DNS for the "." (root) domain.

        Verisign does not run this server, but they do have their own DNS servers which handle DNS for TLDs such as "com" and "net" -- and those are totally separate from the root servers. I am sure all of those systems are still running ATLAS.

        (Of course, if I recall correctly, VeriSign does manage one of the 13 root servers. I think it is a.root-servers.net, but I may be wrong.)
      • Well, I guess one of those 13 server sites (I assume they mean the roots) isn't running ATLAS now, is it?

        I assume that you assume incorrectly.

        There are also 13 gTLD servers, in addition to the 13 root servers: [a-m].gtld-servers.net are authoritative for the .com and .net gTLDs. Interestingly, it looks like the root servers are authoritative for .mil? Odd.

        Verisign apparently also has [a-g,l-m].nstld.com, which are authoritative for .org, .edu, .gov, and [a,f,g,l].nstld.com share authority of .name with ns[1-3].nic.name.
  • Dying? (Score:5, Insightful)

    by ricbasto ( 203726 ) on Tuesday February 25, 2003 @01:08PM (#5379670) Homepage
    I don't think the plan is to migrate away from BIND, but instead to protect the root-servers from a BIND-specific exploit.
    It will take years for BIND to lose its market share.
  • by phorm ( 591458 ) on Tuesday February 25, 2003 @01:09PM (#5379682) Journal
    If a lot of /.'ers are like me, then there are probably a lot of people who don't know the technical differences between BIND and NSD. Can somebody whack us with the proverbial cluestick as to the improvements of NSD over BIND (exempting of course, the mentioned fact that NSD was built from scratch).

    I wonder what anomalies, if any, we may see from switching over. Also, as the article mentions that NSD has no design commonalities with BIND, I wonder how many of the tech personnel are knowledgeable about the new system... not that a nameserver, even a root one, should be overly complicated (except for evading DoS attacks)
    • by etcshadow ( 579275 ) on Tuesday February 25, 2003 @01:19PM (#5379773)
      Well, the biggest difference has got to be that the two are built for completely different purposes.

      BIND is a general purpose name server for use anywhere in the hierarchical dns scheme. That is, in simplest terms, it accepts requests from below, and either serves them or passes the query up (hierarchy = tree).

      NSD, according to what is being said, is for *authoritative servers only*. That is, it only serves requests; it never passes them up (because it only runs at the root nodes). It may be true that they intend to make it a general purpose name daemon in the future, but at least for right now, it simply does not do all of the different things that BIND does. One might guess that, because it does fewer things, it does them better, but I sure as hell don't know that to be the case.
      • by phorm ( 591458 )
        my guess would be... the fewer extraneous things it does, the better. As long as the root nameserver serves out addresses, it is accomplishing its purpose. If BIND does things above and beyond that, it's just allowing more ways to possibly break or slow down the system.
      • Authoritative servers run at more than just the root servers; every domain would have two or more of them. What NSD isn't used for is client resolution, so you wouldn't point your desktop machine at an NSD server.
    • by Anonymous Coward on Tuesday February 25, 2003 @01:37PM (#5379917)
      If you download the source tarball from the NSD site linked in the article and expand it, you'll find a DIFFERENCES document. It's a summary of observed differences between BIND 8.2.2-REL and NSD 1.0.1 written by Daniel Karrenberg at RIPE.

      I'm scanning through it right now, and it looks like the main differences are:
      NSD is Authoritative only. It doesn't pass requests to other servers.
      NSD is quieter, in the sense that if you send it a request which it refuses (like an update), it simply returns a Refused message without echoing back the content of the update request, whereas BIND echoes it. This is considered a weakness in BIND that could make it susceptible to DoS attacks.
      There are a number of different interpretations of the RFCs between BIND and NSD which I don't understand.
      • by Anonymous Coward on Tuesday February 25, 2003 @01:52PM (#5380026)
        We did quite some testing comparing responses to millions of both real world and artificial queries. None of the differences observed are material enough to be noticed by common resolvers, and much less by any applications or even users.

        Daniel Karrenberg
        daniel.karrenberg@ripe.net
  • by $$$$$exyGal ( 638164 ) on Tuesday February 25, 2003 @01:10PM (#5379689) Homepage Journal
    It has no design commonalities with bind...

    This is great, but I wanted to point out that the new software does have design commonalities with bind. The way I see it, they both support the same external interface, but they have different implementations.

    --sex [slashdot.org]

    • This is great, but I wanted to point out that the new software does have design commonalities with bind. The way I see it, they both support the same external interface, but they have different implementations.

      By your definition, Apache and IIS share design commonalities. :)

    • Bingo, the same interface, different implementations. And when they said "has no design commonalities", I will guarantee you they were referring to the implementation. So I fail to see your point... unless you're concerned that the interface itself is insecure, which seems incredibly unlikely to me.
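      The distinction being argued here - one external interface, unrelated internals - can be sketched in a few lines. Both toy servers below are invented for illustration: they answer the same query identically while sharing no implementation.

```python
from abc import ABC, abstractmethod

class NameServer(ABC):
    """The shared external interface: a name in, an address out."""
    @abstractmethod
    def resolve(self, name: str) -> str: ...

class DictServer(NameServer):
    """One implementation: the zone lives in a hash table."""
    def __init__(self, zone: dict):
        self.zone = dict(zone)
    def resolve(self, name: str) -> str:
        return self.zone[name]

class ListServer(NameServer):
    """A second implementation: a linear scan over (name, addr) pairs."""
    def __init__(self, zone: dict):
        self.pairs = list(zone.items())
    def resolve(self, name: str) -> str:
        for n, addr in self.pairs:
            if n == name:
                return addr
        raise KeyError(name)

zone = {"example.com": "192.0.2.1"}
for server in (DictServer(zone), ListServer(zone)):
    print(server.resolve("example.com"))  # identical answers, different internals
```

      A bug in one implementation's lookup logic would not automatically exist in the other, which is the point of keeping the commonality at the interface only.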
  • by Anonymous Coward on Tuesday February 25, 2003 @01:13PM (#5379719)
    Really, look at all the advantages of djbdns:

    * free software, under the BSD license (makes it easy to redistribute binaries)

    * easy package-based installer (easy to find everything, or to install djbdns in different locations)

    * easy to configure with a single config file

    * great support from the author, who's a really friendly guy.

    Oh wait. NONE OF THAT IS TRUE. Never mind.
    • by bourne ( 539955 ) on Tuesday February 25, 2003 @01:47PM (#5379990)

      Oh wait. NONE OF THAT IS TRUE. Never mind.

      You're absolutely right. djbdns doesn't have anything going for it except exceptional security and performance, and why would a root name server need that?

    • It sounds very similar to tinydns. Most of the slides in their presentation [ripe.net] look as though they might have been taken from a tinydns presentation, including:
      • authoritative only makes for simpler software, higher performance, increased security, more robust software
      • load/reload entire db, with very fast load times, and no incremental changes at runtime
      • axfr offband (it is not clear how they do this, but it sounds as though nsd does not do AXFR itself, and neither does tinydns; it can be done better with other programs such as rsync). IIRC, many flames have ignited over tinydns's AXFR support (or supposed lack thereof), and it seems as though the nsd developers chose a similar design.
      One can reasonably ask, if it is so similar, why reinvent tinydns? If it's good that one root server is running nsd, why not implement tinydns on another root server? Or how does nsd differ from tinydns?
    • by radish ( 98371 ) on Tuesday February 25, 2003 @02:33PM (#5380436) Homepage
      :-)

      I recently installed DNS on my local net for the first time (had been making do with hosts files until then). It seemed I saw a new BIND problem every week, so I thought I'd give djbdns a go; it looked pretty straightforward and I like words like "secure". It did take a full evening to install (longer than I had hoped) and yes, that service manager thing is a ROYAL piece of annoying crap, but once it was up and running I've had exactly zero problems with it.

      IMHO djb seems to write some pretty decent code, the apps themselves seem well designed, but he REALLY needs to get with the programme re: installers, and not re-inventing the wheel just because he reckons he can do it 0.001% better than everyone else (svcmgr).
      • by blakestah ( 91866 ) <blakestah@gmail.com> on Tuesday February 25, 2003 @02:50PM (#5380581) Homepage
        not re-inventing the wheel just because he reckons he can do it 0.001% better than everyone else (svcmgr).

        Exactly what is svcmgr replacing that it only does things 0.001% better than?

        Phrased more simply, what exactly is there to check that service daemons are running, and starts them if they are not?

        daemontools not only replaces init scripts; it also does the job of checking that services are still running, and starts them if they are not. It is a very useful daemon - a supervising master daemon to watch all the other daemons, because time has shown that daemons aren't very good at watching themselves.
      • The reason installing djbdns seems difficult is usually that people are used to BIND and djbdns does things differently - different programs to do different things. The other possible reason is that the person installing doesn't really understand DNS.

        It took me some time to understand djbdns, but that's not surprising - learning something new always takes time.

        BIND really isn't easier to me - look at the config files.
  • by Bull999999 ( 652264 ) on Tuesday February 25, 2003 @01:14PM (#5379727) Journal
    What they didn't tell you was that the move was mostly due to affirmative action, to ensure diversity on the Internet. Why do you think that IIS is still hanging around?

    Affirmative action: More than just for humans.
  • by green pizza ( 159161 ) on Tuesday February 25, 2003 @01:18PM (#5379755) Homepage
    AFAIK, the root-servers were using an old, patched version of BIND 8. They never migrated to the much more secure BIND 9 (which, btw, was not vulnerable to the recent BIND security problems).

    That said, their software change sounds like a good idea, but isn't for everyone. There's nothing wrong with BIND 9 and I plan on sticking with it for years to come.

    I don't think I can say the same thing about sendmail...
    • by winkydink ( 650484 ) <sv.dude@gmail.com> on Tuesday February 25, 2003 @01:29PM (#5379854) Homepage Journal
      Huh? I know for a fact that at least one root nameserver has been running BIND 9 since early beta.
    • AFAIK, the root-servers were using an old, patched version of BIND 8. They never migrated to the much more secure BIND 9 (which, btw, was not vulnerable to the recent BIND security problems).

      One of the root servers was moved over to BIND 9 before the final release. Made me kinda nervous, cuz the final release of BIND 9 was somewhat buggy and had several features missing. Pretty effective way to beta test, I suppose, and it seems to have worked out OK.
    • I don't think I can say the same thing about sendmail...

      Only because you stopped running it at 8.6.something, which was about as current as BIND 4.9. If you'd kept up with sendmail like you kept up with BIND, you'd have a different opinion.

      If the last exposure to BIND you had was BIND 4.9 you'd be dissing the current BIND instead of the current sendmail.

      It's all about what you use and what you are comfortable with and knowledgeable about.

      Except for wu-ftpd of course. There's no hope for that pile! ;)
  • by binaryDigit ( 557647 ) on Tuesday February 25, 2003 @01:26PM (#5379829)
    I think they should replace the root DNS servers with an old-fashioned switchboard. I envision a large room in the bowels of VeriSign "manned" by an army of women wearing grey suits with horn-rimmed glasses. A DNS request will come in via pneumatic tube, and the operator will pull one spring-loaded ethernet cable from her console and plug it into the correct corresponding jack.

    While being resistant to any port based DDOS attacks, they would be DOSable by having some hunky dude drink a pepsi outside their window.
    • Not a problem. They don't have Windows. :)

      Mmmmm.. Your dialogue is sounding like one of my fantasies.. You forgot about the part where after work, she takes off her glasses, and lets her hair down, and the clothes come flying off .. hehe
  • Diversity is good (Score:5, Insightful)

    by MojoRilla ( 591502 ) on Tuesday February 25, 2003 @01:34PM (#5379899)
    Competition is a good thing. See Intel vs. AMD, Sony vs. Nintendo, Linux vs. Microsoft.

    For very high reliability software, competition is also used. For example, the space shuttle uses four sets of identical software on four sets of hardware that vote on results, with a fifth set running completely different software waiting to take over if the others fail (see Fastcompany [fastcompany.com] for more details).

    Also, one of the benefits of breaking up Ma Bell was that one company, with one set of software, was no longer running the telephone system in the United States.

    In the long run I think this is a very good thing. In the short run, however, there might be problems.
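    The shuttle-style arrangement - independent implementations voting on every result - can be sketched like this. The three checksum implementations are invented toys (impl_c is deliberately buggy), not anything flight-qualified:

```python
from collections import Counter

# Three "independently written" implementations of one spec:
# the sum of a list of byte values, mod 256.
def impl_a(xs): return sum(xs) % 256

def impl_b(xs):
    total = 0
    for x in xs:
        total = (total + x) % 256
    return total

def impl_c(xs): return (sum(xs) + 1) % 256  # deliberate off-by-one bug

def vote(xs, impls=(impl_a, impl_b, impl_c)):
    """Run every implementation and return the majority answer;
    bail out loudly if no majority exists."""
    tally = Counter(f(xs) for f in impls)
    answer, count = tally.most_common(1)[0]
    if count <= len(impls) // 2:
        raise RuntimeError("no majority -- fail over to backup")
    return answer

print(vote([100, 200, 50]))  # the two correct versions outvote impl_c -> 94
```

    The diversity argument is the same as for the root servers: a flaw in one implementation gets outvoted as long as the implementations don't share it.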
    • Also, one of the benefits of breaking up Ma Bell was that one company, with one set of software, was no longer running the telephone system in the United States.

      Uhh, except that now we have four, and they still haven't gotten rid of the original software. I used to know what it was called, I'm trying to remember - anyone know?
  • by ruiner13 ( 527499 ) on Tuesday February 25, 2003 @01:42PM (#5379954) Homepage
    Wow, all those acronyms are making my head spin. Sigh. What does the NSA think about the DNS servers switching from BIND to NSD? Does it make their TPS reports PDQ? I'd sure hope it uses SQL somehow too.
  • by Neillparatzo ( 530968 ) on Tuesday February 25, 2003 @01:46PM (#5379982)
    Isn't it bad luck to have 13 root servers?

    I mean if you're going to be superstitious to the point of worrying about code diversity or eyeballs-per-source-file, I think this is an issue that needs to be addressed.

    • Only... (Score:5, Funny)

      by devphil ( 51341 ) on Tuesday February 25, 2003 @01:56PM (#5380059) Homepage


      ...if the 14th is named bilbo.root-servers.net, and is added specifically for the purpose of breaking the bad luck.

      Sorry, heavy geek moment there.

    • > Isn't it bad luck to have 13 root servers?

      No, but it's bad luck to be superstitious.

  • by Misuzu ( 591402 ) on Tuesday February 25, 2003 @01:50PM (#5380012)
    Perhaps this is a silly question, but I am curious...

    The article states: "K will answer either using bind8 or nsd".

    How does one go about identifying which software is in use at a DNS server? Is there a piece of data transmitted with the response - like the User-Agent string web browsers send to identify which browser/version made a request?

    Perhaps the article is talking in an abstract manner about how the server will respond, and not in a literal way - and such a feature of DNS does not exist.

    (I can't see how it would NEED to exist, frankly. People want to know the IP address of the name they are looking up - not what piece of software is being used to retrieve it.)

    Nevertheless, I am curious.
    • by Anonymous Coward
      dig @server.example.com. version.bind chaos txt

      dig @server.example.com. authors.bind chaos txt


      Compare results. If you get a response for both queries (including REFUSED and SERVFAIL), then it's probably BIND 9. If you get a response for the first query (including REFUSED and SERVFAIL) and not the second, then it's probably BIND-8. If you don't get a response for either query, it's probably an old version of BIND-4.9.2 (or below), or Microsoft DNS (which is based on BIND-4.9.2 or below).
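      For the curious, the wire format behind those dig commands is simple enough to build by hand. A minimal sketch in Python (the transaction ID is an arbitrary placeholder, not anything from the protocol):

      ```python
      import struct

      def make_chaos_txt_query(name, txid=0x1234):
          """Build the DNS query packet behind `dig <name> chaos txt`.

          Header: ID, flags=0 (plain query), QDCOUNT=1; then the question
          section: the name as length-prefixed labels, QTYPE=TXT (16),
          QCLASS=CH (3, the CHAOS class used for server fingerprinting).
          """
          header = struct.pack(">HHHHHH", txid, 0x0000, 1, 0, 0, 0)
          qname = b"".join(
              bytes([len(label)]) + label.encode("ascii")
              for label in name.split(".")
          ) + b"\x00"
          question = struct.pack(">HH", 16, 3)  # QTYPE=TXT, QCLASS=CH
          return header + qname + question

      pkt = make_chaos_txt_query("version.bind")
      # Send pkt over UDP port 53 to a server you administer, then read
      # the TXT answer (or the RCODE) out of the reply.
      ```

      Sending this to a name server you are allowed to probe, and looking at whether and how it answers, is exactly what the dig incantations above do.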

    • > The article states: "K will answer either using bind8 or nsd".

      Actually, the full quote is:
      : During the cut-over period, K will answer either using bind8 or nsd
      (There is a load balancer in front of a number of machines performing the K-root function.)

      To answer your other question, being:

      > How does one go about identifying which software is in use at a DNS server?

      There are three methods. One is the de facto standard introduced with BIND4.mumble, by which you could send a TXT query for 'version.bind' to the nameserver, and it may answer with the actual version (depending on how the local administrators set it up - ref the BIND documentation for further details).

      Another method is currently going through the IETF [ietf.org] process as draft-dnsop-serverid [ietf.org], and consists of sending a similar query for 'version.server' (ref the draft for further specifics). NSD answers this one, since it is not BIND.

      The third method is analysis of how the nameserver replies to queries. Even between BIND versions there are a variety of subtle differences in the packet that you get back.

      But, we haven't answered the 'why'. One reason is if you are tracking obscure protocol bugs from servers not under your control. Another is purely for local administration and tracking which nameserver is doing what.
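      The "response including REFUSED and SERVFAIL" distinction from the fingerprinting recipe above boils down to reading the 4-bit RCODE out of the DNS header flags. A small illustrative sketch (the RCODE names are the standard ones from RFC 1035; the sample packet is fabricated):

      ```python
      import struct

      RCODES = {0: "NOERROR", 1: "FORMERR", 2: "SERVFAIL",
                3: "NXDOMAIN", 4: "NOTIMP", 5: "REFUSED"}

      def rcode_of(response):
          """Pull the 4-bit RCODE out of the flags word (bytes 2-3) of a
          raw DNS response. Any RCODE at all counts as 'a response' for
          the fingerprinting heuristics; getting no packet back at all is
          what marks the oldest servers."""
          flags = struct.unpack(">H", response[2:4])[0]
          return RCODES.get(flags & 0x000F, "RESERVED")

      # A fabricated 12-byte header with QR=1 and RCODE=5 (REFUSED):
      sample = struct.pack(">HHHHHH", 0x1234, 0x8005, 1, 0, 0, 0)
      ```

      So a script driving the version.bind/authors.bind trick only needs to distinguish "any reply at all" from silence, not parse the answer section.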

  • Security? (Score:2, Interesting)

    by kryps ( 321347 )
    I have my doubts regarding the security design of nsd. Quoting from the nsd-1.0.2 distribution TODO file:

    [...]

    TESTS
    - set all the buffer sizes to ridiculously small values see if it causes core dumps
    [...]


    -- kryps
    • Well, how else do you propose to test for buffer overflows? Sure, you can do your best to ensure they aren't written in the first place. You can audit your code to catch any of these things. But when it comes down to it, you should probably test for it somehow...
  • Waste of effort (Score:5, Insightful)

    by iamacat ( 583406 ) on Tuesday February 25, 2003 @02:44PM (#5380520)
    One bad thing is that NSD is only for authoritative name servers, so the effort spent developing, debugging, porting and optimizing its code will be wasted for most of us. More people using NSD would also mean that any security exploits are discovered faster and on less important systems than root name servers. And couldn't we all use a lightweight, secure, caching name server over a dialup connection?

    I would rather see them pick some alternative general purpose DNS implementation and optimize it for their needs.

      One bad thing is that NSD is only for authoritative name servers, so the effort spent developing, debugging, porting and optimizing its code will be wasted for most of us.


      Ooh, is that flamebait I smell?

      It's long since been agreed upon that combining authoritative and recursive name services is a terrible idea. With that in mind, is there any reason for your authoritative DNS server to contain recursive code? Do recursive servers (for most of us?) benefit at all from the authoritative code?

      "Most of us" in the sense that you use it may never realize incredible benefits from software targeted at network operators, but just because the effort spent developing this software will not directly benefit you doesn't mean it was wasted.

      to use over a dialup connection?


      Oh, I see. It was flamebait, after all.

      Mark
  • IIRC, the large-scale attack on the root servers was a simple ping flood. Changing the software that the root servers run will not mitigate the same or a similar type of attack.

    I think the internet is going to have to move to a tiered approach of trusted and untrusted networks. Unfettered access was okay when there were only a few systems connected, but with millions of users, trust is something you must earn. If a network or ISP allows a user to spoof the packets they send, then that network or ISP should be labeled untrustworthy and their QOS should reflect that. Legitimate users will eventually migrate to trusted nets and ISPs.

    The internet should be a privilege, a la driving. If you can't be trusted with that privilege, then your access should be revoked or at least severely limited.
