The Internet

Verisign Plans DNS Changes 161

NetWizard writes "According to a recent NANOG post and an InfoWorld story, 'Verisign will change the serial number format and "minimum" value in the .com and .net zones' SOA records on or shortly after 9 February 2004'. They seem to have learned their lesson; from the post: 'There should be no end-user impact resulting from these changes (though it's conceivable that some people have processes that rely on the semantics of the .com/.net serial number.) But because these zones are widely used and closely watched, we want to let the Internet community know about the changes in advance.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Stop Changing DNS (Score:3, Insightful)

    by Blackknight ( 25168 ) on Saturday January 10, 2004 @09:20AM (#7937094) Homepage
    God damn it ICANN, you need to take away Verisign's authority over DNS. Every time they change something it's a major pain in the ass for anybody that works in an ISP, web hosting, etc.

    STOP FUCKING CHANGING THINGS!
    • I don't see anything wrong with this particular change. Even if they'd not announced it, it wouldn't have broken anything that should have been using it.
    • by Anonymous Coward on Saturday January 10, 2004 @09:32AM (#7937125)
      How the hell will this be a pain in the ass? Any software that relies on .com's serial number remaining static is broken and needs to be fixed. Complain to the software developers, as Verisign is not at fault this time.
      • by CarrionBird ( 589738 ) on Saturday January 10, 2004 @10:01AM (#7937185) Journal
        Maybe, but everything is working now, and there's no reason to change it other than breaking these "broken" programs.
        • by jrumney ( 197329 ) on Saturday January 10, 2004 @10:57AM (#7937329) Homepage
          Reading between the lines, it looks to me like Verisign want to start providing real-time DNS updates, in which case there is a reason for the change. Currently they update the database twice a day, which is well within the limits of the current serial number scheme. But with real-time updates, they could easily get to 100 updates in a day.
          • If that's the case, then it's a good thing. What gets me is when people change things solely to break others' nonstandard code.
          • by Blkdeath ( 530393 )

            Reading between the lines, it looks to me like Verisign want to start providing real-time DNS updates, in which case there is a reason for the change. Currently they update the database twice a day, which is well within the limits of the current serial number scheme. But with real-time updates, they could easily get to 100 updates in a day.

            I've always had a problem with change for the sake of change. The current system allows them, in their semantic "the SOA value must represent the date" methodology alr

      • Re:Stop Changing DNS (Score:5, Informative)

        by TubeSteak ( 669689 ) on Saturday January 10, 2004 @10:05AM (#7937195) Journal
        Yes, but software engineers have a knack for taking shortcuts where you least expect them. Kinda like MS and their broken implementations of standards. Even if you do code your HTML etc. properly, that doesn't guarantee it'll come out right. So the point being: just because you weren't supposed to doesn't mean you didn't.

        The above isn't meant as an excuse, just an explanation as to why this will undoubtedly break someone's something. Then you get back to the old 'change is good' but not if it causes trouble, then 'change is bad'[tm]. At some point we're going to have to make big changes to the infrastructure and things will break regardless of compatibility. We might as well get used to it (though as always, having a decent explanation wouldn't be a bad thing[tm])

    • by Anonymous Coward
      Change is good. You don't even want to imagine how the internet would look today if things were still run the way they were 10 years ago. The users are changing, so the net will have to follow.
    • Re:Stop Changing DNS (Score:2, Informative)

      by Anonymous Coward
      Part of an older meaning for hacker was someone who fixes things that aren't broken. Verisign has hackers working on this one. We don't use YYYYMMDDHHSS for serials, we use an increasing serial maintained by a script that does not contain an overloaded date meaning. If you want the serial to be the number of seconds since the beginning of an epoch, then change the RFC through normal means, not by some corporate edict. Hackers they are, in the old sense.
    • Get a grip man (Score:3, Insightful)

      by rs79 ( 71822 )
      The time/datestamp should have always been this way; more to the point do you know of any other TLD that at least attempts to be this communicative? They don't do this because ICANN, or anybody, makes them.

      How bout .NAME ("oops, we were rooted") or .PRO ("Hi ICANN, I know we said we wouldn't sell SLD names but we're dying here, and we ask a second time: can we sell SLD names pleeeeeeeease?") or .biz ("home of more spam since 2000! Yeah baby!!") or any of the ccTLDs that have (cough) lame servers.

      Bitch at NSI
  • by netsharc ( 195805 ) on Saturday January 10, 2004 @09:22AM (#7937099)
    But because these zones are widely used and closely watched, we want to let the Internet community know about the changes in advance.

    The last sentence sounds like they want to emphasize that they're announcing this so early so that no one panics when all of a sudden something changes. I guess it's good that they're trying to rebuild trust.
  • And then they go and cite an example where there WOULD be an end user impact.

    Although unlikely, there is a potential for collateral damage here. Is there anyone at Verisign willing to post the logic behind making the changes in the first place? I can't see where there would be a business case when someone would jump up and say "We could make a billion dollars, but only if we change the way we determine DNS serial numbers for the .COM and .ORG domain. I guess we're screwed, guys!" Then the brave tech raises his hand and says "You know, with my Dell laptop and wireless LAN, I can change the way the serial number is incremented from anywhere."

    I've been watching too many Dell commercials lately...
    • by resiak ( 583703 ) <willNO@SPAMwillthompson.co.uk> on Saturday January 10, 2004 @09:34AM (#7937134)
      I'm not someone at Verisign, but I am willing to suggest possible logic in this change.

      The previous format, YYYYMMDDNN (where NN is an arbitrary sequence number), conforms to no standard but its own. The UNIX timestamp format is recognised by any date/time manipulation tool worth using, as well as being a standard (de facto or otherwise, I don't know). While switching format now is a PITA for those who have already written tools that work with it, it will make future development fractionally easier, as well as allowing more accuracy than could practically be used.

      Then again, they could just leave things alone.
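As a sketch of the two schemes being compared here (the helper names are hypothetical; times are UTC):

```python
import time

def date_serial(t, seq):
    """Old-style serial: YYYYMMDDNN — a date plus a two-digit
    sequence number, so at most 100 updates per day."""
    return int(time.strftime("%Y%m%d", time.gmtime(t))) * 100 + seq

def epoch_serial(t):
    """New-style serial: plain seconds since 1970-01-01 UTC."""
    return int(t)

t = 1073760813  # a moment on 10 January 2004 (UTC)
print(date_serial(t, 1))  # 2004011001
print(epoch_serial(t))    # 1073760813
```

The epoch form allows one update per second, far more than the NN suffix can express, which fits the real-time-updates theory.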
      • by mec ( 14700 ) <mec@shout.net> on Saturday January 10, 2004 @10:39AM (#7937265) Journal
        I got your international standard right here [cam.ac.uk].

        YYYY-MM-DD and YYYYMMDD are both standards-compliant.

        Seriously, if you've never heard of this standard, read up. Whenever I need to stick a date or a time on something in text form, I just do it the ISO 8601 way.
        • Where in ISO-8601 does the NN fit in? It doesn't.
          • That's called a counter. You know -- integers starting from zero.
            I don't know if there's an international standard for counters,
            but there's certainly a /de facto/ standard for them.

            Lesson 2, "concatenation", to follow in my next post.

            YAW.
        • And if you want resolution smaller than a day? The NN tacked on to the end is kind of kludgy.

          The real question is why is Verisign prepping to increase the update cycle, and is this a good thing?

        • I'm all for ISO 8601, but it does not apply in this case. The serial number is not a textual representation of a date, it is a 32-bit unsigned integer in a DNS record that must be increased whenever the record is updated. A "YYYYMMDD" format, aside from resulting in a basically useless integer, would only change once per day. The UNIX timestamp format really does make the most sense here.
          • It'd better bl$$dy well not be a 32bit integer otherwise DNS is screwed in 2038...

            Luckily I know it isn't. Unfortunately I suspect the verisign way will break stuff unless they're careful eg.

            Today is:

            2004011001 in DNS time
            1073760813 in Unix time

            DNS time > Unix time... a lot of DNS systems (BIND does this, for example) will take the record with the largest number - there's scope for masses of confusion here.
            • DNS time > Unix time... a lot of DNS systems (bind does this for example) will take the record with the largest number

              But surely this applies only to the secondaries that transfer via AXFR? Most people deny general AXFRs and add explicit IPs of those who can, so they should know EXACTLY who needs to refresh the zones manually.

              A couple of years ago I switched two standard DNS clusters onto a third unix epoch based DNS with no problems.
            • It'd better bl$$dy well not be a 32bit integer otherwise DNS is screwed in 2038...

              The DNS spec specifically states that the value is to be compared using MOD 2**32 arithmetic. Besides, the serial number is only supposed to be used for DNS slaves to sync from the master, so it doesn't really matter.

        • I'd love to use that, though in the US people can't seem to grasp the concept...as easy as it actually is.
          • Metrics? For representing date information? Are you out of your fucking mind?

            Yes, we Americans have "our own" date and time system that isn't metric. (Same with the rest of the world, by the way, except for those idiots at Swatch [computeruser.com] who just want to sell more cheesy plastic watches.) So I guess we're a bunch of assholes who are too damn stupid to figure out metrics, right??

            Am I the only one here on Slashdot who's fed up with the knee-jerk America bashing???

              1. Metrics? For representing date information? Are you out of your fucking mind?
              Oh, sorry, I meant "Like the metric system".

              ...erm...HEY! That's the title of my post! Would you look at that!

              ...and, HEY! I'm an American of the U-S-A variety too! Wow! Amazing!

                • Oh, sorry, I meant "Like the metric system".

                If you take "metric system" to mean a system of measurement that's derived from base-10, then a system is either metric or not. It can't be "like metric". It either is or it isn't...

                ...and, HEY! I'm an American of the U-S-A variety too! Wow! Amazing!

                First, so what? There are plenty of Americans who engage in America-bashing. And second, you said with respect to metrics "in the US people can't seem to grasp the concept", in effect saying, "those dumb American

          • Here's the thing that I love:

            Both the U.S. and ISO standards, when truncated to just month and day, are identical. 12/11 or 12-11 is December 11th, whether you use the American or the international standard.

            So any confusion of what day of the year a two-separated-numbers date means is entirely the fault of European stubbornness in refusing to adopt the international standard.
        • by Anonymous Coward

          That standard is completely irrelevant. It specifies how to represent an unambiguous timestamp.

          DNS serial numbers are opaque tokens. There's nothing in the DNS specs. that requires them to be timestamps. All they have to do is increment by an arbitrary amount when the relevant records are updated.

          Quite frankly, I'm amazed anybody has bothered writing tools that pretend they are anything but opaque. It's like assuming certain values for an etag HTTP header or something.

        • Too bad the serial number is a 32-bit unsigned integer, not a string. For heaven's sake, this YYYYMMDDNN thing only makes sense if you look at that value in decimal representation.

          Anyway, the serial number is just a revision number intended for the DNS "system" (I'm being a little vague here) to know when a SOA record has changed. There are no end-user serviceable parts inside. No human but the people directly handling the configuration of that record needs to know about it - including how it is formed, if

      • I was under the impression that the *only* thing that the serial number stood for was a numeric sequence that the nameservers checked against to see if it had an older version of the record.

        I know of several people who use straight numeric serial numbers (i.e. '1', '2', '3') and haven't had any issues since they still increment it when they make changes on the master and the slaves all see the serial # is different and update.
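That plain-counter style looks like this in a BIND-style zone file (a hypothetical example.com zone; the timer values are illustrative):

```text
example.com.  IN  SOA  ns1.example.com. hostmaster.example.com. (
        3          ; serial - a plain counter, bumped on every edit
        86400      ; refresh
        7200       ; retry
        2419200    ; expire
        3600 )     ; minimum / negative-caching TTL
```

The slaves only ever compare the serial against the one they last saw, so "3" works exactly as well as a date-encoded value.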

    • Is there anyone at Verisign willing to post the logic behind making the changes in the fist place?

      RTFA...

      The .com and .net zones will still be generated twice per day, but this serial number format change is in preparation for potentially more frequent updates to these zones.

    • To be honest, this makes reasonable sense to me. I can see the case for Verisign wanting to make new registrations available immediately, rather than at the next 12-hourly update.

      Eventually, the zone data could be updated every time the contents of .COM or .ORG changed, with no real impact on the end user (because of DNS caching). The zone data could even be generated dynamically, directly from a database, with the serial set to the last time the database was updated. I know, historically, this isn't the w
      • The zone data could even be generated dynamically, directly from a database, with the serial set to the last time the database was updated.

        Check out Power DNS [powerdns.com]. Basically it's an authoritative-only nameserver that gets its results directly from a database (MySQL, Postgres, Oracle). Wanna update info for a zone, it's as simple as issuing an SQL UPDATE statement and voilà, your changes are live.
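A minimal sketch of that database-backed idea, using SQLite as a stand-in (the schema here is a simplified illustration, not PowerDNS's actual backend schema):

```python
import sqlite3

# A toy "records" table of the kind a database-backed nameserver
# would query live on every lookup.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (name TEXT, type TEXT, content TEXT)")
db.execute("INSERT INTO records VALUES ('www.example.com', 'A', '192.0.2.1')")

# Updating the zone really is just an SQL UPDATE; the next query
# served already reflects it, with no zone regeneration step.
db.execute("UPDATE records SET content = '192.0.2.2' "
           "WHERE name = 'www.example.com' AND type = 'A'")

row = db.execute("SELECT content FROM records "
                 "WHERE name = 'www.example.com'").fetchone()
print(row[0])  # 192.0.2.2
```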
    • One question I have, that wasn't answered in the FA: What format(s) are the other TLDs using for their DNS records?

      If the format is different across various TLDs, anyone programming for just the .com/.net format was foolish. (Too lazy to haul out the Cricket book and check.)

    • Why they are doing this: Verisign used to do 2 updates per day, once every 12 hours. That means if you made any change that required new info in the .com zone, you were always waiting a long time before the changes actually happened. Verisign wanted to improve this, so they have developed a new system that they're going to roll out on Feb 9. The only visible effect of this change from the outside world is that the serial number format will change. So, in order to prevent paranoids from flipping out, the
  • Serial number format (Score:5, Informative)

    by albalbo ( 33890 ) on Saturday January 10, 2004 @09:28AM (#7937115) Homepage
    No-one cares what format the serial number is in, except those who have written software that relies on the current format (in disobedience of the RFCs...)

    A serial number is just a 32-bit number, and is used to see if a domain has been updated. The specs. do not say anywhere that it should be in a specific format.
    • by Trillan ( 597339 ) on Saturday January 10, 2004 @10:02AM (#7937187) Homepage Journal

      This looks like a good change to me. I can't imagine there would be an outcry over this if Verisign hadn't previously implemented the SiteFinder dung.

    • It's even less important than that. The only time the serial number is really used is when you are doing an IXFR from a secondary or mirroring DNS server, so that it can sync up to the master server by retrieving the updated zone data. Well guess what, Verisign runs the master and all the slave servers. This only affects Verisign machines, nothing else. I'm sure it's conceivable that someone at some point in time wrote some app that uses the serial number of the com and net zones (such as a company that
  • This announcement is important in that Verisign finally seems to recognize that they are part of a larger community, that those DNS records are not just some corporate asset sitting in a couple of computers in the corner.

    Changes affect administrators around the globe. As part of a community, they have a responsibility to make their decisions transparent to the community, and to announce changes well-enough in advance that those who are affected have time to prepare.

    This is not just a Verisign issue. The need for major Internet organizations to recognize the larger public as important stakeholders within the community is important. Awareness of the larger community should be followed by communication and actions that reflect that awareness, thus signalling a willingness to truly be a part of that community.

    Verisign seems to be exhibiting a newfound awareness of community that ICANN seems to have abandoned.

    I hope Verisign continues to be a good member of the community. Perhaps others can follow their lead.
    • A man goes to a baker's, asks how much the bagels are. Baker replies "twelve for a dollar". Man replies, "at the baker's across the street, they're fourteen per dollar". Baker replies, "Yes, but they're sold out, no?" "Yes", answers the man. "Well," says the baker, "when my bagels are sold out, they're sixteen for a dollar!"

      Moral: Verisign can hardly do anything wrong with this serial number change, so it's hardly proof of goodwill. When they stop messing with other, more delicate things, they will get
  • by nighty5 ( 615965 ) on Saturday January 10, 2004 @09:33AM (#7937128)
    The internet infrastructure should be managed and run by the community, not driven by the commercial proliferation of services offered to enhance a company's offerings. This change seems dubious at best considering Verisign's previous efforts at domain sitting, which would break applications; let's ensure we keep them in their place.
    • by Anonymous Coward
      That is why the UN should have it.
    • History, I suppose.

      The internet infrastructure should be managed and run by the community, and not driven by the commercial proliferation of services offered to enhance a company's offerings.

      That was what the recent UN conference was about I suppose. But everyone wanted to dismiss that as being useless.
    • The boxes have to sit on someone's desk. "The community," disorganized and disparate as it is, is remarkably poor at doing anything. You'd have to invent some sort of hierarchy. Maybe have a General Manager of the Internet, and he could have a board of directors under him or something. They would be elected by the nation's population at large, and they'd have the final say on internet issues.

      But it'd be silly to give EVERYONE an equal vote in their elections, as the great majority of people have no clue ho
      • I can understand what you're trying to say.
        But...
        Not so long ago in the US only male land-owners could vote, because it was felt only they had a vested interest in governance. This seems like a similar thing.

        I'm not sure a Republic of the Internet would really work. You'd create even more of a chasm between the technocrat and everyone else. I think people would be resentful of exclusion.

        Also, I have always been very wary of trying to "centralize" the Internet. Yes, I know ICANN is sort of.. something..
      • Yes, but your argument is fundamentally flawed in that VeriSign is a corporation not created to monitor and improve the internet, VeriSign is, like most corporations, created to generate profit for itself and improve its value for its shareholders.

        Remember, they were the ones who wanted to "commercialize" the root DNS servers and take them "out of the hands of the academics".
    • Great! I vote for a TTL of 444ms.
    • If all sysadmins want something they can make it happen. Remember when the internet filtered out that nasty Verisign "helpful search"? Yeah, ISPs can fix the net and make big changes if they want to.

      In reality Verisign isn't in control of "too much"; if it came down to it we could all just start up our own registry database (mirrored from the existing one) and make a transparent change in how computers resolve domains (OS developers would conform to the new standard). But right now I think they're doing fine and
    • The internet infrastructure should be managed and run by the community,

      Sounds like a good idea to me. Why don't you start going around telling the "community" that they need to start buying Verisign stock... Pretty soon, the "community" will own and manage Verisign.
  • Hey... (Score:5, Insightful)

    by Neophytus ( 642863 ) * on Saturday January 10, 2004 @09:33AM (#7937131)
    2038 anyone [deepsky.com]?
  • by Rosco P. Coltrane ( 209368 ) on Saturday January 10, 2004 @09:34AM (#7937136)
    Verisign will change the serial number format and "minimum" value in the .com and .net zones

    Right, so when I fall on an unresolved address, I can't even return it under warranty because the serial number has changed, and even if they did reimburse me, they changed the value. That's just flipping great...
  • They're changing stuff? They can't even keep my DNS and contact information correct. I can't wait till this "little" change is done so they have one more thing to fcuk up.
  • Is it just me? (Score:5, Interesting)

    by armando_wall ( 714879 ) on Saturday January 10, 2004 @10:01AM (#7937184) Homepage

    From Infoworld: But the company did allow that "processes that rely on the semantics of the .com/.net serial number" could be affected.

    For example, companies that have created scripts to monitor domain change on .com and .net will almost certainly need to make changes to account for the serial number change..."The damage won't be catastrophic, but some DNS servers could stop receiving updates,"

    And they are planning to do this next Feb 9? Isn't that like too little time for organizations to update their systems?

    I don't trust Verisign... the fact that they control such an important database accessed by millions of people around the world really frightens me. They screwed it up once, they can do it again.

    They should have that power removed from them. It should go to another organization (e.g. a non-profit one) that better serves the internet community.

  • I see a problem (Score:4, Informative)

    by jcochran ( 309950 ) on Saturday January 10, 2004 @10:15AM (#7937212)
    They will be changing their serial number from about 2004020900 to something about 1075680000 which according to the DNS system will be an older serial number because the difference is only 928340900 which is much less than half the range of a 32 bit number. They can make the change that they are planning if they make two changes with at least their cache interval amount of time between the changes. See RFC-1034.
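The comparison rule being applied here is "serial number arithmetic" (RFC 1982): serial b counts as newer than a only if (b − a) mod 2**32 falls between 1 and 2**31 − 1. A quick check of the numbers above:

```python
def is_newer(b, a):
    """True if serial b is newer than serial a under the mod-2**32
    comparison that DNS slaves use (RFC 1982)."""
    d = (b - a) % 2**32
    return 0 < d < 2**31

old = 2004020900   # last date-style serial (9 Feb 2004, sequence 00)
new = 1075680000   # the parent's estimate of the new epoch-based serial

print(is_newer(new, old))  # False: the new serial looks *older*,
                           # since old - new = 928340900 < 2**31
```

So a single jump straight down to the epoch value would indeed look like a step backwards; stepping through an intermediate serial (or relying on the fact that Verisign controls all the slaves) avoids the problem.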
    • Oh, don't worry. Everything will just sort itself out on the 3rd July 2033.
    • 2004020900 > 1075680000 from inspection, no need for maths. But less need not mean older, provided the semantics of the serial change are understood by all slaves. Verisign runs the slaves too, so it shouldn't be a major problem.

      If managed properly it should go smoothly. I'll be bottling up my angst for their less sane proposals.
    • Re:I see a problem (Score:5, Informative)

      by graf0z ( 464763 ) on Saturday January 10, 2004 @11:51AM (#7937579)
      There is no problem.

      Serial numbers only affect master-slave communication (and self-written scripts violating RFCs), but all masters and slaves for .com & .net belong to VS. See Paul Vixie's reply [merit.edu] to the same question on NANOG.

      /graf0z.

  • Hmm... TTL900... (Score:2, Insightful)

    by Yaa 101 ( 664725 )
    With a TTL of 15 mins you have to generate a new zone 96 times a day to keep the zone visible during a whole day. I wonder if they want to speed up the propagation time of new domains with this?
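For what it's worth, the arithmetic behind the 96 figure:

```python
# A 900-second (15-minute) TTL divides into a day 96 times.
TTL = 900
updates_per_day = 24 * 60 * 60 // TTL
print(updates_per_day)  # 96
```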

    • speed up the propagation time of new domains with this
      And the propagation time of erroneous records as well.
    • Re:Hmm... TTL900... (Score:4, Informative)

      by KevinM ( 45416 ) on Saturday January 10, 2004 @02:16PM (#7938377)
      You clearly don't understand how DNS works. This change in no way requires a new zone 96 times a day. The TTL field is used by clients accessing the zone to understand when they need to stop caching the retrieved data. Verisign could have a TTL of 15 minutes and never change the serial number, and nothing would break.
  • Y2038 bug? (Score:2, Interesting)

    by AndroidCat ( 229562 )
    Doesn't the UNIX 'seconds since 1/1/1970' break in 2038 or so? I could be wrong. It's hard to remember all the various time/date glitch dates.
    • Re:Y2038 bug? (Score:1, Informative)

      by Anonymous Coward
      Thirty-two (32; I'm supposed to always write out numbers at the beginning of sentences according to an English style guide -- I'm trying to make Slashdot educational or something, heh) bit signed "seconds since 1/1/1970" break in 2038, yes. Sixty-four (64) bit signed "seconds since 1/1/1970" have a really really long time before they break. By 2038 we (define we to whatever you want) will have had ample time to switch to 64 bit values and/or platforms (if POSIX doesn't interfere, it can be done on native 32
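The rollover in question: a signed 32-bit seconds-since-1970 counter tops out at 2**31 − 1, which lands in January 2038.

```python
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold, and the UTC
# moment it corresponds to.
last = 2**31 - 1
print(datetime.fromtimestamp(last, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```

An unsigned 32-bit counter, by contrast, doesn't wrap until 2106, and 64-bit values are good for billions of years.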
    • Yes, it'll break a 32 bit counter which will wrap around in 2038.

      If you're still using a 32 bit computer in 2038 you may as well right now, begin walking around with your thumb and finger making the shape of an L on your forehead.

      • Re:Evolve or die (Score:3, Informative)

        by pe1chl ( 90186 )
        It does not matter how many bits your computer has, it matters if the DNS protocol is still in use by then.

        If it is, it will break because of this change. The older timestamp format had a much longer lifetime.

        Of course there will be major problems in 2038, probably much worse than in 2000. This small issue will not contribute too much.
      • We are still using 8-bit computers in many devices. Btw, a date format is independent of the CPU's data and address bus width.

        I am pretty sure we will still be widely using 32-bit computers in many devices in 2038. Many devices will have an IPv6 address, a hostname, and timestamps.
      • The DNS protocol uses 32 bits for serial number. So what you should be saying is that we should be upgrading the DNS protocol before 2038, with enough lead time so all the network operators will have time to make the switch. That means we need to expand the DNS protocol to more than 32 bits (40 bits should actually be enough) by around 2008.

      • Said the programmers back in the 70's..
        "If you're still using these computers in 2000 you mas as well right now, begin walking around with your thumb and finger making the shape of an L on your forehead."
        Guess what happened?
    • You are forgetting they run the slave servers as well.

      -dk

  • by swb ( 14022 ) on Saturday January 10, 2004 @11:01AM (#7937349)
    "Also, companies that have incorrectly formatted their DNS servers to get information directly from the DNS root servers maintained by VeriSign will stop receiving updates on Feb. 9, leaving those servers and the Internet users who rely on them out of step with the rest of the Internet, he said."
    I so seldom read even the tech press because of this kind of statement. What does it mean? AFAIK the root servers just have NS records pointing to the 2nd level domains, but querying the root servers is how you find them and this is essentially how DNS is *supposed* to work. There was no further context in the story to indicate what they're talking about.

    Are there other queryable DNS servers maintained just by verisign for .com and .net for distribution to the usual root servers? Or have I been running DNS wrong all along?
    • Or have I been running DNS wrong all along?
      Unless "you" are an ISP (and probably a major one), then yes you have. DNS is a hierarchical system, and you should be quering against the nearest level to you. Unless you are a major ISP, the root servers aren't the nearest ones - your ISP's DNS servers are. It's like totally p2p man.
      • Umm, I'm not asking the roots for recursive queries (ie, requesting an A record for www.google.com). I ask *my* DNS server for that, it gets the NS for google.com from the root servers, and then it asks ns.google.com for the A record for www.google.com, and then returns it to me. The roots just provide references to the NS records for .com and .net 2nd level domains, which, as I said previously is how its supposed to work.

        Querying my ISPs DNS would accomplish the same thing in the same way, except they m
        • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Saturday January 10, 2004 @12:25PM (#7937746) Homepage Journal
          Were you serious or joking? I hope you were joking. You were, right?

          Because if you weren't, you would be saying that if your ISP has 10,000 customers, and they all ran their own caching nameservers, and all of them decide to resolve "www.google.com", then the root nameservers wouldn't really be hit with 10,000 times as many queries as if all of your little servers were properly configured.

          There are two reasons to query the root nameservers directly:

          1. Your ISP's nameservers are broken.
          2. Testing.

          That's it. Hitting them directly for routine queries is wasteful, inconsiderate, and expensive. If you weren't joking: fix your configuration. Now.

          • Third reason (Score:2, Informative)

            by Skapare ( 16644 )

            Third reason:

            3. Your caching nameservers just flushed cache or restarted, and thus they have no idea where any of the top level domains are, and have to ask the root servers (provided in the hints file) where they are. Also, this will happen again in 2 days when those NS records, and their corresponding A records, expire from the cache.
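Those hints look like this in a BIND-style root hints file (one root server shown; A.ROOT-SERVERS.NET's long-standing address):

```text
; Bootstrap entry: where a freshly started resolver with an empty
; cache finds a root server to ask about the TLDs.
.                    3600000  IN  NS  A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.  3600000  IN  A   198.41.0.4
```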
          • So who decides who gets to query the roots for NS queries? My ISP is kind of small, only a few thousand customers -- should they be configuring THEIR name servers to forward to nameservers at their upstreams? Since their upstreams are major Tier 1 providers like UUNet, Qwest and Sprint, presumably my ISP's nameservers are the cause of untold THOUSANDS of unnecessary queries against the root nameservers that could easily be satisfied by the caches at UUNet, Qwest and Sprint.

            I don't plan on changing my config
              So who decides who gets to query the roots for NS queries? My ISP is kind of small, only a few thousand customers -- should they be configuring THEIR name servers to forward to nameservers at their upstreams?

              In a word: yes.

              Since their upstreams are major Tier 1 providers like UUNet, Qwest and Sprint, presumably my ISP's nameservers are the cause of untold THOUSANDS of unnecessary queries against the root nameservers that could easily be satisfied by the caches at UUNet, Qwest and Sprint.

              If your ISP is w

              • Can you show me a map of the DNS caching hierarchy? I didn't realize there was one.

                Can you tell me why if this is so damaging to the 'net why Verisign or the root server operators don't block NS queries to the root servers?
              • by jroysdon ( 201893 ) on Saturday January 10, 2004 @04:56PM (#7939584) Homepage
                If your ISP is well-managed, then they query their upstreams and not the root nameservers.

                That's simply not true. Customers should use their ISP's DNS server, but I don't believe ISPs should ever be forwarding queries upstream. That's just asking for problems. ISPs buy wholesale bandwidth, not services like mail forwarding or DNS forwarding (not that one couldn't do it, but it is asking for an extra level of troubleshooting and delay).

                Once a lookup to the .NET NS is cached from the root servers, it is cached the same for a Tier 1 ISP or a Tier 2, and it doesn't have to be done again. The root nameservers are able to handle the .NET, .COM, .US, etc. lookups just fine. Even the next-level .NET, .COM, .US nameservers are multi-homed and anycast globally and able to handle a huge load. There is no reason to risk problems with an upstream ISP vs. going right to the source for an NS record lookup. Once the NS info is cached for a domain like msn.com, it's the msn.com NS servers (and those of the hundreds of thousands (?) of other second-level domains) that can each handle their own load just fine.

                It's all meant to scale without having needless delay or problems introduced by forwarding queries to a DNS server you cannot control.

                Perhaps you can point to an RFC that says Tier 2/3 ISPs should forward DNS queries to upstream providers? Nope, thought not; it's not even a best practice.
          • Except if your ISP is PacBell/SBC. Their DNS servers are constantly having problems. I've always maintained my own DNS server on my ADSL account and I query it and two DNS servers I maintain at my office instead of PacBell's DNS servers.
          • Hitting them directly for routine queries is wasteful, inconsiderate, and expensive.

            But could you not also say that running your own cache at the end of a leased line is better than everyone in your network querying your ISP to resolve every request?

            I'd say it's caching and reasonable TTLs that contribute most to reducing the load on the DNS. But I've met DNS administrators who didn't have a clue about TTLs, setting them to "300" to make sure data that had not changed in years would always be "fresh".
            • But could you not also say that running your own cache at the end of a leased line is better than everyone in your network querying your ISP to resolve every request?

              Absolutely correct. In BIND, you configure your ISP's DNS in the "forwarders" option and point all of the machines on your LAN to your local server. It answers any requests it can, forwards everything else to your ISP, and then tries to resolve any requests that your ISP can't manage (if their server's down, for example). The key differenc
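
              A minimal named.conf sketch of the forwarding setup described above (192.0.2.1 and 192.0.2.2 are placeholder addresses; substitute your ISP's actual resolvers):

              ```
              options {
                  // Hypothetical ISP resolver addresses -- use your own.
                  forwarders { 192.0.2.1; 192.0.2.2; };
                  // "forward first" (the default) falls back to normal
                  // recursion if the forwarders fail to answer;
                  // "forward only" never does.
                  forward first;
              };
              ```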

  • by rabtech ( 223758 ) on Saturday January 10, 2004 @12:32PM (#7937778) Homepage
    It appears that they are gearing up to start providing far more than two updates per day. This could mean that sometime in the future you could register a new domain name and have it up and running within 15-30 minutes.

    Seems like a positive change to me.
    • by MCZapf ( 218870 ) on Saturday January 10, 2004 @03:33PM (#7938920)
      Who on earth needs a domain name working so quickly? Spammers, perhaps. Squatters. Anyone else?
      • Who on earth needs a domain name working so quickly?

        As always, you don't see people that need it because nobody can do it yet.

        Perhaps it will become common to go to a registrar, and buy a domain on the spot for a single-day event, or something similar.

        Perhaps people that are switching site ownership don't want to wait a week for anyone to get to the new site.

        Or even more likely, perhaps companies want this as some sort of load-balancing/failover mechanism... It's not instant, but 15 minutes of excess l

  • by Skapare ( 16644 ) on Saturday January 10, 2004 @12:57PM (#7937907) Homepage

    My serial number format lasts longer than Verisign's, and I still get more than 100 updates a day out of it. In fact it will last until 07:06:36 Tuesday 2 October 2096 while staying in just 9 digits (which it has been since 15:06:40 Saturday 4 September 1982). After that it goes to 10 digits, but still remains a positive signed 32 bit integer until 12:56:28 Wednesday 16 March 2242, and if unsigned 32 bit integer works everywhere else, it will go all the way to 01:53:00 Wednesday 30 May 2514.

    Instead of being the count of seconds, as Verisign plans to use, mine is 1/4 of that value. Basically, I take the system time() value and divide by 4. By treating that value as an unsigned quantity, I won't have the Y2038 bug, either. That logic will work until 06:28:15 Sunday 7 February 2106 (past the 9 digit limit). And I can do 21600 updates a day (one every 4 seconds).

    dig linuxhomepage.com. soa
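
    As a quick sketch (hypothetical helper names, not Skapare's actual code), the scheme described above is just:

    ```python
    import time

    def soa_serial(unix_time=None):
        """SOA serial as described above: the Unix time divided by 4.

        Serial 999,999,999 (the 9-digit limit) corresponds to
        Unix time 3,999,999,996 -- the 2096 date quoted above.
        """
        if unix_time is None:
            unix_time = int(time.time())
        return unix_time // 4

    def serial_to_time(serial):
        # Recover the timestamp, to 4-second resolution, from a serial.
        return serial * 4
    ```

    Called with no argument, soa_serial() uses the current clock, so the serial can change at most once every 4 seconds -- which is where the 21600 updates/day figure comes from (86400 / 4).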

    • I haven't checked your math, but it seems right. However, why are you doing that instead of, say, setting your algorithm to "serial += 1"? Then you'd be constrained to making 2^32 updates before wrapping around, not 2^34 seconds (not that yours is such a small limit :) ).
      • That could be done. But rather than have to process it like that and either store or parse the serial value to be updated, the way I do it to take the master file (not a normal zone format) I use to generate each zone, get its last modification date from the filesystem, and produce the serial number from that. This way, I can still derive the date and time, to a 4 second resolution, from the serial number, and back track it to the archived master files if there are any issues to figure out. It's basicall

        • Two words: CVS, Subversion. I used the former for configuration management in the past, and I'm now experimenting with the latter. It has all of the advantages of what you describe, except that someone else has already troubleshot (troubleshooted?) it, and it does a lot more for free. You might find either of those useful for your setup.
