Announcements | The Internet

Faster Updates for DNS Root Servers Arrive 150

Tee Emm writes "VeriSign's DNS Rapid Update notice period (as announced on the NANOG mailing list) expires today. Beginning September 9, 2004, the .com and .net zones will be updated every 5 minutes instead of twice a day. The format of the SOA serial number is also changing, from the current YYYYMMDDNN to one that encodes the UTC time of zone generation." We first mentioned this back in July, but it's finally launching now.
  • dynamic dns (Score:5, Interesting)

    by Anonymous Coward on Thursday September 09, 2004 @08:11AM (#10199261)
    So when will they add support for dynamic IP addresses à la DynDNS etc.? That would be great.
    • Re:dynamic dns (Score:5, Informative)

      by numbski ( 515011 ) * <numbski&hksilver,net> on Thursday September 09, 2004 @08:28AM (#10199336) Homepage Journal
      It's already there [wieers.com].

      The catch of course is that you have to be running bind locally to make it work. Which is fine if you're a unix-head and know how to work dns, but for the average joe, it's far from simple. I have a perl script that checks my Linksys firewall's IP every half hour, and if it's changed, updates the dns file, then runs nsupdate.
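      For the curious, here's a minimal sketch of what such a script can look like. Everything in it is a placeholder (the hostname, the key file path, and the IP-check URL are made up for illustration; a real Linksys setup would scrape the router's status page instead):

        #!/usr/bin/perl
        # Hypothetical dynamic-DNS updater: fetch the current WAN IP and
        # push a change via nsupdate only if it differs from the cached one.
        use strict;
        use warnings;

        my $host  = "home.example.net";   # placeholder hostname
        my $cache = "/var/tmp/last_ip";   # where the previous IP is remembered

        # Placeholder IP source; substitute however you learn your WAN address.
        chomp(my $ip = `curl -s http://checkip.example.com/`);
        die "no usable IP\n" unless $ip =~ /^\d+\.\d+\.\d+\.\d+$/;

        my $last = "";
        if (open my $old, '<', $cache) { $last = <$old> || ''; chomp $last; }
        exit 0 if $ip eq $last;           # nothing changed, nothing to do

        # Hand the change to nsupdate (assumes a TSIG key your server accepts).
        open my $ns, '|-', 'nsupdate -k /etc/bind/update.key' or die $!;
        print $ns "update delete $host A\n";
        print $ns "update add $host 1800 A $ip\n";
        print $ns "send\n";
        close $ns or die "nsupdate failed\n";

        open my $out, '>', $cache or die $!;
        print $out "$ip\n";

      Run something like that from cron every half hour and you get the same effect.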
      • Re:dynamic dns (Score:3, Interesting)

        by BenFranske ( 646563 )
        That solution is not really as nice as DynDNS. I for one would really like to see a piece of OSS that lets you operate using the (documented) DynDNS protocol, so that the standard update scripts widely available for that would work. Running a nameserver on a system that doesn't require one seems counterproductive. Plus, you could use existing software to keep Windows boxes up to date as well. The DynDNS update protocol is available here [dyndns.org]
      • by robertjw ( 728654 ) on Thursday September 09, 2004 @10:50AM (#10200642) Homepage
        Which is fine if you're a unix-head and know how to work dns

        I don't think anyone actually knows how to work dns. It's one of those magic things that you hack for a couple hundred hours and it finally does what you want it to - like qmail.
        • I say the same thing about Sendmail during interviews.

          "So, Mr. Smith, it says here that you know Sendmail?"

          "Well, I'm not virgin to it. I'm comfortable with Sendmail, but anything that requires a 1200 page book is a topic nobody REALLY knows."
          • I say the same thing about Sendmail during interviews.

            I use a similar process when interviewing. Our company hands prospects one of those "rate yourself from 1 to 10 on these 20 topics" forms. Anyone who rates themselves a 10 on ANYTHING gets a mental 7, because clearly they don't even know what they don't know. Somebody with the brains to rate themselves a 9 is very likely an expert, though some quizzing is needed to make sure; a cross-check on the number of 8-10's can usually tell you; nobo

        • I don't think anyone actually knows how to work dns.

          I do.

          The root servers serve up the root zone, which is pointers to tld servers. I don't really think you want them doing dynamic dyn-dns. You were thinking of the com/net servers perhaps?

          • by rs79 ( 71822 )
            I responded to the title of this thread, which is incorrect, instead of the article. com and net servers are TLD servers, not ROOT servers.

            I still claim to understand DNS even though at times I simply cannot read.
      • To be fair, you don't need to be running BIND locally. You can also be using Windows 2000 for a DHCP and DNS server and get local dynamic DNS updates. It helps to use Active Directory as well. While for most people this isn't going to end up being all that much easier than rolling it up with Linux, it IS easier, and it IS a possibility. Of course, paying for 2k Server is kind of a stumbling block for most people, even those who have a second machine upon which they could be running BIND. And of course, you
        • That, or on your windows workstation load cygwin and bind.

          The part that sucks is that last time I tried installing cygwin, it was incredibly difficult, and this is coming from someone who manages many FreeBSD text-only servers and uses many different flavors of Linux. Perhaps cygwin has improved?
          • Cygwin is stunningly easy to install now. The only caveat is remembering to shut off all cygwin services and shells when updating cygwin, or you'll have DLL problems. However, this doesn't solve the problem that BIND is the hard part. Hell, IMO setting up a whole base linux system is easier than setting up BIND and ISC DHCP, even if that system is running gentoo, and especially if you're trying to get ddns working. Mind you, I have all that stuff working on my gentoo system, but I had to piece it together f
            • DHCP isn't really that hard, but the documentation that comes with the release is not very good for mere mortals. I've had a lot better luck going online and finding an example config file and just editing that to do what I need. Especially if you aren't trying to be really fancy with your DHCP, this works great.
              • Really it's more about BIND than DHCP. DHCP was easy to set up, as you say, by hacking someone else's config file. BIND, on the other hand, is hard to grok that way. I've been using it for a long while so I don't have too much trouble, but I set it up so infrequently that I still have to RTFM every time.
      • I don't know about the Linksys, but all of the DrayTek DSL routers I've used have an inbuilt dyndns client (they also have clients for no-ip and all the other popular DNS providers). It grabs your dynamic IP address as soon as a connection is made, and pings it up to your DynDNS account. Works like a charm.

        I wouldn't be surprised if this is included in pretty much every router these days.
    • Re:dynamic dns (Score:5, Informative)

      by two-tail ( 803696 ) on Thursday September 09, 2004 @08:44AM (#10199410)
      Services provided by the likes of DynDNS are not affected by this. The changes mentioned in this article affect top-level servers, which maintain lists of registered domains and their name servers. The actual IP address is provided at the next level down. For example, here is the complete path that you would go through to get an IP address for www.slashdot.org:

      1: a.root-servers.net (refers request to tld2.ultradns.net)
      2: tld2.ultradns.net (refers request to ns1.osdn.com)
      3: ns1.osdn.com (returns 66.35.250.150)

      Adding and deleting domains causes changes at #1 and #2. Changing the name servers assigned to a domain also happens at #1 and #2. Changes to an IP address (like the IP address for www.slashdot.org), which is what DynDNS and the like covers, would take place at #3.

      One last note: If you have a domain already in place, and you want to change its nameservers over to DynDNS (possibly to take advantage of their dynamic update service), then #1 and #2 would get involved (since you're changing a nameserver). Under the system being phased out, that would have given you a day-long delay.
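      You can watch that chain yourself with dig. A rough sketch of the interesting part of the output (record details and TTLs will vary; this just traces the three referral levels listed above):

        $ dig +trace www.slashdot.org
        .                 IN NS a.root-servers.net.  ; level 1: root
        org.              IN NS tld2.ultradns.net.   ; level 2: TLD servers
        slashdot.org.     IN NS ns1.osdn.com.        ; level 3: the domain's servers
        www.slashdot.org. IN A  66.35.250.150        ; the answer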
      • Re:dynamic dns (Score:3, Insightful)

        Not quite - this would theoretically allow you to now also host your DNS zone on a system with a dynamic IP, as you can now get a change to the root-level NS records in short order.

        I sure wouldn't want to try that, though....
      • Re:dynamic dns (Score:3, Informative)

        by tigress ( 48157 )
        Actually, adding, deleting and changing domains causes changes at #2 and #3. The root servers are never affected unless there is a change in the TLD delegations. Changing a second-level domain requires changes in the TLD nameservers (#2) and the nameservers responsible for the SLD (#3). Changes within the domain only affect #3. Unless, of course, the change is on an authoritative nameserver, in which case #2 is also affected. This article describes how the changes in #2 will take effect faster.
      • I really don't think you would want to put much on a DynDNS network. After all, everything related to SMTP is automatically /dev/nulled by just about everyone in the world.

        And to think, since they've done that, spam and viral infections on the internet have pretty much continued apace.

        I'm a disgruntled DynDNS user who can't send any email from my servers, even though they are legit.

    • by JJahn ( 657100 )
      Might I recommend using IPCop on an old PC as a firewall/NAT device for your home network? It can automatically push your IP address to DynDNS and several other dynamic DNS services. It's also a nice firewall product, which is free (as in beer and speech).
  • by two-tail ( 803696 ) on Thursday September 09, 2004 @08:12AM (#10199268)
    I remember hearing about this, but I don't remember exactly: Is this available to all registrars, or is there something that needed to be done on their end to get their updates in quickly?
    • Looks to me like it requires conforming to the new serial number format (which, if I might say, BLOWS... I run an ISP and I appreciate being able to look at a DB file and know the last time I changed it simply by looking at the serial... ugh); otherwise it will just sort of 'happen', so long as your DNS server is authoritative for a domain and your root-hints file is correct.

      Anyone have further input?

      • AFAIK the serial number has only ever been in the YYYYMMDDNN format as a recommendation. There is nothing in the spec preventing you from numbering versions from 1.

        Changing to a UTC timestamp in seconds is no big issue, but for consistency's sake it's nice if everyone does the same thing, or at least knows what everyone else is doing, especially if you have some software trying to make sense of it all.
      • $ perl -e 'print scalar localtime 1076370400'

        While it may suck b/c you might have to change some workflow stuff at your ISP, it shouldn't be too difficult to write a script that produces a readable log of DNS changes.
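        For instance, something along these lines would rebuild a readable history (a sketch; the awk field assumes dig's usual one-line +short SOA output, and the serial shown is just an example):

          $ dig +short SOA com. | awk '{print $3}'
          1094738400
          $ perl -e 'print scalar localtime 1094738400'
          Thu Sep  9 09:00:00 2004

        Log that alongside each zone change and you get your human-readable timestamps back.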

    • this is a change to the com and net nameservers. It has nothing to do with the domain name registration process, other than that such registrations (or changes to existing domains) will make it into the com and net nameservers faster. Assuming that your registrar doesn't dawdle, that is...
  • as I understand it, this would allow propagation of new domains to be completed faster. this is *theoretically* a good thing, but it means that applications cannot cache DNS as effectively for nonexistent domains. this may end up causing a *lot* heavier load on the root DNS servers. much as we'd all love that functionality (who doesn't want to see their new domain a few minutes after they buy it?), there was a reason why they designed it the way they did.
    • It's not a very good thing. Compliant DNS implementations, at least, will be doing 144x as much traffic with them as before (assuming infinite load; of course, in practice they will have a bit less).

      I don't see the point myself; domains are not supposed to change every minute anyway.
      • by LiquidCoooled ( 634315 ) on Thursday September 09, 2004 @08:34AM (#10199365) Homepage Journal
        If I remember rightly, the new system does not change the TTL; it is still down to the domain administrator to pre-plan domain moves.

        On the day before you move, your TTL can be dropped low (say, to five minutes) so your address can be changed with minimal disruption. After the move, once you're stable, your TTL can be increased again, and network congestion is minimized.

        Of course, I could be talking out of my arse, one of you lot will put me right if this is the case.
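        Lowering the TTL ahead of a move is indeed standard practice. A sketch in ordinary zone-file terms (names and addresses are placeholders):

          ; day before the move: drop the TTL so caches expire quickly
          www  300    IN A 192.0.2.10   ; five-minute TTL during the transition
          ; after the move, once stable, raise it back up
          www  86400  IN A 192.0.2.20   ; back to one day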
      • by Entrope ( 68843 ) on Thursday September 09, 2004 @08:42AM (#10199401) Homepage
        Your claim of "144x as much traffic" exhibits an ignorance of how DNS caching works -- not that I should be surprised by the ignorance of anything I read on Slashdot. Specifically, caching is controllable independently of zone revision. It is easy to instruct clients to cache negative replies for a longer time than that revision of the zone is current. The only way to increase the frequency of lame requests is to reduce the TTL or SOA MINIMUM values.

        On top of that, maximum-frequency error responses are only a problem when you have enough headstrong or automated users to see requests for the SAME misspelled domain name just past the SOA MINIMUM (or TTL, if appropriate) time. It is not a problem for valid name requests, since they have separate TTLs. While that frequency of lame requests is indeed a valid assumption with infinite load, in practice, only the largest ISPs will see anything that approximates that traffic.

        Your comment that domains are not supposed to change every minute is correct for some domains; but the particular domains in question (TLDs) do change every minute as new domains are registered or expire. (Other things, like DHCP-driven dynamic DNS, can also legitimately cause frequent zone updates.)
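        To make the negative-caching point concrete, here is a sketch of what a resolver sees for a bogus name (output abridged; the serial is an example, and the final 900 is the new .com SOA MINIMUM):

          $ dig nosuchname-example-12345.com
          ;; status: NXDOMAIN
          ;; AUTHORITY SECTION:
          com. 900 IN SOA a.gtld-servers.net. nstld.verisign-grs.com. 1094738400 1800 900 604800 900

        It's that 900 seconds, not the five-minute zone regeneration interval, that bounds how often the same misspelled name gets re-asked.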
    • Why can they not cache it the same as always? You do a lookup on a domain at time X; you can keep it cached until X plus however long you wish.
    • by ewithrow ( 409712 ) on Thursday September 09, 2004 @08:19AM (#10199300) Homepage
      DNS was designed in the late '70s, with RFCs appearing in the early '80s. The computational power available today is vastly greater than what the routers of the '80s could contend with. I'm sure they would not implement this change if they had not thoroughly weighed the costs against the benefits.

      Oh wait, VeriSign? We're all doomed.
    • yes, because 20 years ago computers were slow pieces of shit.
    • by LostCluster ( 625375 ) * on Thursday September 09, 2004 @08:24AM (#10199321)
      This will be a Good Thing(TM) if the DNS root servers can handle the load. Of course, if they can't it'll have to go in the Bad Idea(TM) file.

      The key thing comes down to if we can trust VeriSign to be doing their homework correctly. VeriSign's a very funny company to think about because their entire product line is based on encryption and ID services that define VeriSign as a root of trust... if you don't trust VeriSign to be an honest actor, practically everything they do becomes worthless.

      It's so hard to get trust-based systems to work these days...
      • Geeze. Why is everyone talking about the "root servers?" This isn't . (root zone), this is com. and net.! The two are not the same thing!
        • you're correct. this is .com and .net

          as a side note, for everyone that doesn't pay attention to dns and has been spouting random crap (eg, not you): .org has been doing the exact same thing since UltraDNS bought the rights to host it. in short, no, it won't cause many problems

          also, verisign pointed out the TTL will stay at 24 (or 48?) hours, so this really only affects NEW domains, unless you set your zone ttl much lower in the first place (your zone has a higher trust than verisign's, so the ttl from t
    • by Mordac the Preventer ( 36096 ) on Thursday September 09, 2004 @08:27AM (#10199333) Homepage
      This is *theoretically* a good thing, but it means that applications cannot cache DNS as effectively for nonexistent domains. this may end up causing a *lot* heavier load on the root DNS servers.
      No, it's the TTL that determines how long a record can be cached for. Updating the zone more frequently just means that the information will be available sooner. It will not increase the load on the root nameservers.
    • Nobody said the applications have to update every five minutes. They can still update infrequently, for the same quality of service (and cost) as before. Or am I missing something?
    • this may end up causing a *lot* heavier load on the root DNS servers.

      Maybe the guys at bittorrent should start a rogue P2P DNS serving system. If it worked well enough, it would become a de facto standard.
    • by SirCyn ( 694031 ) on Thursday September 09, 2004 @09:26AM (#10199704) Journal
      Let me clarify a few misconceptions.

      1. The "minimum time" set to 15 minutes means the servers will not check for an update on a record until it is at least 15 minutes old.

      2. The 5 minute transfers. This is how often the root servers check with each other. This has nothing to do with any other server. Not the registrars, not your ISP's DNS server; only the root servers.

      3a. The serial change from yyyymmddnn to Unix epoch time makes perfect sense. And no, it does not suffer the 32-bit problem. Serial numbers can be much more than 32 bits. Heck, the yyyymmddnn takes 8 bits per character now, so 80 bits just for that. Dare I guess how far into the future an 80-bit Unix time would go (if it was stored that way)?

      3b. If this serial change screws up your DNS Cache server simply flush the cache, problem solved. If you have some application (as suggested in the memo) that relies on the serial you need to update your software, now.

      4. Whoever suggested this as a backup plan for having only one server run your whole operation: You are dumb. Now go away or I shall taunt you a second time.

      5. The TTL for a standard DNS entry is not going to change. So if your ISP's DNS server caches an entry it will (probably) keep it the same amount of time as it did before. (I say probably because most DNS servers can update records before their TTL expires).

      Would the people who do not know how DNS works please stop posting misinformation and speculation. Thank you!
      • by Kishar ( 83244 ) on Thursday September 09, 2004 @10:18AM (#10200178)
        3a. The serial change from yyyymmddnn to Unix epoch time makes perfect sense. And no, it does not suffer the 32-bit problem. Serial numbers can be much more than 32 bits. Heck, the yyyymmddnn takes 8 bits per character now, so 80 bits just for that. Dare I guess how far into the future an 80-bit Unix time would go (if it was stored that way)?


        You're correct on all counts except this one.

        From RFC1035:

        SERIAL The unsigned 32 bit version number of the original copy of the zone. Zone transfers preserve this value. This value wraps and should be compared using sequence space arithmetic.


        The YYYYMMDDnn way overflows a 32-bit serial in the year 4294; the UTC way wraps in 2106, or 2038 if anything treats it as signed. (Neither actually breaks, because the serial number wraps to 0.)
      • 3a. The serial change from yyyymmddnn to Unix epoch time makes perfect sense. And no, it does not suffer the 32-bit problem. Serial numbers can be much more than 32 bits. Heck, the yyyymmddnn takes 8 bits per character now, so 80 bits just for that. Dare I guess how far into the future an 80-bit Unix time would go (if it was stored that way)?

        Doesn't Unix time wrap around some time in 2038? I think the kernel stores time since the Epoch at least in milliseconds, if not nanoseconds...

  • Fantastic. (Score:4, Funny)

    by John_Allen_Mohammed ( 811050 ) on Thursday September 09, 2004 @08:15AM (#10199283)
    This will probably help speed things up on the ogg-streams-over-dns p2p radio stations. Some complain that DNS wasn't designed for these purposes but generally, the same people complaining are the ones raising kids now, using viagra and getting ready to wear diapers again.

    Technology adapts to changing circumstances and trends, old folks do not.
  • Why? (Score:1, Insightful)

    by tuxter ( 809927 )
    Is there any real need for this? Realistically it is going to have very little impact on the average user.
    • Re:Why? (Score:4, Informative)

      by mr_z_beeblebrox ( 591077 ) on Thursday September 09, 2004 @08:54AM (#10199461) Journal
      Is there any real need for this? Realistically it is going to have very little impact on the average user.

      This will affect DNS customers, not consumers. DNS is a purchased service (not a product): businesses are its customers, users are its consumers. Verisign wants to make a positive impact on its customers to turn more revenue.
    • It certainly will make my life easier. It'll save me a lot of hassle waiting for a new domain to come up so my clients can be happy. I register a lot of domain names, and overall people like to see their domain as soon as they've registered it. Registering and waiting is annoying when you're trying to jump right in. It's especially important in cases where an existing website needs to change, or add, a domain name. It might seem a minor issue but I see it as a frequent annoyance.

      For the average
  • by Anonymous Coward on Thursday September 09, 2004 @08:16AM (#10199290)
    Slashdot has announced they will begin posting stories every twenty seconds, instead of every hour.

    Says CowBoy Neil, "Well, we figured at the increased rate, we could dupe stories at twice the usual rate. And also... uh... we could use my name in twice as many polls."

    Reached for comment in his mother's basement, Commander Taco said only, "DNS, smenesh, I think we all want to see GNNA update their trolls!"
  • Root Servers... (Score:5, Interesting)

    by jmcmunn ( 307798 ) on Thursday September 09, 2004 @08:16AM (#10199291)

    So I don't exactly get it: is it just the root servers that are going to be updating every five minutes? I read the links, but it still doesn't seem clear to me. I mean, if my registrar (or dns service or whatever) still only sends in its updates once a day, this won't really help me as much, right?

    Of course, once they do send it in I will still get it updated an average of 6 hours faster I guess. Just curious, since the details were a little vague to us non-dns folks.
    • Yes, but most registrars update live.
    • Re:Root Servers... (Score:5, Informative)

      by jabley ( 100482 ) on Thursday September 09, 2004 @10:22AM (#10200228) Homepage

      This has nothing to do with the root servers [root-servers.org]. The slashdot article is inaccurate.

      Verisign are publishing delegations in the DNS from their registry for the COM and NET domains much more frequently than they were before. The TTL on records in the COM and NET zones is not changed.

      The affected nameservers are a.gtld-servers.net through m.gtld-servers.net. These are not root servers. They are authority servers for the COM and NET zones.

      Verisign also runs two root servers (a.root-servers.net and j.root-servers.net). There has been no announced change in the way A and J are being run.
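      The distinction is easy to verify from a shell (output abridged):

        $ dig +short NS com.
        a.gtld-servers.net.
        b.gtld-servers.net.
        ...
        $ dig +short NS .
        a.root-servers.net.
        b.root-servers.net.
        ...

      Two different sets of servers; only the first set is getting the five-minute updates.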

  • Speed up attacks? (Score:3, Interesting)

    by two-tail ( 803696 ) on Thursday September 09, 2004 @08:16AM (#10199292)
    Would this make it easier to slip false transfers through whatever nets may exist to catch them (as in this news byte [theregister.co.uk])? I guess false transfers such as this would be noticed by the public at large sooner, so that's not too bad.
    • True, but I don't see how the DNS system's delay-created waiting period protected much against fraudulent transfers of domains. After all, you wouldn't know a false transfer took place until your DNS server got the bad news too...
  • Emergency use (Score:1, Insightful)

    by pubjames ( 468013 )

    This is a great use for emergencies. You can have a backup web server configured identically to the main one. If the first web server goes down, just update the IP address in the domain record and you're back online in five minutes.

    Good for those of us who host web sites for clients.
    • Re:Emergency use (Score:2, Informative)

      by Anonymous Coward
      you can already do this; the root servers basically just know the address of a nameserver designated for a domain.

      this just helps if you want to switch nameservers within 5 mins

      on top of that, if you have a standby box, bring it online with the old ip
    • Re:Emergency use (Score:5, Informative)

      by autocracy ( 192714 ) <slashdot2007@sto ... .com minus berry> on Thursday September 09, 2004 @08:37AM (#10199385) Homepage
      You've got it the wrong way about. Your DNS records in the [.com .net .org .whatever] domain only point to your NS records. You should have multiple name servers up anyway (peering agreements for DNS are usually pretty easy to get). It is your A records that point to the web server, and updating those takes place on your own servers.

    • Not only that, but you can have them with completely different hosts, even in different countries.

      I've seen big businesses who have lost their web sites for days because of the hurricane...
    • Re:Emergency use (Score:3, Informative)

      by Eggplant62 ( 120514 )
      I think you mean that this would be more handy for sites that lose a DNS server. Note that if the machine in an NS record for a domain goes dead, the domain can be left unresolvable until the root servers update. Now with five-minute updates, change the NS records and yer back up and running.

      Happened to me with my vanity domain when afraid.org was cut off for about 8 hours due to abuse issues. His upstream provider cut him off due to spammers hosting DNS there and he had to take
    • Re:Emergency use (Score:5, Informative)

      by LostCluster ( 625375 ) * on Thursday September 09, 2004 @08:38AM (#10199390)
      What's the point in that?

      The record in a DNS root server is never meant to identify your web server; it's meant to identify your primary and secondary DNS servers, and it's those servers that work for you (or at least the ISP you work with) to identify your web server.

      So, if you want failover when your main web server goes down, you just need to update your local DNS record, not the one at the root servers. It's when your DNS servers explode that the five-minute updates would be helpful.
    • Re:Emergency use (Score:5, Informative)

      by ostiguy ( 63618 ) on Thursday September 09, 2004 @08:47AM (#10199423)
      This isn't that. You are talking about regular DNS A record changes on your dns server. You could have done what you sought a year ago, or 10. This is about what DNS servers are responsible for your domain, among other domain level changes (responsibility, etc) - if Chicago burns to the ground, Schlotsky's House of Bacon, having lost their headquarters with its server room, could then outsource its DNS, enter records, and make a root change to indicate that schlotskyshouseofbacon.com's dns servers have changed within 5 minutes (ideally).

      ostiguy
    • No, the way to do that is to have a DNS server with a small TTL (time to live) so you can switch IPs. Some cheap DNS Services [dnsmadeeasy.com] allow you to set the TTL, or you can run your own.
    • "We will be bringing most of the web down for maintenance starting in about 5 minutes."
  • Cool.... (Score:5, Insightful)

    by Eggplant62 ( 120514 ) on Thursday September 09, 2004 @08:33AM (#10199364)
    Now spammers can rotate through domains faster than ever before!!
    • This has no effect (Score:5, Insightful)

      by warrax_666 ( 144623 ) on Thursday September 09, 2004 @08:54AM (#10199459)
      on how many domains a spammer can register over time -- for much the same reason that you can still have huge bandwidth even if your latency is crap. It's just a question of reducing the initial delay from registration to activation.
      • This has no effect on how many domains a spammer can register over time -- for much the same reason that you can still have huge bandwidth even if your latency is crap. It's just a question of reducing the initial delay from registration to activation.

        No, but it certainly allows them to rotate nameservers for their domains quickly. Imagine where they've got a number of nameservers for their domains set up, and in order to make it more difficult to determine where the nameservers are hosted, they bou

  • What effect will this have on DNS hijacking and similar hacking methods which utilize DNS? Will it be easier as things get more 'rapid'?
    • If it does, I would imagine that it would also make it easier to change *back* rapidly. You'd likely also notice sooner; the servers would change within 5 minutes instead of half a day later. Good luck getting the bureaucracy to recognize your complaint, however...
  • by bruceg ( 14365 ) on Thursday September 09, 2004 @08:39AM (#10199394) Homepage
    Upcoming change to SOA values in .com and .net zones

    * From: Matt Larson
    * Date: Wed Jan 07 17:49:43 2004

    VeriSign Naming and Directory Services will change the serial number
    format and "minimum" value in the .com and .net zones' SOA records on
    or shortly after 9 February 2004.

    The current serial number format is YYYYMMDDNN. (The zones are
    generated twice per day, so NN is usually either 00 or 01.) The new
    format will be the UTC time at the moment of zone generation encoded
    as the number of seconds since the UNIX epoch. (00:00:00 GMT, 1
    January 1970.) For example, a zone published on 9 February 2004 might
    have serial number "1076370400". The .com and .net zones will still
    be generated twice per day, but this serial number format change is in
    preparation for potentially more frequent updates to these zones.

    This Perl invocation converts a new-format serial number into a
    meaningful date:

    $ perl -e 'print scalar localtime 1076370400'

    At the same time, we will also change the "minimum" value in the .com
    and .net SOA records from its current value of 86400 seconds (one day)
    to 900 seconds (15 minutes). This change brings this value in line
    with the widely implemented negative caching semantics defined in
    Section 4 of RFC 2308.

    There should be no end-user impact resulting from these changes
    (though it's conceivable that some people have processes that rely on
    the semantics of the .com/.net serial number.) But because these
    zones are widely used and closely watched, we want to let the Internet
    community know about the changes in advance.

    Matt
    --
    Matt Larson
    VeriSign Naming and Directory Services
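    Put together, the .com SOA after the change would read something like this (the serial is an example, and the refresh/retry/expire values are a sketch, not taken from the announcement):

      com. IN SOA a.gtld-servers.net. nstld.verisign-grs.com. (
               1094738400  ; serial: seconds since the UNIX epoch at generation
               1800        ; refresh
               900         ; retry
               604800      ; expire
               900 )       ; minimum: was 86400 (one day), now 15 minutes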
  • Fifteen minutes? (Score:5, Insightful)

    by semaj ( 172655 ) on Thursday September 09, 2004 @08:44AM (#10199412) Journal
    From the linked NANOG posting:
    "At the same time, we will also change the "minimum" value in the .com and .net SOA records from its current value of 86400 seconds (one day) to 900 seconds (15 minutes). This change brings this value in line with the widely implemented negative caching semantics defined in Section 4 of RFC 2308."
    Doesn't that mean they're updating every fifteen minutes, not every five?
    • Re:Fifteen minutes? (Score:3, Informative)

      by frozen_crow ( 71848 )
      no, it does not. it just means that if a resolver receives a "no such name" response from one of the com or net nameservers, that "no such name" response will only be cached for 15 minutes instead of a day.
    • by bfree ( 113420 )

      It means that dns servers which act like bind4 and bind8 will set the default Time To Live (TTL) for resource records without explicit TTL to 15 minutes. Servers which behave like bind9 will use this as the negative caching value for the domain, meaning that if it requests an ip from a domain which doesn't exist it will cache the result for 15 minutes. In effect this should mean that the actual root dns servers will be updated every 5 minutes, but someone looking for the domain (by normal means as oppos

      • Re:Fifteen minutes? (Score:3, Informative)

        by bfree ( 113420 )
        Oops, it's not quite as described above! The root servers aren't being updated any quicker; it's just the .com and .net servers. It doesn't impact the above, though, as the root servers just hand out the ip addresses of the authoritative servers for the top level domains, so for a non-existent domain name the root servers will behave just the same as for an existing domain name in the same tld.
  • by Compact Dick ( 518888 ) on Thursday September 09, 2004 @08:46AM (#10199418) Homepage
    It's about time the switch was made -- here's why ISO 8601 is the way to go [demon.co.uk].
  • Root servers? (Score:5, Informative)

    by bartjan ( 197895 ) <bartjan.vrielink@net> on Thursday September 09, 2004 @08:55AM (#10199467) Homepage
    These faster updates are not for the root servers, but for the .com/.net gTLD servers.
  • 2038 fun (Score:3, Insightful)

    by martin ( 1336 ) <maxsec.gmail@com> on Thursday September 09, 2004 @08:57AM (#10199477) Journal
    Oh great, so now DNS gets potential issues with the 32-bit time-since-epoch problem.

    Brilliant move... :-(

    What was wrong with sticking extra hour/minute digits in the serial number? No Y2K-style problems at all...?!?

    i.e. YYYYMMDDHHmmNN??
    • that would make the digit string too long.

      it doesn't really matter anyway, since zone serial numbers are allowed to wrap. secondaries understand how to handle this event as well, so there's no need for admins to step in and do anything in such cases, either.
    • Re:2038 fun (Score:3, Interesting)

      by gclef ( 96311 )
      They just said they were encoding the serial number as the seconds since epoch. They never said anywhere how many *bits* they're using to measure that. In fact, since the serial number is a free-form text field, there's not really any way to overflow that. The epoch overflow shouldn't affect this.
      • I know it's bad form to reply to my own post, but I was semi-wrong, so I should fess up to it. RFC1035 states that the serial number field is 32 bits, but can wrap. The exact text is:

        SERIAL The unsigned 32 bit version number of the original copy of the zone. Zone transfers preserve this value. This value wraps and should be compared using sequence space arithmetic.

        So, there still isn't an epoch problem, but for a different reason.

      • Ok, I mean there is the potential for 32-bit issues, depending on how well the DNS servers (bind, tinyDNS etc.) handle the serial number once it's converted from a text string to a number...

        just means one more risk/piece we have to check for when the epoch time rolls over the 32nd bit...
      • Re:2038 fun (Score:5, Informative)

        by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Thursday September 09, 2004 @10:42AM (#10200515)
        I have no idea where people got the idea that the serial number is a text field. It is a simple 32 bit integer. However, it is supposed to be compared using "sequence space arithmetic". This has been defined in RFC 1982 [sunsite.dk]. Basically it means that overflows are fine, as long as no secondary nameserver keeps really old revisions around. So if you make a secondary for the .com zone now, unplug it for 40 years, and plug it in again, it may fail to get the latest zone.
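        The comparison rule from RFC 1982 is short enough to sketch in a few lines of Perl (for 32-bit serials; "newer" here means "should replace"):

          # True if serial $s1 is newer than $s2 in sequence space.
          sub serial_newer {
              my ($s1, $s2) = @_;
              my $half = 2**31;
              return ($s1 > $s2 && $s1 - $s2 < $half)
                  || ($s1 < $s2 && $s2 - $s1 > $half);
          }

        By this rule, 1 is newer than 4294967295 (the counter just wrapped), which is exactly why a secondary left unplugged for decades can get confused.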
    • Right, because YYYYMMDDHHmmNN can fit in a 32 bit integer with no problems at all.

      4294967295 (max unsigned 32-bit number)
      20040909090201 (sample of YYYYMMDDHHmmNN)
      • fair enough...that'll be the reason why they didn't use that method then!

        My point is that the 2038 issue now has the *potential* to affect DNS more than it did before..

        Of course in the next 34 years we'll be using 128 bit (or larger) numbers anyhow :-)
        • True, but even 64 bits will be able to count seconds until well after Sol has become a red giant and Earth has been incinerated.

          Besides, we all know what a big deal Y2K turned out to be after all. Y2.038K will be an even smaller problem, since a) there is no user interface for entering those numbers and b) except at the integer-size level, they are perfectly backwards compatible, i.e. sending you the string "1058382094" is a valid time whether it's 32-bit, 64-bit or 4096-bit.

          And the integer size issue needs to be dealt
  • Hell Yeah! (Score:4, Interesting)

    by CptTripps ( 196901 ) on Thursday September 09, 2004 @09:39AM (#10199829) Homepage
    This is something that should have been taken care of YEARS ago. It'll make it a LOT easier to switch people over to new servers/change IP addresses and such.

    Can't wait to go... switch some IP addresses... ::: not nearly as exciting when you type it out like that :::
  • Do they have a web site yet?
  • Death, Taxes and DNS Propagation Delay.
  • I registered a domain last week w/ godaddy.com, and was quite surprised when it was available within about 10 minutes. The domain went to the correct host from a variety of ISPs and PCs - meaning it wasn't just my ISP or my PC. Any chance this system could already be in place?
  • Wow. I changed an MX record using Verisign's (NetworkSolutions.com) website about 30 minutes ago. I received an email through the new server 10 minutes later. I received email from the sender yesterday, so the MX record should have been cached at their company, and the MX record was changed from one ISP to another. I did not expect any results until sometime tomorrow.

    ---
    I still use Verisign for my domains. It was inertia; I had my domains there, so I continued adding domains there.

    I almost switched
  • When I set up my little sister's website, the registrar was pretty quick on their side. I think I had the domain and DNS records set up in ~2 hours. And it was a weekend. But of course those entries didn't propagate up and down to my ISP for ~48 hours. So that's a total of ~50 hours to conceive a site and have it running for the world to see. So will this be any different now?
