The Internet

ICANN, National Registrars Still Feuding 175

Damalloch writes: "The BBC website has this story about the EU's concern over ICANN's refusal to make guarantees about root server stability. Domain name registrars such as Nominet are threatening to withhold payment of ICANN's fees unless something is done to reassure them. So far ICANN has remained stubborn because of the huge lawsuit potential if a root server were to go down, but with the possibility of having its income reduced, it might just be convinced to do something."
This discussion has been archived. No new comments can be posted.

ICANN, National Registrars Still Feuding

Comments Filter:
  • what.. (Score:3, Funny)

    by geekoid ( 135745 ) <dadinportland@ya ... .com minus punct> on Tuesday January 15, 2002 @01:01PM (#2843229) Homepage Journal
    ..no link to the root server? how can we /. it?

    this was a joke.
  • Well yes, but... (Score:5, Interesting)

    by johnburton ( 21870 ) <johnb@jbmail.com> on Tuesday January 15, 2002 @01:05PM (#2843273) Homepage
    But if one server went down wouldn't the requests just go to the other root servers instead? Isn't that how DNS works?

    So presumably they've got decent machines and power supplies and connections for each server, and so the chance of one going down is quite low. The chance of enough of them going down at the same time to cause disaster has to be vanishingly small. If that chance is still too big, add a few more servers.

    Unless they include the possibility of them being hacked I suppose. But then they could just use several different operating systems and name server software to hugely reduce the chances.

    I'm not sure I'm convinced that this is really the reason they won't give any guarantees, it seems like a reasonably safe thing to do to me.
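    The back-of-the-envelope version of this argument is easy to make concrete. A quick sketch (the 1% per-server downtime figure is an assumption picked purely for illustration, and it treats failures as independent, which hacks or cascades would violate):

    ```python
    # Chance that every one of the 13 root servers is down at once, assuming
    # each is independently unavailable 1% of the time (an illustrative
    # figure, not a measured one). Correlated failures -- a common exploit,
    # a cascade -- break the independence assumption.
    p_down = 0.01          # assumed per-server unavailability
    servers = 13           # number of root servers
    all_down = p_down ** servers
    print(all_down)        # on the order of 1e-26: vanishingly small
    ```

    Which is why diversity of operating systems and nameserver software matters so much: it is what keeps the failures independent.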
    • Re:Well yes, but... (Score:5, Informative)

      by RollingThunder ( 88952 ) on Tuesday January 15, 2002 @01:20PM (#2843417)
      It would (go to another root) - but if these systems are already running close to capacity, then that may be enough to cause the next server to choke, crash, and the next server will fall even faster.

      It's a scenario much like the AT&T switch fiasco, where a seldom-exercised chunk of code took out one switch. Once one switch was down, the others took more load, which, coupled with the fact that part of the problem was a live switch receiving an "I'm back!" message while under heavy load, caused more switches to go. Cascade failure all the way.

      After reading the article, I'm actually rather surprised myself. These systems must chew a ton of bandwidth, but it seems ICANN doesn't pay for it? Not to mention that all but three are in the US - isn't that going to oversaturate the cross-oceanic links?

      I think I'm definitely with the registrar organizations - ICANN should have contracts in place to require certain things, rather than a wink and a nod and a handshake.
      • DNS servers cache requests so the vast majority of requests are probably to a few common addresses and never see the root servers.

        So the bandwidth is probably not too bad.
        • There are enough secondary servers and varied domains that even with said caching, the roots are likely under heavy load.

          Don't forget that in-addr's go there too.
        • Matrix net carries some interesting statistics regarding TLD availability here [matrix.net].

          Reading through the page will give you an idea of the bandwidth matrix has at their disposal. The fact that most TLD servers are still 100+ msec ping on average would indicate, IMO, that those servers are under load.

          Cheers,
          -- RLJ

      • by MemeRot ( 80975 ) on Tuesday January 15, 2002 @01:39PM (#2843587) Homepage Journal
        A faulty version of software was released. And yes, the fault was buried waaay down in a giant case or if/elseif statement. Normally no big deal, right? Just roll back. But they had things set up so that any machine connected to another would poll it for the version of software it had. If what it connected to had a newer version, it would download that and then hand it off to all its fellows. So by the time the bad code triggered and they realized they had a problem, it had already spread virus-like across the whole network. Going back to the older version on one machine was futile because as soon as it booted up it would connect to other machines and download the flawed software.

        They had to eventually take their old version, give it a new, higher number, and then compile and release that. So that that 'feature' once again became a feature and not a bug. Many lessons to be learned.
        • I know, just didn't think it was relevant enough to go into detail. :)

          Specifically, the break statement was exercised when (iirc) over 50 incoming calls were being handled in the same second at the same time that the switch received an "I'm back!" message.

          Once one legitimately faulted out, it tripped the code in one other, which upped the load on its neighbors, and when it came back, it took three out...

          This is covered very nicely in the Hacker Crackdown, which I think was placed online in an act of surprising generosity. I still prefer the paper though. :)
        The nameservers are near capacity at the moment; however, since name servers effectively load-balance, it's rather difficult to notice. There's a fascinating paper about it here [mit.edu]. The root/gTLD name servers are in a lot worse state than most people think. It is possible that in a few years they become too overloaded and just melt down. Imagine the internet without a functional DNS :)
        • Well yes that would be pretty bad. Shame I can't read the article as it's a .ps file which I have no way of reading.
          But I can think of a number of ways to help with this.

          For example, set up a new set of root name servers. Make sure that the database is duplicated to those machines at more or less the same time as the original machines, and then persuade one of the big providers, AOL or someone, to use those as their root name servers instead. They would get better service, having dedicated machines, and it would lessen the load on the existing servers.

          Not an ideal solution, but I'm sure it would work to reduce the load for a while.
    • "I'm not sure I'm convinced that this is really the reason they won't give any guarantees, it seems like a reasonably safe thing to do to me."

      I'd have to say because they're cheap bastards, and if they were to make such a guarantee, their insurance rates would spike.
  • Run their own? (Score:2, Interesting)

    by hogsback ( 548721 )

    What are the obstacles to Nominet, say, running their own root server?

    They must already have bandwidth and physical security ... what else would they need?

    More redundancy, especially outside the US, can only be a good thing, right?
    • Re:Run their own? (Score:4, Informative)

      by jbf ( 30261 ) on Tuesday January 15, 2002 @01:08PM (#2843308)
      They'd need ISPs who run DNS servers for their clients to point to their root servers. This is somewhat nontrivial.
    • Re:Run their own? (Score:1, Interesting)

      by Anonymous Coward
      What are the obstacles to Nominet, say, running their own root server ?

      Configuring every DNS on the planet to know about it ...

      • Since DNS is a hierarchy, wouldn't it be just the DNS servers at the next level down that need modifying?

        How many of these are there?
        • Since DNS is a hierarchy, wouldn't it be just the DNS servers at the next level down that need modifying.

          No.

          Crash course in DNS: In a typical setup, your machine asks the DNS server at your ISP (this is called a "recursive resolver" and it's really not part of the DNS hierarchy) where www.foo.com is. Then it does the following (assuming all cache misses -- in real life, not all these connections would really happen most of the time): it has a list of root DNS servers stored in a config file somewhere. It picks a root DNS server and asks where the .com DNS server is. The root tells your ISP's DNS server where .com is, and then your ISP's DNS server asks the .com DNS server where foo.com is, and then when it gets the answer to that, it asks the DNS server for foo.com where www.foo.com is. Then your ISP's DNS server passes the result to you. (Glossing over a few details.)

          Queries start at the root, and work their way down the hierarchy.
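          The walk described above can be sketched as a toy resolver. The zone contents and server labels below are invented for illustration; a real resolver speaks the DNS wire protocol over UDP and caches every referral it gets:

          ```python
          # Each "server" maps a name suffix to either a referral (try the
          # next server down) or a final answer. All names are hypothetical.
          ROOT = {"com.": ("referral", "TLD")}                # root knows the TLD servers
          TLD = {"foo.com.": ("referral", "AUTH")}            # .com knows foo.com's nameserver
          AUTH = {"www.foo.com.": ("answer", "192.0.2.10")}   # foo.com's server has the record
          SERVERS = {"ROOT": ROOT, "TLD": TLD, "AUTH": AUTH}

          def suffixes(name):
              """Yield suffixes, most to least specific: www.foo.com., foo.com., com."""
              labels = name.rstrip(".").split(".")
              for i in range(len(labels)):
                  yield ".".join(labels[i:]) + "."

          def resolve(name):
              """Start at the root and follow referrals down the hierarchy."""
              server = "ROOT"
              while True:
                  zone = SERVERS[server]
                  for suffix in suffixes(name):
                      if suffix in zone:
                          kind, value = zone[suffix]
                          if kind == "answer":
                              return value
                          server = value  # referral: ask the next server down
                          break
                  else:
                      return None  # no matching delegation anywhere

          print(resolve("www.foo.com."))  # 192.0.2.10
          ```

          Note that the root server only ever answers the "where is .com?" step, which is why caching at the recursive resolver takes so much load off it.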

  • by Bonker ( 243350 ) on Tuesday January 15, 2002 @01:10PM (#2843325)
    First and foremost because it's a U.S. entity that pretends to be an international entity, and the Internet quit being a U.S. entity a long time ago.

    I suspect that China will be the first to set up its own root DNS servers and start issuing non-ICANN-approved domain names, probably in competition with ICANN and VeriSign. Others will soon follow. Soon every big ISP, both in the U.S. and abroad, will see the need to have its own root DNS server. Of course there will be some cooperation required between the different DNS roots if their customers are going to be happy. Hopefully, this new cooperation will end the monopoly ICANN has over the administration of the Internet, leaving unsportsmanlike players like VeriSign standing out in left field, wondering why nobody is tossing them the ball anymore.
    • Actually, I bet it ends up falling under the WTO treaties, and the WTO will appoint a board to control the root DNS system. Don't be surprised when VeriSign ends up with members on that board.
    • Yeah, after the last story about DNS [slashdot.org], I started considering switching to one of the alternate DNS systems. Hmmm...I wonder if uber.geek is taken?

      The claim that they're worried about lawsuits seems silly to me (at least to some degree). They can at least try to increase their redundancy, security and stability--then just put some blurb in their agreements that they don't guarantee stability (just like many modern corporations do). The article said they weren't even paying the companies/organizations that ran the root level servers! Isn't this a big part of what they are supposed to be doing? What kind of crap is that?

      I have to wonder if this is just some ploy so that the players can stuff their wallets with ICANN money...

    • Firstly and foremost because it's a U.S. entity who pretends to be an international entity and the Internet quit being a U.S. entity a long time ago.

      The catch for ICANN is it needs legitimacy to enforce policy both in the U.S. and especially abroad, but it can't gain global recognition and respect without enforcing policy and taking responsibility.

      From the article:
      Nigel Roberts, head of the Channel Island domain registry and a member of Icann's country code committee, said the row was leading people to question just what Icann was for. "The issue is not the amount of money," he said. "It is about the role that Icann has."

      I think ICANN could help resolve this by giving guarantees to take an active role in the use and abuse of the root servers. They could more closely track and monitor root server usage, and make recommendations and requests to providers on where to put root servers to improve DNS efficiency and reliability. ICANN could also publish data on whose root servers are performing well and whose are not, shaming poor providers who create bottlenecks into better service. If ICANN performs those duties well and is responsive to concerns like those in the EU, it will become a more effective body.

      Regards,
      Reid

    • Of course there will be some cooperation required between the different DNS roots if their customers are going to be happy.

      Didn't AOL try locking their userbase into the AOL chatroom sandbox? It worked for a while.

    • Charge for a subscription to a root DNS server. One can make money off both ends: charge the domain name holder for the reservation on your server, AND charge the end user a yearly or a per use fee for DNS resolution. The latter requires some form of micropayment, but it's probably quite workable.

      The benefit to the end user is that one could subscribe to a completely Disne-fied root that would have only family-friendly sites, whereas another server would have all those wacky pr0n sites you could ask for. Somebody would probably even have a free root server out there based on his/her special interest groups.

      Heck, you could even charge for translating addresses to other systems. No need to worry about foreign DNS servers - if they don't pay up, they don't get access to your root.

      Some people would still get around the whole thing by just typing in the octet directly, but that would be such a small percentage that it wouldn't even matter.
    • If China sets up its own root servers, I'll be the first to have my mail server do a lookup to see if the sender's root server exists in China, to block access.

      That will give me about 95% more bandwidth.
    • I suspect that China will be the first to set up its own root DNS servers and start issuing non-ICANN-approved domain names,

      First? It is already several years too late for China to somehow be first. Alternate roots have been around for a long time. I use one, you can too. [unrated.net]

    • Several people already host their own root servers with non-ICANN-approved TLDs. The problem is getting ISPs to point to those servers to see the domains.

      Of course, China prolly has an advantage that it can force all ISPs to point to its servers...
  • by mrroot ( 543673 ) on Tuesday January 15, 2002 @01:13PM (#2843352)
    Almost every time anyone looks for a webpage these root servers are consulted.

    Surely this cannot be true... Don't DNS servers cache address resolutions?
    • Unless you enter a computer's address in hex, in which case it goes straight to the system.
      There are also people who maintain their own DNS systems, albeit smaller and personalized.
      But in general that is true.
      • Hex? Are you referring to the MAC address of a machine? That's only really valid on the local LAN. To route to a machine on the opposite "side" of the internet you need its IP number (think dotted-decimal), and DNS is the service used to obtain the IP number from readable names like "slashdot.org".
        • An IP number is a 32-bit binary number. It can be represented in the familiar dotted-decimal format. It can be represented as a hex number. It can even be represented as a huge decimal number. Take your pick. If you convert the dotted decimal to a regular decimal number (and that isn't just taking out the dots and stringing the numbers together), and type that into your browser, you will get to the same destination as the dotted-decimal number. I don't know if the browsers will behave the same if you enter it in hex, though.
        • for an example, see this page on decimal URLs [x42.com]
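        • The conversion described above is just base-256 positional arithmetic. A small sketch (the address used is a.root-servers.net's well-known IP; whether a given browser accepts the plain-decimal form is another matter):

          ```python
          def ip_to_decimal(dotted):
              """Collapse a dotted-quad IPv4 address into the single
              32-bit integer it represents (base-256 positional notation)."""
              a, b, c, d = (int(octet) for octet in dotted.split("."))
              return (a << 24) | (b << 16) | (c << 8) | d

          # 198.41.0.4 (a.root-servers.net) collapses to 3324575748, so
          # http://3324575748/ names the same host as the dotted form.
          print(ip_to_decimal("198.41.0.4"))       # 3324575748
          print(hex(ip_to_decimal("198.41.0.4")))  # 0xc6290004
          ```

          So it really isn't "just taking out the dots": each octet is shifted into its own byte position first.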
    • You are correct, unless the client's DNS server doesn't have the answer, in which case it looks to the root servers - unless the DNS server is set to forward queries, in which case it forwards them to another DNS server, which might or might not have the answer... "Almost every time anyone looks for a webpage these root servers are consulted" is hyperbole.
    • Yes, DNS servers do significant amounts of caching. When I worked at a small ISP a few years ago, they cached DNS lookups for a week. I believe that almost everyone caches for 48 hours or less nowadays.

      Plus, I'm sure that at least 10% of normal web browsing comes straight from the user's cache on their hard drive, so the internet isn't accessed at all.
    • Looking at my DNS config files, it looks like each domain can set its own TTL (Time To Live) duration for its current settings before it needs refreshing. The default setting is 3 hours, which is what I presume everyone normally leaves it at.

      Phillip.
      • At the few places I've worked, the policy's always been that TTL = expected worst-case response time from the networking group plus a fudge factor.

        So, if DNS goes down at 10:00pm on a Friday, people (who have the addresses cached) can still get to the machines until the hung-over networking crew logs in to check things out the next morning.

        They'd bump the TTL way down, on the other hand, when major machine moves were planned.
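        A cache entry's lifetime can be sketched as follows (a minimal model, assuming the resolver simply serves the cached address until the TTL elapses; the 10800-second figure is the 3-hour default mentioned above):

        ```python
        import time

        class CacheEntry:
            """A cached DNS answer, valid until its TTL runs out."""
            def __init__(self, address, ttl_seconds):
                self.address = address
                self.expires = time.time() + ttl_seconds

            def fresh(self):
                return time.time() < self.expires

        # With a 3-hour TTL, a renumbered host can stay wrongly cached for
        # up to 3 hours -- hence dropping the TTL before a planned machine
        # move, and keeping it high when you want resilience to DNS outages.
        entry = CacheEntry("192.0.2.10", ttl_seconds=10800)
        print(entry.fresh())  # True right after caching
        ```

        The trade-off above is exactly the one being described: a high TTL rides out a DNS outage, a low TTL lets a planned move propagate quickly.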
  • the EU's concern over ICANN's refusal to make guarantees about root server stability.

    It sounds as if all that's required is a standard Service Level Agreement. The kind of thing that's standard through most big corporates, and has a clause along the lines of "we guarantee 99.5% uptime, if service drops below this we pay £x.xx per quarter percent below.".

    It seems that it's the refusal to provide something like this, rather than technical worries, that is underlying this dispute.

    Cheers,
    Ian

  • by dschuetz ( 10924 ) <david&dasnet,org> on Tuesday January 15, 2002 @01:15PM (#2843373)
    From the article:
    However, many of the servers are looked after on an ad hoc basis by very different companies. Icann does not pay the wages of the people that oversee the servers, nor has it signed contracts with the organisations that look after the root servers to establish service levels, standards of reliability or security.

    If ICANN can't legally hold accountable the people running the root servers, then there's no way they'd provide any guarantees to anyone. That much makes sense.

    Furthermore, the root servers (again, from the article, don't flame me if I'm missing a nuance or two) don't really DO much. They just tell you where to go to get info for each of the top-level domains. Not exactly a whole lot to running one of these other than keeping it from crashing.

    My question, though, is why is anyone worried about a root server crashing? There are 13 of 'em. Wouldn't your DNS server ask someone else if the "preferred" root server suddenly went Tango Uniform? Are there backup root servers out there to jump in? Ways to route around the damage, as it were?

    What I still find amazing is that ICANN hasn't managed to take full physical and financial control of all the root servers. When I was in school, I remember thinking it was cool that we had one of the root servers (terp) in my building. It was amazing to see how a loose group of unrelated institutions had somehow set up a reliable, workable, DNS system.

    In fact, it sounds like this is still the case, somewhat. Do these root server operators have ANY contractual controls on what they do? If not, then why the hell can't we just get THEM to add new top level domains? Screw ICANN. The servers don't belong to them, they belong to the people running 'em. As long as the guys running the roots don't point .com to some other universe, they should be able to avoid getting sued into oblivion, right?

    And, if they were to do this, could ICANN even stop them? They'd have to repoint all the root.hints files across the entire globe, wouldn't they?

    Or is this the kind of Chaos that the EU is afraid of?
    • Furthermore, the root servers (again, from the article, don't flame me if I'm missing a nuance or two) don't really DO much. They just tell you where to go to get info for each of the top-level domains. Not exactly a whole lot to running one of these other than keeping it from crashing.

      What a root server does isn't very hard. What is hard is keeping the damn thing running. They get a high load (every DNS server in the world hits them at least once a day for each TLD), they get all sorts of script kiddies hitting them, and because of their profile, it's very hard to make changes.

    • If a root server goes down, there are lots of redundant alternatives. However, the possibility and damage of domain name hijacking is much more serious... This is especially true since ICANN does not even operate the root servers! What's stopping one of the companies that operate root servers from suddenly deciding to take over the .uk top level domain? There is probably no law or contract stopping them from doing so.

      • You're probably right. NSI hijacks people's domains all the time and doesn't get in trouble. Makes me wonder if there actually is any law that prevents it. If they hijacked a TLD, I'm sure everyone would make a law against it real quick though.

  • What's new? (Score:2, Interesting)

    by zeiche ( 81782 )
    Looks like another example of a company that does not want to guarantee services they have accepted payment for. Nothing new here.
  • The real issue here is that many thousands of companies have based their businesses on the assumption that DNS will always be available and reliable. The original intent of the DNS system was to provide a convenient service to Internet users, not to serve as a point of failure for the entire net.

    Why should ICANN promise to deliver something that they know they are unable to?

    What we really need is to start over with a new specification for domain names that reflects the needs of the current Internet - a system that can provide the security and reliability that we now depend on.
    • Great idea... but, it has taken the entire community years of fighting to agree on things like IPv6. How long until that gets implemented? Can you even imagine how long it would take to: a) come up with a new spec and b) implement it?

      Don't think in human years.... try thinking in geological time. You know... eons, epochs, and eras.
  • It sounds like they are using this as an excuse not to pay. I doubt they really care, but it is a convenient excuse to use, since they know ICANN can't come up with a solution and implement it rapidly due to politics.

    It's all about money, pure and simple.

    • Yes, it is about money.

      If your company was administering a ccTLD and ICANN comes knocking at your door for money when they can't make any assurances of your ccTLD being served to the rest of the world, why should you pay them?

      To make an analogy, ICANN is to the Internet like the UN is to an international government; they are both generally ineffective but continuously demanding an ever increasing sum of money to be able to join the party.

      The simple fact is that ICANN can't... (make any assurances) because they ultimately can't step in and take over the root servers. Otherwise, they'll find themselves in a bigger controversy. Mind you, ICANN is no stranger to controversy.

  • Looks like ICANN just want the money without offering a guarantee of service.

    Any reason why the top level domain registrars can't take over ICANN's role of handling root level DNS requests?
  • ICANN are likely to do something if Nominet stop their payments. Remove the .uk domain.
  • Maybe we should be wondering where all ICANN's money goes? According to their budget [icann.org], the law firm Jones, Day, Reavis, and Pogue [jonesday.com] gets about $734,000.00 !!!

    ICANN should be less worried about the ccTLDs and focus on their own organization! The total personnel costs for ICANN are projected at $2.217 million! I would like to know what EXACTLY the staff members do to deserve this kind of money. ICANN is the biggest bunch of hypocrites to come along since the US Congress!

  • Doesn't the root server RFC say that there must always be three times the current estimated capacity (that is, if 2/3 of the root servers went down, the internet would still serve DNS)?
    The "F" root server, located at the Internet Software Consortium offices in Redwood City, California, is fitted with eight gigabytes of RAM and handles over 272 million domain queries per day.

    I challenge us to slashdot it!
    • It has to store a database containing 200 entries, each of which has a name (3 bytes) and a value (8 bytes)

      So, that gives us about 2.2K of data, plus presumably a little program to interpret it and send the results back.

      So what do they do with the other 8 gigabytes?
      • That's ~3150 queries per second. I imagine a good chunk of that 8 gigs is RAM used to create sockets and threads that do the lookups - I also suspect that it's a heavy SMP machine, each processor with its own RAM. If there were, say, 32 processors, each with 256 megs of RAM, and each processor ran (X) threads to handle requests...
        • That's ~3150 queries per second. I imagine a good chunk of that 8 gigs is RAM used to create sockets and threads that do the lookups - I also suspect that it's a heavy SMP machine, each processor with its own RAM. If there were, say, 32 processors, each with 256 megs of RAM, and each processor ran (X) threads to handle requests...

          Err no, none of the memory is used for sockets and none for threads.

          DNS is a UDP protocol and there is no good reason to talk TCP to a root name server so those requests would be firewalled off to a different node.

          As a UDP protocol DNS is stateless and there is not a good reason to use threads. Ungranted requests can be cached in the network interface drivers. At least that is the way the servers running BIND function. I have not read the Nominet code but I doubt it is different.

          I don't know why Paul would have so much RAM on his box. The dotcom zone is many gigabytes but the root zone only has 200 records.
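          For what it's worth, the ~3150 queries-per-second figure quoted upthread checks out against the article's number:

          ```python
          # The article's figure: the F root server handles over 272 million
          # queries per day. Averaged over the 86,400 seconds in a day, that
          # is roughly 3,150 queries per second (peaks would be much higher).
          queries_per_day = 272_000_000
          seconds_per_day = 86_400
          qps = queries_per_day / seconds_per_day
          print(round(qps))  # 3148
          ```

          So the load is real; the dispute here is only about what the memory is actually spent on.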

  • http://www.cisco.com/public/sw-center/sw_download_guide/dnsfaq.html gives a list of root servers and their IP addresses, as well as some good information about the basics of DNS.

    http://www.isi.edu/in-notes/rfc2870.txt talks about the requirements for a root server. From this:

    1.1 The Internet Corporation for Assigned Names and Numbers (ICANN) has become responsible for the operation of the root servers. The ICANN has appointed a Root Server System Advisory Committee (RSSAC) to give technical and operational advice to the ICANN board. The ICANN and the RSSAC look to the IETF to provide engineering standards.

    As such, it looks like ICANN is the only organization that can take responsibility for the system.

    Section 2.3 estimates that 2/3rds of the servers could be taken out and functionality would be maintained.

    The Internet Software Consortium runs F on BIND 8.2.3 (Hrmmn... their latest release is 8.3.0 and they've noted that 8.2.5 has a security bug, and the 9 series *is* out and at the 9.2 series, does anyone else find it disconcerting that they run 8.2.3?) Does anyone know of a list of who takes care of these root servers?
    • I forgot to mention in there... The reason I found F running 8.2.3 disconcerting is that

      A) The group that keeps F is responsible for developing BIND.
      B) this group released 8.3.0 because 8.2.5 had a security bug. F runs 8.2.3

      Does that make more sense?
      • 8.2.5 has the bug. The only remote exploits I know of myself were introduced after 8.2.3.

        Actually, both 8.1.2 and 8.2.3 are very stable and secure in the 8 series.

        I personally run 8.1.2 on half of my servers (slaves) as I don't need the newer features of 8.2 on them.

        8.1.2 is also not affected by the holes introduced in the 8.2.2 series that existed up until, I believe, 8.2.2p5 (but don't quote me on that patch level).
        8.2.3 was basically a polished version of this.

        Any 8.2 released afterward potentially still has bugs, and they did not fix them in the 8.2 tree as 9.x was pending so close.

        I have not paid any mind or attention to the 9.x tree at all myself, and won't until it gets a tad more stable.

        Additionally, there are still 4.x versions that are extremely stable and secure and running over the internet's backbones.

        Just because the version is older doesn't mean it automatically has bugs.
        Some people either know/feel more comfortable with the 4.x zone files than they do with 8.x.
        They should not be forced to upgrade if they don't want to.
        It's the same with 8.x to 9.x.

        Most of the changes are not security or stability anyway, only new features.

        --Jon
      • BIND 9 is nowhere near ready for release on the root server. It is not unusual for production systems to use older, more stable versions of the code.

        It is very likely that the root node would run a stripped version of BIND. This is certainly done on some of the nodes.

  • by tshoppa ( 513863 ) on Tuesday January 15, 2002 @01:41PM (#2843608)
    My take on the EU's beef:
    The EU believes that because the root servers are not controlled and administered by one central authority, they are unreliable.

    This is true, to an extent. Different and widely spread organizations run the root name servers, using different OS's, hardware configurations, and network connectivity.

    And this is a Good Thing
    Concentrating and centralizing the root name servers would defeat the diversity that now exists. If one goes down, the others pick up the load. If there's a fatal hardware bug in one, it probably won't affect the servers running on different hardware. And, most of all, a single business or management failure will not disrupt root nameservice.

    Whoever in the EU (I suspect it's some ex-communist bureaucrat who loves centralized authority) thinks that things are bad now should read RFC 2870, Root Name Server Operational Requirements [isi.edu], and get a clue.

  • Yet another reason to support OpenNIC [unrated.net]

    For those who do not know what OpenNIC is, here is their description:

    The OpenNIC is a user owned and controlled Network Information Center offering a democratic, non-national, alternative to the traditional Top-Level Domain registries.

    Users of the OpenNIC DNS servers, in addition to resolving host names in the Legacy U.S. Government DNS, can resolve host names in the OpenNIC operated namespaces as well as in the namespaces with which we have peering agreements (at this time those are AlterNIC and The Pacific Root).

    Membership in the OpenNIC is open to every user of the Internet. All decisions are made either by a democratically elected administrator or through a direct ballot of the interested members and all decisions, regardless of how they are made, within OpenNIC are appealable to a vote of the general membership.
  • Jon Postel was the man. Why not vote for another pontificate?
  • What a joke... (Score:3, Interesting)

    by Rev.LoveJoy ( 136856 ) on Tuesday January 15, 2002 @01:50PM (#2843671) Homepage Journal
    So a couple years ago Jon Postel (RIP) could redirect all authoritative root server queries to his lab PC and the internet was no worse for the wear, but ICANN, with substantially more resources, redundant locations and dozens of authoritative root servers, cannot guarantee that some subset of them will always be online?

    Huh?

    What did I miss? We all have to meet requirements, whether you're a five-nines shop (god help you) or not, with respect to uptime and service availability. Why should ICANN be any different?

    Cheers,
    -- RLJ

    • A couple of years ago certain destabilizing influences were not on the net. Today, the net is littered with cracked copies of Win2k on cable modems, not to mention serving "the enterprise," whatever that is. The vulnerability demonstrated by all those crippled machines did start to destabilize routers all around the world. You did not miss all the fun, did you?

      Unless people get smart and dump M$, it's hard for anyone to guarantee any service. It's kind of like planning to meet someone on Bourbon Street for Mardi Gras; your voice will be lost in the noise. All the resources in the world won't protect you from irresponsible net usage.

      By the way, 13 is 1.08333... dozen.

  • ICANN has already specified this, in RFC-2870. [http://www.isi.edu/in-notes/rfc2870.txt]

    /quote/
    2.3 At any time, each server MUST be able to handle a load of requests for root data which is three times the measured peak of such requests on the most loaded server in then current normal conditions. This is usually expressed in requests per second. This is intended to ensure continued operation of root services should two thirds of the servers be taken out of operation, whether by intent, accident, or malice.
    /quote/


    I think that is the guarantee.
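The 3x rule quoted above is mechanical enough to sketch as arithmetic. A minimal illustration in Python; the peak query rate used below is an invented number, not a real measurement from any root server:

```python
# Sketch of the RFC 2870 section 2.3 rule: each root server must be
# provisioned for three times the measured peak query rate, so the
# surviving third can carry the full load if two thirds of the
# servers are knocked out. The peak figure below is made up.

def required_capacity_qps(measured_peak_qps):
    """Per-server capacity mandated by the 3x rule, in queries/sec."""
    return 3.0 * measured_peak_qps

def survives_two_thirds_loss(per_server_capacity_qps, measured_peak_qps):
    # With 2/3 of the servers gone, each survivor sees roughly
    # triple its normal share -- hence the 3x provisioning rule.
    return per_server_capacity_qps >= 3.0 * measured_peak_qps

peak = 4000.0  # hypothetical peak qps on the most loaded server
print(required_capacity_qps(peak))               # 12000.0
print(survives_two_thirds_loss(12000.0, peak))   # True
```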
  • by cluge ( 114877 ) on Tuesday January 15, 2002 @01:55PM (#2843723) Homepage
    Most people realize that the root servers can be taken down. There have been several articles about this very concept (see http://www.theregister.co.uk/content/archive/22851.html [theregister.co.uk] for example).


    Given the nature of how DNS works, and how the root servers are run, how can ICANN guarantee anything? (It can't.) If they do provide some sort of guarantee, then haven't they added a financial incentive for someone to DoS the root servers?


    The Europeans are asking for something that cannot (currently) be delivered, and if they get it, the chances increase that someone will DoS the servers for financial gain (i.e. your server went down, so I now don't have to pay you x dollars). If I were ICANN I wouldn't want to sign an agreement. It may be time for ICANN to change the way it does business, and the "ad hoc" way the root servers are maintained may have to change. DNS the protocol itself needs to be very carefully looked at as well.

  • The root servers should be owned by a formal co-op, owned collectively by everyone who has a domain name registered, and run by an elected board with a hired staff. This would be a "producer co-op", like Agway [agway.com], the giant co-op for farmers, rather than the more common consumer co-op. This would bring together the interests of the people who need the root servers to stay up, the domain owners, and the ownership of them.
  • by 3Suns ( 250606 )
    Are ICANN and ICAAN interchangeable now?

    ICANN do no wrong.
  • Does anyone else see a striking similarity between the ICANN and Windows logos? Also the name, "I can", after Gates was supposedly denied the ability to buy the internet just works my paranoia nerve.
  • ICAAN is unABEL to guarantee server stability.

    When asked for comment, a representative stated, "What? Am I my server's keeper?"

    (note misspelling of ICANN in the article)
  • That's one thing that has always puzzled me. root.hints contains the list of root servers, and it doesn't go through Z in the current naming convention, so why can't we have more root servers? I mean, especially with the price of hardware being what it is, it shouldn't be that hard to set up additional root servers. Couldn't the DNS howtos of the world just include a line like, "Your root.hints now includes the ICANN servers, add these additional listings for the other servers"? I partially agree that there does need to be a central authority for all this, but I do think ICANN is handling it in the best way. There is a need for some control so that two people don't try to register the same name with different authorities and create a conflict. However, I also think it should be a case of first come first served on getting the names, and the trademark game should not be a consideration.
    But I could be completely wrong, because I also think that DNS records should include rudimentary routing info that helps the rest of the world find that last hop to my network, since my ISP will not route for me. And I also think that DNS should have the ability to have a PORT record, so that when doing a DNS lookup the person looking me up can be directed to service ports within my IP. www.foo.com could live on port 8090, for instance, because cable modem companies sometimes block port 80. That way when www.foo.com gets looked up, the client not only gets the IP but also the port on the server to connect to, so users don't have to use stupid URLs like http://www.foo.com:8090; DNS takes care of passing the 8090 as part of the lookup reply.
    I am working on the RFC for this since there doesn't seem to be one.
    • Uh, your post shows that you don't know the difference between the internet and the WWW. Not everything runs on port 80. Domain names have nothing to do with ports. Your domain name points you to an IP, which identifies your machine. You then connect to a port on that machine. The port you connect to is identified by convention, such as port 80 = http. If the server is running a service on non-standard ports, it is the responsibility of the server to redirect clients to the correct port.
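A minimal sketch of the redirect-to-the-right-port idea described above, using only the Python standard library. The host name www.foo.com and target port 8090 are invented examples carried over from the thread, not real services:

```python
# Minimal sketch: a listener on the conventional port answers every
# request with a 301 pointing at the same host on the real
# (non-standard) port. www.foo.com and 8090 are invented examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET_PORT = 8090  # hypothetical port where the site actually lives

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Redirect to the same host (minus any port) on TARGET_PORT.
        host = self.headers.get("Host", "www.foo.com").split(":")[0]
        self.send_response(301)
        self.send_header("Location",
                         "http://%s:%d%s" % (host, TARGET_PORT, self.path))
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To serve for real (binding port 80 needs privileges):
# HTTPServer(("", 80), RedirectHandler).serve_forever()
```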
      • Actually I do; I was using port 80 as an example of what I was talking about....

        Perhaps a closer read next time, instead of just skimming for flamebait.

        To reiterate and expand: if a user such as one on a cable modem wants to have a WEB site, and the ISP blocks port 80, then if DNS had the ability to pass port information with the DNS reply, the user could have www.foo.com as the URL leading to the site instead of www.foo.com:8090.

        Another example: I have one IP and I want two sites, but they live on different boxes. The same rationale applies: one could live directly on 80, and with DNS carrying the port # the other could live on 8090, and they can both have simple names, www.foo.com and www.bar.com. (I am actually running into this problem now, as I have a domain already, and my girlfriend would like to have one as well.)

        I am the DNS admin for several Internet domains, and have been for 5+ years in a professional capacity. I have been on the internet in one capacity or another for 10+ years. I remember a time when the web didn't even exist.
        • I agree with Eristone, I'm afraid. If you can't figure out how to do port forwarding for subdomains and virtual hosting...

          I'm still wondering what ports have to do with DNS, and why you'd want port information attached to your DNS if you're running more than one service. Even assuming that this wasn't doable in other ways that didn't require major changes to DNS, 99% of the time services will be running on their standard ports on legit servers anyway. (BTW, your ISP blocks port 80 because running servers is against the AUP. That means it's not a legit server.)

          • OK, I CAN do all that, but I am trying to solve problems for average users...I know geeks trying to help non-geeks is generally frowned upon, but sue me I have a big heart and want to help them get solutions. See the answer above for yet another expanded description of what I am attempting to do for these people.

            BTW, I don't believe that ISPs should be able to limit what you do with the bandwidth; call my desire to help people who have their port 80 blocked civil disobedience...

            I am done with this conversation, because obviously you people have so limited a view of things that you can't open your minds enough to understand what it is that I am trying to accomplish.
            • Call me wacky, but I just don't think that DNS (a way of identifying a machine) is the proper place to be carrying ports (a way of identifying a process). For what it's worth, there's at least one company that provides everything you want to do in an easy-to-use solution for home users, without needing changes to the DNS system (yeah, watch that happen).
    • The reason why there are only 12 or 13 root servers is based on several factors.

      The most basic factor is that the DNS specification imposes an obsolete 512 byte limit on the size of UDP DNS packets. (DNS can run on TCP, but the overhead is much higher than with UDP.)

      Since reply packets often contain many resource records, and DNS names can be up to 255 bytes each, you can see that one can brew up server names that would strongly press that 512 byte limit even with two servers. Fortunately, server names are usually not all that long.

      DNS name compression comes into play to help, and that situation has improved since most root servers now support root-servers.net as the right hand part of their names.

      Internationalization of domain names under the ACE rules coming out of the IETF will work the other way - internationalized server names will tend to be longer than the a.root-servers.net form that we see today.

      Now, just because we see one NS record in a list of servers doesn't mean that there is only one computer there - or even that it is in one place. Many servers are actually clusters that are hiding behind load balancers.

      And with IP "anycast" technology (essentially a way of establishing multiple instances of the same address block by using localized more specific route announcements) we can have as many servers as we want at the same apparent address but located in widely scattered locations around the world. The .biz servers are, I believe, handled this way.

      Oh, by-the-way, don't fall into the belief that the names/addresses listed in the "hints" file are the root - those addresses merely serve as a way to find a single root server. That server, in turn, will provide the actual set of root servers. That's why the hints file is called "hints" - it's just there to get the ball rolling.
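The 512-byte arithmetic behind "12 or 13 servers" can be sketched with a toy size model. The per-record byte counts below are simplified estimates of a name-compressed priming response (one NS plus one IPv4 glue record per server), not an exact wire encoding:

```python
# Back-of-the-envelope check of the 512-byte UDP limit: estimate the
# size of a root "priming" reply where every server is named under
# root-servers.net, so later names compress to one letter plus a
# 2-byte pointer. Simplified model, not a real DNS encoder.

def priming_response_bytes(n_servers):
    """Rough size of a compressed priming reply for n root servers."""
    header, question = 12, 5      # fixed header + question for "." NS
    first_ns = 2 + 10 + 20        # pointer + fixed RR fields
                                  #   + "a.root-servers.net" spelled out
    extra_ns = 2 + 10 + 4         # later rdata: "<letter>" + pointer
    glue_a   = 2 + 10 + 4         # compressed owner + fields + IPv4 addr
    return (header + question + first_ns
            + (n_servers - 1) * extra_ns
            + n_servers * glue_a)

print(priming_response_bytes(13))         # 449 -- fits under 512
print(priming_response_bytes(17) <= 512)  # False -- too many servers
```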
      • by huskymo ( 60304 )
        There's no reason we have to use whichever ACE becomes the standard in the domain names of root name servers. We sacrificed the old domain names of the root name servers (e.g., ns.nasa.gov) to the greater good of better domain name compression years ago.

        The countervailing force is EDNS0, which will allow 4096 byte UDP-based DNS messages. And BIND 8.3.0, recently released, supports EDNS0; f.root-servers.net is already running it. Once 8.3.0 is fully deployed on the roots, I think additional root name servers are just a quick hack away:

        - System query without EDNS0: You get 13 root name servers
        - System query with EDNS0: You get more
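At the packet level, "with EDNS0" just means the client appends an OPT pseudo-record whose CLASS field advertises its UDP buffer size. A hand-rolled sketch of the two query shapes; the transaction ID and flags are arbitrary, and this is an illustration of the OPT record, not a full resolver:

```python
# Build a query for the root NS set, with and without an EDNS0 OPT
# pseudo-record. In the OPT record the CLASS field is reused to carry
# the sender's UDP payload size (4096 here) in place of the classic
# 512-byte assumption. Arbitrary transaction ID; illustration only.
import struct

def root_ns_query(edns0, bufsize=4096):
    txid, flags = 0x1234, 0x0100                 # arbitrary ID, RD bit set
    arcount = 1 if edns0 else 0
    header = struct.pack("!HHHHHH", txid, flags, 1, 0, 0, arcount)
    question = b"\x00" + struct.pack("!HH", 2, 1)  # ".", QTYPE=NS, IN
    msg = header + question
    if edns0:
        # OPT RR: root name, TYPE=41, CLASS=bufsize, TTL=0, RDLEN=0
        msg += b"\x00" + struct.pack("!HHIH", 41, bufsize, 0, 0)
    return msg

plain = root_ns_query(edns0=False)  # old-style query, 512-byte ceiling
big = root_ns_query(edns0=True)     # advertises a 4096-byte buffer
print(len(plain), len(big))         # 17 28
```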
    • And I also think that DNS should have the ability to have a PORT record so when doing a DNS lookup the person looking me up can be directed to service ports within my IP so www.foo.com can live on port 8090 for instance because cable modem companies sometimes block port 80.

      Been there, done that. It is called the SRV record and it works in the same way as the email MX record.

      Not supported in any of the browsers yet, but it is used extensively in W2K for other purposes.
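The SRV idea (RFC 2782) is easy to model: the record maps a service and protocol at a name to a target host AND port, much the way MX maps mail. A sketch using a plain lookup table rather than real DNS; the zone data (_http._tcp.foo.com pointing at port 8090) is invented to match the example in the thread:

```python
# Toy model of SRV-style lookup: the client asks for the service, and
# the "zone" hands back both host and port. Invented zone data; real
# SRV selection also weights among equal-priority records.
from collections import namedtuple

SRV = namedtuple("SRV", "priority weight port target")

zone = {
    # _service._proto.name -> list of SRV records
    "_http._tcp.foo.com": [SRV(priority=10, weight=5, port=8090,
                               target="www.foo.com")],
}

def locate(service, proto, name):
    """Return (target, port) for a service, lowest priority first."""
    records = zone.get("_%s._%s.%s" % (service, proto, name), [])
    if not records:
        return None
    best = min(records, key=lambda r: r.priority)
    return best.target, best.port

print(locate("http", "tcp", "foo.com"))  # ('www.foo.com', 8090)
```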

      • Which is sort of the point I was trying to make originally, because SRV records aren't fully supported. There needs to be agreement on making something of this sort happen that will allow all clients and servers to respect the information coming back....
  • ...that it's a bureaucracy looking for a purpose.


    I know, I know, what a troll...but sometimes I get so fed up...


    -h-

  • Wildly inaccurate (Score:2, Informative)

    by Farce Pest ( 67765 )
    The computer of someone searching for www.bbc.co.uk for the first time would consult the closest root server and would find out that Nominet handles the database of net domains ending .uk.

    The root server then would pass on the net address of Nominet to allow the searching machine to find the exact web address of the BBC website.

    This is totally inaccurate. If you are searching for www.bbc.co.uk, your computer asks the local DNS cache (listed in /etc/resolv.conf, unless you have some retard OS). That cache then asks a root server for www.bbc.co.uk (if that information has not already been cached). This produces a referral to the .uk nameservers. The process continues for co.uk and bbc.co.uk as necessary. Note that it does not ask the closest root server, because the cache has no way to know what this is. BIND uses the "fastest" server (until it overloads from all the other BIND servers using this strategy); djbdns's dnscache picks one at random.

    One way to avoid delays at the root servers is to run your own local root server, and periodically download the root zone. This [open-rsc.org] shows you how to do it using the ORSC root zone, but you can do it with the standard root as well. You can AXFR it from one of the root servers. Then you tell your local cache to use your local root as the root server.
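The referral walk described above can be modeled as a toy loop. The delegation table and server names below are invented, and the final address is a documentation-range placeholder (not the BBC's real address); the point is only to trace the chain of zones the cache consults:

```python
# Toy model of iterative resolution: start at the root and follow
# referrals one zone at a time. The "servers" are plain dicts, and
# all names/addresses on the right are invented placeholders.

DELEGATIONS = {
    ".":          {"uk.": "uk-tld-server"},
    "uk.":        {"co.uk.": "co-uk-server"},
    "co.uk.":     {"bbc.co.uk.": "nominet-delegated-ns"},
    "bbc.co.uk.": {"www.bbc.co.uk.": "203.0.113.7"},  # placeholder IP
}

def resolve(name):
    """Follow referrals from the root; return (answer, zones asked)."""
    zone, path = ".", []
    while True:
        path.append(zone)
        table = DELEGATIONS[zone]
        # Find the delegation whose zone is a suffix of the query name.
        nxt = next(k for k in table if (name + ".").endswith(k))
        if nxt == name + ".":
            return table[nxt], path   # final answer, not a referral
        zone = nxt                    # referral: descend one level

answer, path = resolve("www.bbc.co.uk")
print(path)  # ['.', 'uk.', 'co.uk.', 'bbc.co.uk.']
```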

  • I wrote a document about some simple steps that could be taken to improve DNS security before ICANN's meeting last November.

    http://www.cavebear.com/rw/steps-to-protect-dns.htm [cavebear.com]

    Don't let the fact of 12 or 13 servers lull one into a sense of security - they are all fed data from the same source, and if that source is corrupted, then all the root servers will be corrupted. And that's not a hypothetical - the entire .com top level domain disappeared for a few hours in 2000. (Most people didn't notice this because of the damping provided by DNS caching, but it would have become really bad had the situation continued for a few more hours.)

    Also, because all of the root servers run a nearly common code base, they are potentially vulnerable to a common weakness.

    In addition, one need not bring down a server to take it off-line; an attacker need merely saturate the network in the vicinity of a target server so that no good traffic can get through. An even scarier notion is that of corruption of Internet routing so that packets flowing to DNS server addresses are forwarded out router interface null0.
  • by new500 ( 128819 ) on Tuesday January 15, 2002 @07:40PM (#2846022)

    . .

    If I read this correctly, the reason why the EU local registries don't have their own root servers, and hence control over service levels is a historical issue [isc.org].



    Excerpting from the Internet Software Consortium's page, linked above - and please allow me to state that such a reference is anecdotal rather than given fact,

    We then discussed potential candidates and found no volunteers in the AsiaPacific region, none in Africa and only one in Europe.


    The "one in Europe" btw was NOT Nominet or another registrar, it was a guy working for LINX, the London INternet eXchange.

    There's good reason for this: as late as the early 1990s, Europe was still thinking that X.500 was the way forward, and a large amount of resources from universities, telcos and local standards agencies was devoted to "interoperability" testing of X.500 directory services. What really happened was that the standards lagged the implementations so badly that vendors and implementors went ahead and did their own thing, creating, as anyone who has dealt with X.500 knows, a nightmare for inter-vendor interoperability. That created the space in which the Internet and DNS / BIND could flourish. FWIW, LDAP is a (not precisely, so please don't flame me, too large a subject for absolute accuracy here) derivative of X.400, itself a cut down form of X.500. Novell's eDirectory, which runs some of the largest sites (CNN.com, AOL messenger services), is itself a souped up LDAP implementation.


    You can find a brief overview of X.500 and what the "authorities" in Europe were up to as late as 1990 and beyond in this history of X.500 [salford.ac.uk]


    I'm British born myself, but this all seems to me to be Euro-Whining. Particularly the UK's Nominet making an issue of this is absolutely BS. Nominet has, IMO, very sharp practices. If you "buy" a domain in the UK (domain.co.uk) via an ISP, Nominet maintains a "tag" linking your domain to the "providing" ISP, until another ISP takes it over. Domains _never_ go back into circulation when they expire. Nominet refuses, on the whole, unless you threaten or cajole them with considerable effort, to "release" your domain, because it states it will not get involved in contractual disputes between you and your ISP. Most UK ISPs make contracts which lock you in to your services and charge a considerable and hefty severance fee, usually buried in the small print. You _can_ get a "Neutral Tag" applied to a UK domain, if you pay GBP £80 for two years, which fee goes back to the ISPs who are members of Nominet, which is a for-profit company, limited by guarantee, a rare form of UK company which offers very lax statutory reporting. Even though you _can_ do all this, I've had several clients now who've complained to Nominet, e.g. when their ISP is TU and no longer provides service, and Nominet tells them anyway that they can only deal with an ISP who is a member of Nominet. Obviously that's BS. But you can't register a domain in the UK for .co.uk and run your own DNS and maintain it under your own authority without a *lot* of expensive hassle, and possibly an attorney. You could hire me, of course, but this kind of work sucks, so I wouldn't offer it generally.


    Sorry for that rant against Nominet, but it's Crocodile Tears time again and minus several million points for the Brits, as per usual.

    Please follow the links above, investigate yourself . . .

    • Nominet has, IMO, very sharp practices. If you "buy" a domain in the UK (domain.co.uk) via an ISP, Nominet maintains a "tag" linking your domain to the "providing" ISP, until another ISP takes it over.

      Oh, for heaven's sake!

      Anyone can be a Nominet tag holder [www.nic.uk]. I'm a tag holder myself. You don't have to be an ISP. You don't have to run your own DNS. If you want complete control over your domain, just register your own tag.

  • Some issues (Score:4, Insightful)

    by Zeinfeld ( 263942 ) on Tuesday January 15, 2002 @10:27PM (#2846584) Homepage
    The point that folk seem to miss is that the root name server IP addresses are hard coded into the infrastructure. To change the root servers you have to either wait for everyone to redeploy BIND or get an address reassigned somehow. There is a hard limit of 13 servers that is set by the length of an ethernet packet, the size of the records and the need to guarantee that the packets don't fragment.

    Reassigning a root server address is hard because the operator likely has other machines in the address block whose numbers would also have to change.

    The EU concern is not irrational; it is pretty weird that the root zone is essentially a volunteer effort, given that the costs are not negligible and the responsibility immense.

    Against this however there is a major political issue at stake. The root operators are in effect the arbiters of the DNS. If ICANN gets too big for its boots they are a check on it.

    The other issue is that there are very few companies that could credibly manage the root zone on a contractual basis. It is one thing to run a server on a volunteer basis, quite another to provide a service guarantee.

    One thing in the pipe may well change some of these concerns: anycast addressing, which allows multiple servers to sit on the same IP address. The packets are routed to the 'nearest' machine. That will allow the deployment of additional root servers. It will also address some of the denial of service concerns.
