ICANN, National Registrars Still Feuding 175
Damalloch writes: "The BBC website has this story about the EU's concern over ICANN's refusal to make guarantees about root server stability. Domain name registrars such as Nominet are threatening to withhold payment of ICANN's fees unless something is done to reassure them. So far ICANN has remained stubborn because of the huge lawsuit potential if a root server were to go down, but with the possibility of having their income reduced, they might just be convinced to do something."
what.. (Score:3, Funny)
this was a joke.
Re:what.. (Score:3, Funny)
Re:what.. (Score:2)
Well yes, but... (Score:5, Interesting)
So presumably they've got decent machines and power supplies and connections for each server. And so the chance of one going down is quite low. The chance of enough of them going down at the same time to cause disaster has to be vanishingly small. If it's too big, add a few more servers.
Unless they include the possibility of them being hacked I suppose. But then they could just use several different operating systems and name server software to hugely reduce the chances.
I'm not sure I'm convinced that this is really the reason they won't give any guarantees, it seems like a reasonably safe thing to do to me.
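The parent's "vanishingly small" point can be sketched with some back-of-envelope arithmetic, assuming (unrealistically) independent failures and a made-up per-server downtime figure:

```python
# Back-of-envelope sketch: probability that every root server is down at
# once, assuming independent failures. The per-server downtime figure is
# invented for illustration, not a measured number.
def p_all_down(n_servers: int, per_server_downtime: float) -> float:
    """Chance that all n independent servers are down at the same time."""
    return per_server_downtime ** n_servers

# 13 servers, each (hypothetically) down 1% of the time:
p = p_all_down(13, 0.01)
print(p)  # on the order of 1e-26 -- vanishingly small
```

Of course, correlated failures (a shared software bug, a coordinated attack) break the independence assumption, which is exactly the cascade scenario discussed below.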
Re:Well yes, but... (Score:5, Informative)
It's a scenario much like the AT&T switch fiasco, where a seldom exercised chunk of code took out one server. Once one server was down, the others took more load, which, coupled with the fact part of the problem was a live switch receiving a "I'm back!" message while under heavy load, caused more switches to go. Cascade failure all the way.
After reading the article, I'm actually rather surprised myself. These systems must chew a ton of bandwidth, but it seems ICANN doesn't pay for it? Not to mention that all but three are in the US - isn't that going to oversaturate the cross-oceanic links?
I think I'm definitely with the registrar organizations - ICANN should be having contracts in place to require certain things, rather than a wink and a nod and a handshake.
Re:Well yes, but... (Score:2)
So the bandwidth is probably not too bad.
Re:Well yes, but... (Score:2)
Don't forget that in-addr's go there too.
Re:Well yes, but... (Score:2)
Reading through the page will give you an idea of the bandwidth they have at their disposal. The fact that most TLD servers are still 100+ msec ping on average would indicate, IMO, that those servers are under load.
Cheers,
-- RLJ
That's not quite what happened with AT&T (Score:5, Informative)
They had to eventually take their old version, give it a new, higher number, and then compile and release that. So that that 'feature' once again became a feature and not a bug. Many lessons to be learned.
Re:That's not quite what happened with AT&T (Score:2)
Specifically, the break statement was exercised when (IIRC) over 50 incoming calls were being handled in the same second at the same time that the switch received an "I'm back!" message.
Once one legitimately faulted out, it tripped the code in one other, which upped the load on its neighbors, and when it came back, it took three out...
This is covered very nicely in the Hacker Crackdown, which I think was placed online in an act of surprising generosity. I still prefer the paper though.
Re:Well yes, but... (Score:1)
Re:Well yes, but... (Score:2)
But I can think of a number of ways to help with this.
For example, set up a new set of root name servers. Make sure that the database is duplicated to those machines at more or less the same time as the original machines, and then persuade one of the big providers, AOL or someone, to use those as their root name servers instead. They would get better service, having dedicated machines, and it would lessen the load on the existing servers.
Not an ideal solution, but I'm sure it would work to reduce the load for a while.
Re:Well yes, but... (Score:1)
I'd have to say because they're cheap bastards and if they were to make such a guarantee, their insurance rates would spike.
Run their own? (Score:2, Interesting)
What are the obstacles to Nominet, say, running their own root server?
They must already have bandwidth and physical security.
More redundancy, especially outside the US, can only be a good thing, right?
Re:Run their own? (Score:4, Informative)
Re:Run their own? (Score:1, Interesting)
Configuring every DNS on the planet to know about it ...
Re:Run their own? (Score:1)
How many of these are there?
Re:Run their own? (Score:2)
No.
Crash course in DNS: In a typical setup, your machine asks the DNS server at your ISP (this is called a "recursive resolver" and it's really not part of the DNS hierarchy) where www.foo.com is. Then it does the following (assuming all cache misses -- in real life, not all these connections would really happen most of the time): it has a list of root DNS servers stored in a config file somewhere. It picks a root DNS server and asks where the .com DNS server is. The root tells your ISP's DNS server where .com is, then your ISP's DNS server asks the .com DNS server where foo.com is, and when it gets the answer to that, it asks the DNS server for foo.com where www.foo.com is. Then your ISP's DNS server passes the result to you. (Glossing over a few details.)
Queries start at the root, and work their way down the hierarchy.
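That walk down the hierarchy can be sketched with a toy, hard-coded delegation tree. All the zone data here is invented for illustration, there is no network I/O, and real resolvers also cache every step:

```python
# Toy sketch of iterative resolution: each zone maps a suffix either to a
# child zone (a referral) or, at the bottom, to an address. All data here
# is fake -- a real resolver queries actual servers and caches answers.
ZONES = {
    "":        {"com": "zone:com"},               # "" stands for the root
    "com":     {"foo.com": "zone:foo.com"},       # .com delegates foo.com
    "foo.com": {"www.foo.com": "addr:192.0.2.1"}, # foo.com has the answer
}

def resolve(name: str) -> str:
    """Follow referrals from the root down until an address is found."""
    zone = ""                                     # start at the root
    while True:
        # pick the longest known suffix of `name` in the current zone
        suffix = max((k for k in ZONES[zone] if name.endswith(k)), key=len)
        target = ZONES[zone][suffix]
        if target.startswith("addr:"):            # authoritative answer
            return target[len("addr:"):]
        zone = target[len("zone:"):]              # follow the referral

print(resolve("www.foo.com"))  # -> 192.0.2.1
```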
Re:Run their own? (Score:1)
And the DNS servers don't need to be changed at the same time
Do you understand what root means? (Score:3)
Pretty big change. There have been companies that set up new top-level extensions (impatient with ICANN, and who can blame them) and sell those addresses, but for visitors to get to those sites they need to have the DNS settings on their computers modified. And if ICANN eventually rolls out the same extension (I think there is one extension this applies to; can anyone remember? .biz maybe?) you could then have two company.biz sites, and which one the browser goes to depends on which root it's querying. Man, what a mess.
Re:Do you understand what root means? (Score:1)
What does it matter if some DNS servers think there are 13 root nameservers and some think there are 14? This isn't fragmenting anything - just that some of the servers the next level down have more choices than others.
company.biz will always be the same, because all 14 root nameservers have the same information.
Re:Do you understand what root means? (Score:2, Informative)
If you add more root name servers, then when name servers look up the list of root name servers (via something called a system query) the DNS message gets truncated, those name servers retry over TCP, and all hell breaks loose.
That said, two of the existing roots (j and l) are temporarily housed at ISI and VeriSign, which already have roots. Those two really need to be deployed to parts of the Internet that need them.
maybe ... (Score:2)
Kidding ... sorry. :-)
-- RLJ
Re:Do you understand what root means? (Score:1)
All DNS does is translate human friendly names to IP addresses. If the root servers died tomorrow, hitting slashdot's IP address would still work.
Granted, you'd have a tough time finding the IP if the roots were really down, but the failure of DNS has nothing to do with how the "networks" talk to each other.
What you're really talking about is the unified domain name space, having most of the users of the Net being able to resolve names certainly does keep the Net moving.
However, the ICANN roots (and their name space) are not the only ones in town. There are currently several different groups of alternate Network Information Centers (NICs) such as OpenNIC [unrated.net]. Using them is fairly trivial for any admin; if enough of us start using them, ICANN no longer has power.
Individual users don't need to modify their DNS setups, they should be pointing to their ISP's name servers anyway; saves both bandwidth and lookups.
ICANN is about to lose big... (Score:3, Insightful)
I suspect that China will be the first to set up its own root DNS servers and start issuing non-ICANN-approved domain names, probably in competition with ICANN and VeriSign. Others will soon follow. Soon every big ISP, both in the U.S. and abroad, will see the need to have its own root DNS server. Of course there will be some cooperation required between the different DNS roots if their customers are going to be happy. Hopefully, this new cooperation will end the monopoly ICANN has over the administration of the Internet, leaving unsportsmanlike players like VeriSign standing out in left field, wondering why nobody is tossing them the ball anymore.
Re:ICANN is about to lose big... (Score:2)
Re:ICANN is about to lose big... (Score:1)
Yeah, after the last story about DNS [slashdot.org], I started considering switching to one of the alternate DNS systems. Hmmm...I wonder if uber.geek is taken?
The claim that they're worried about lawsuits seems silly to me (at least to some degree). They can at least try to increase their redundancy, security and stability--then just put some blurb in their agreements that they don't guarantee stability (just like many modern corporations do). The article said they weren't even paying the companies/organizations that ran the root level servers! Isn't this a big part of what they are supposed to be doing? What kind of crap is that?
I have to wonder if this is just some ploy so that the players can stuff their wallets with ICANN money...
Re:ICANN is about to lose big... (Score:2, Insightful)
The catch for ICANN is it needs legitimacy to enforce policy both in the U.S. and especially abroad, but it can't gain global recognition and respect without enforcing policy and taking responsibility.
From the article:
Nigel Roberts, head of the Channel Island domain registry and a member of Icann's country code committee, said the row was leading people to question just what Icann was for. "The issue is not the amount of money," he said. "It is about the role that Icann has."
I think ICANN could help resolve this by giving guarantees to take an active role in the use and abuse of the root servers. They could more closely track and monitor root server usage, and make recommendations and requests to providers on where to put root servers to improve DNS efficiency and reliability. ICANN could also publish data on whose root servers are performing well and whose are not, shaming poor providers who create bottlenecks into better service. If ICANN performs those duties well and is responsive to concerns like those in the EU, it will become a more effective body.
Regards,
Reid
Re:ICANN is about to lose big... (Score:1)
Didn't AOL try locking their userbase into AOL chatroom-sandbox and it worked for a while?
Potential profit here ... (Score:2, Insightful)
The benefit to the end user is that one could subscribe to a completely Disney-fied root that would have only family-friendly sites, whereas another server would have all the wacky pr0n sites you could ask for. Somebody would probably even have a free root server out there based on his/her special interest groups.
Heck, you could even charge for translating addresses to other systems. No need to worry about foreign DNS servers - if they don't pay up, they don't get access to your root.
Some people would still get around the whole thing by just typing in the IP address directly, but that would be such a small percentage that it wouldn't even matter.
I can't wait! (Score:2)
That will give me about 95% more bandwidth.
Re:ICANN is about to lose big... (Score:3, Informative)
First? It is already several years too late for China to somehow be first. Alternate roots have been around for a long time. I use one, you can too. [unrated.net]
Re:ICANN is about to lose big... (Score:2)
Of course, China prolly has an advantage that it can force all ISPs to point to its servers...
Re:I doubt that. (Score:1)
China will not take over existing root servers. But if they establish their own root server for use within China, then that will actually further their goal of controlling Chinese citizens' access to information. Even if no one else in the world uses the China root server, it could still help to control the Chinese population, which is their overall goal.
Re:I doubt that. (Score:1)
That's just it... they can make it law that everyone in China has to use the Chinese root servers. Those who dissent are jailed.
No, China will not take over root servers. Some other nation might, maybe. But, definitely not China.
I disagree. While taking over root servers may not be the most successful policy, it is certainly a viable option. It's not an issue of whether or not China can setup root servers, China definitely has people who can, and would be willing to, setup and maintain root servers. The bigger issue is their motivation. Given a desire to prevent people from posting and reading dissenting opinions, the Chinese government may perceive root servers as a very viable option.
Every time anyone looks for a webpage??? (Score:3, Insightful)
Surely this cannot be true... Don't DNS servers cache address resolutions?
Re:Every time anyone looks for a webpage??? (Score:2)
There are also people who maintain their own DNS system, albeit smaller and personalized.
But in general that is true.
Re:Every time anyone looks for a webpage??? (Score:1)
Re:Every time anyone looks for a webpage??? (Score:2)
Re:Every time anyone looks for a webpage??? (Score:2)
Re:Every time anyone looks for a webpage??? (Score:1)
Re:Every time anyone looks for a webpage??? (Score:1)
Plus, I'm sure that at least 10% of normal web browsing comes straight from the user's cache on their hard drive, so the internet isn't accessed at all.
Once every 3 hours, I think (Score:2)
Phillip.
Re:Once every 3 hours, I think (Score:3, Interesting)
So, if DNS goes down at 10:00pm on a Friday, people (who have the addresses cached) can still get to the machines until the hung-over networking crew logs in to check things out the next morning.
They'd bump the TTL way down, on the other hand, when major machine moves were planned.
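The TTL behavior described above can be sketched as a tiny cache, a minimal illustration rather than how any real resolver is implemented; names, addresses, and TTL values are invented:

```python
import time

# Minimal sketch of TTL-based caching: entries expire after their
# time-to-live, so cached names keep working for a while even if the
# servers upstream go dark. All names and TTLs here are made up.
class TTLCache:
    def __init__(self):
        self._store = {}                     # name -> (address, expiry time)

    def put(self, name, address, ttl_seconds):
        self._store[name] = (address, time.time() + ttl_seconds)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None                      # never seen: must query upstream
        address, expires = entry
        if time.time() >= expires:           # TTL ran out: must re-query
            del self._store[name]
            return None
        return address

cache = TTLCache()
cache.put("www.example.com", "192.0.2.1", ttl_seconds=3 * 3600)  # 3-hour TTL
print(cache.get("www.example.com"))  # -> 192.0.2.1
```

Bumping the TTL up buys outage tolerance at the cost of slow propagation when records change, which is why admins drop it before planned moves.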
Sounds like a standard SLA is required (Score:2)
It sounds as if all that's required is a standard Service Level Agreement. The kind of thing that's standard through most big corporates, and has a clause along the lines of "we guarantee 99.5% uptime, if service drops below this we pay £x.xx per quarter percent below.".
It seems that it's the refusal to provide something like this, rather than technical worries, that is underlying this dispute.
Cheers,
Ian
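A penalty clause like the one described could be sketched as a quick calculation; the guaranteed level, the rate per quarter-percent, and the measured uptime below are all made-up example figures:

```python
# Hedged sketch of an SLA penalty clause: a flat rate owed per
# quarter-percent of uptime below the guaranteed level. The rate and the
# measured uptime figures are invented for illustration.
def sla_penalty(measured_uptime_pct, guaranteed_pct=99.5,
                rate_per_quarter_pct=100.0):
    """Penalty owed for a billing period, in the contract's currency."""
    shortfall = max(0.0, guaranteed_pct - measured_uptime_pct)
    quarter_points = shortfall / 0.25    # quarter-percents missed
    return quarter_points * rate_per_quarter_pct

print(sla_penalty(99.0))   # half a percent short -> 200.0
print(sla_penalty(99.9))   # guarantee met -> 0.0
```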
ICANN doesn't OWN the root servers (Score:3, Insightful)
If ICANN can't legally hold accountable the people running the root servers, then there's no way they'd provide any guarantees to anyone. That much makes sense.
Furthermore, the root servers (again, from the article, don't flame me if I'm missing a nuance or two) don't really DO much. They just tell you where to go to get info for each of the top-level domains. Not exactly a whole lot to running one of these other than keeping it from crashing.
My question, though, is why is anyone worried about a root server crashing? There are 13 of 'em. Wouldn't your DNS server ask someone else if the "preferred" root server suddenly went Tango Uniform? Are there backup root servers out there to jump in? Ways to route around the damage, as it were?
What I still find amazing is that ICANN hasn't managed to take full physical and financial control of all the root servers. When I was in school, I remember thinking it was cool that we had one of the root servers (terp) in my building. It was amazing to see how a loose group of unrelated institutions had somehow set up a reliable, workable, DNS system.
In fact, it sounds like this is still the case, somewhat. Do these root server operators have ANY contractual controls on what they do? If not, then why the hell can't we just get THEM to add new top level domains? Screw ICANN. The servers don't belong to them, they belong to the people running 'em. As long as the guys running the roots don't point
And, if they were to do this, could ICANN even stop them? They'd have to repoint all the root.hints files across the entire globe, wouldn't they?
Or is this the kind of Chaos that the EU is afraid of?
Re:ICANN doesn't OWN the root servers (Score:2)
What a root server does isn't very hard. What is hard is keeping the damn thing running. They carry a high load (every DNS server in the world hits one once a day for each TLD), they get all sorts of script kiddies hitting them, and because of their profile, it's very hard to make changes.
Re:ICANN doesn't OWN the root servers (Score:1)
If a root server goes down, there are lots of redundant alternatives. However, the possibility and damage of domain name hijacking is much more serious... This is especially true since ICANN does not even operate the root servers! What's stopping one of the companies that operate root servers from suddenly deciding to take over the .uk top-level domain? There is probably no law or contract stopping them from doing so.
Re:ICANN doesn't OWN the root servers (Score:1)
You're probably right. NSI hijacks people's domains all the time and doesn't get in trouble. Makes me wonder if there actually is any law that prevents it. If they hijacked a TLD, I'm sure everyone would make a law against it real quick though.
What's new? (Score:2, Interesting)
We need a new DNS system (Score:1)
Why should ICANN promise to deliver something that they know they are unable to?
What we really need is to start over with a new specification for domain names that reflects the needs of the current Internet - a system that can provide the security and reliability that we now depend on.
Re:We need a new DNS system (Score:1)
Don't think in human years.... try thinking in geological time. You know... eons, epochs, and eras.
I don't know about anyone else, but... (Score:1)
It's all about money, pure and simple.
Re:I don't know about anyone else, but... (Score:2, Insightful)
If your company was administering a ccTLD and ICANN comes knocking at your door for money when they can't make any assurances of your ccTLD being served to the rest of the world, why should you pay them?
To make an analogy, ICANN is to the Internet like the UN is to an international government; they are both generally ineffective but continuously demanding an ever increasing sum of money to be able to join the party.
The simple fact is that ICANN can't... (make any assurances) because they ultimately can't step in and take over the root servers. Otherwise, they'll find themselves in a bigger controversy. Mind you, ICANN is no stranger to controversy.
Money money money (Score:1)
Looks like ICANN just want the money without offering a guarantee of service.
Any reason why the top-level domain registrars can't take over ICANN's role of handling root-level DNS requests?
They probably will do something (Score:1)
Re:They probably will do something (Score:2)
How much of the ICANN budget does JDRP get? (Score:1)
ICANN should be less worried about the ccTLDs and focus on their own organization! The total personnel costs for ICANN are projected at $2.217 million! I would like to know what EXACTLY the staff members do to deserve this kind of money? ICANN is the biggest bunch of hypocrites to come along since the US Congress!
root server RFC (Score:1)
A Geek's challenge: (Score:2)
I challenge us to slashdot it!
Re:A Geek's challenge: (Score:1)
So, that gives us 4.4K of data, plus presumably a little program to interpret it and send the results back.
So what do they do with the other 8 gigabytes?
Re:A Geek's challenge: (Score:2)
Re:A Geek's challenge: (Score:2)
Err no, none of the memory is used for sockets and none for threads.
DNS is a UDP protocol and there is no good reason to talk TCP to a root name server so those requests would be firewalled off to a different node.
As a UDP protocol DNS is stateless and there is not a good reason to use threads. Ungranted requests can be cached in the network interface drivers. At least that is the way the servers running BIND function. I have not read the Nominet code but I doubt it is different.
I don't know why Paul would have so much RAM on his box. The dotcom zone is many gigabytes but the root zone only has 200 records.
Re:Dummy (Score:2)
references, bits, and pieces. (Score:2, Informative)
http://www.isi.edu/in-notes/rfc2870.txt talks about the requirements for a root server. From this:
1.1 The Internet Corporation for Assigned Names and Numbers (ICANN)has become responsible for the operation of the root servers. The ICANN has appointed a Root Server System Advisory Committee (RSSAC) to give technical and operational advice to the ICANN board. The ICANN and the RSSAC look to the IETF to provide engineering standards.
As such, it looks like ICANN is the only organization that can take responsibility of the system.
section 2.3 estimates that 2/3rds of the servers could be taken out and functionality would be maintained.
The Internet Software Consortium runs F on BIND 8.2.3 (Hrmmn... their latest release is 8.3.0 and they've noted that 8.2.5 has a security bug, and the 9 series *is* out and at the 9.2 series, does anyone else find it disconcerting that they run 8.2.3?) Does anyone know of a list of who takes care of these root servers?
Re:references, bits, and pieces. (Score:1)
A) The group that keeps F is responsible for developing BIND.
B) this group released 8.3.0 because 8.2.5 had a security bug. F runs 8.2.3
Does that make more sense?
Re:references, bits, and pieces. (Score:1)
Actually, both 8.1.2 and 8.2.3 are very stable and secure in the 8 series.
I personally run 8.1.2 on half of my servers (slaves) as I don't need the newer features of 8.2 on them.
8.1.2 is also not affected by the holes introduced in the 8.2.2 series that existed up until, I believe, 8.2.2p5 (but don't quote me on that patch level).
8.2.3 was basically a polished version of this.
Any 8.2 released after that potentially still has bugs, and they did not fix them in the 8.2 tree as 9.x was pending so close.
I have paid no mind or attention to the 9.x tree at all myself, and won't until it gets a tad more stable.
Additionally, there are still 4.x versions that are extremely stable and secure and running over the Internet's backbones.
Just because the version is older doesn't mean it automatically has bugs.
Some people either know or feel more comfortable with the 4.x zone files than they do with 8.x.
They should not be forced to upgrade if they don't want to.
It's the same with 8.x to 9.x.
Most of the changes are not security or stability fixes anyway, only new features.
--Jon
Re:references, bits, and pieces. (Score:2)
It is very likely that the root node would run a stripped version of BIND. This is certainly done on some of the nodes.
Centralization is *not* the answer. (Score:3, Insightful)
This is true, to an extent. Different and widely spread organizations run the root name servers, using different OS's, hardware configurations, and network connectivity.
Concentrating and centralizing the root name servers would defeat the diversity that now exists. If one goes down, the others pick up the load. If there's a fatal hardware bug in one, it probably won't affect the servers running on different hardware. And, most of all, a single business or management failure will not disrupt root nameservice. Whoever in the EU (I suspect it's some ex-communist bureaucrat who loves centralized authority) thinks that things are bad now should read RFC 2870, Root Name Server Operational Requirements [isi.edu], and get a clue.
Sigh.... (Score:1)
For those who do not know what OpenNIC is, here is their description:
Bring back IANA... (Score:2)
What a joke... (Score:3, Interesting)
Huh?
What did I miss? We all have to meet requirements with respect to uptime and service availability, whether you're a five-nines shop (God help you) or not. Why should ICANN be any different?
Cheers,
-- RLJ
No, things have changed. (Score:2)
Unless people get smart and dump M$, it's hard for anyone to guarantee any service. It's kind of like planning to meet someone on Bourbon Street for Mardi Gras: your voice will be lost in the noise. All the resources in the world won't protect you from irresponsible net usage.
By the way, 13 is 1.08333... dozen.
This seems like a moot point (Score:1)
2.3 At any time, each server MUST be able to handle a load of requests for root data which is three times the measured peak of such requests on the most loaded server in then current normal conditions. This is usually expressed in requests per second. This is intended to ensure continued operation of root services should two thirds of the servers be taken out of operation, whether by intent, accident, or malice.
I think that is the guarantee.
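The arithmetic behind that RFC 2870 clause is easy to check; the measured peak query rate below is a made-up example figure:

```python
# Quick check of the RFC 2870 sizing rule quoted above: each root must
# handle 3x the measured peak of the busiest server, so the system keeps
# working even after losing two thirds of the servers. The peak rate is
# a hypothetical figure, not a real measurement.
peak_qps_busiest = 4000                  # hypothetical peak, queries/sec
required_per_server = 3 * peak_qps_busiest

servers_total = 13
servers_surviving = servers_total - (2 * servers_total) // 3   # 5 left

# The survivors together must still cover the whole system's peak load:
total_capacity_left = servers_surviving * required_per_server
print(required_per_server, servers_surviving, total_capacity_left)
assert total_capacity_left >= servers_total * peak_qps_busiest
```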
Some points to think about (Score:3, Interesting)
Given the nature of how DNS works, and how the root servers are run, how can ICANN guarantee anything? (it can't) If they do provide some sort of guarantee then haven't they added a financial incentive for someone to DOS the root servers?
The Europeans are asking for something that cannot be delivered (currently), and if they get it the chances increase that someone will DOS the servers for some financial gain. (i.e. your server went down, I now don't have to pay you x dollars). If I was ICANN I wouldn't want to sign an agreement. It may be time for ICANN to change the way it does business, and the "ad hoc" nature that the root servers are maintained may have to change. DNS the protocol itself needs to be very carefully looked at as well.
The root servers should be a co-op (Score:2)
Re:Socialised root servers? (Score:1)
Add 'no corporations nor governments' to your statement, and you have me 50% sold. Just don't call them 'socialist' servers; there are people whose support would be needed in this idea who, like you mention, think that socialism = communism = red = bad, long-bearded dude who smokes stinking cigars.
Snap out of it, people, it's time to wake up.
Re:Socialised root servers? (Score:2)
Ah, so you want to exclude this new system from controlling
RFC on Root Servers (Score:1)
So... (Score:1)
ICANN do no wrong.
Too close for comfort (Score:1)
Bah. (Score:1)
When asked for comment, a representative stated, "What? Am I my server's keeper?"
(note misspelling of ICANN in the article)
Always puzzled me. (Score:2)
But I could be completely wrong. I do think that DNS records should also include rudimentary routing info that helps the rest of the world find that last hop to my network, since my ISP will not route for me. And I also think that DNS should have the ability to have a PORT record, so that when doing a DNS lookup the person looking me up can be directed to service ports within my IP. www.foo.com could live on port 8090, for instance, because cable modem companies sometimes block port 80. That way when www.foo.com gets looked up, the client not only gets the IP but also the port on the server to connect to, so users don't have to type stupid URLs like http://www.foo.com:8090; DNS takes care of passing the 8090 as part of the lookup reply.
I am working on the RFC for this since there doesn't seem to be one.
Re:Always puzzled me. (Score:2)
Re:Always puzzled me. (Score:2)
Perhaps a closer read the next time, instead of just skimming for flamebait.
To reiterate and expand... If a user such as one on a cable modem wants to have a web site, and the ISP blocks port 80, then if DNS had the ability to pass port information with the DNS reply, the user could have www.foo.com as the URL leading to the site instead of www.foo.com:8090.
Another example: I have one IP and I want two sites, but they live on different boxes. The same rationale applies: one could live directly on 80, and with DNS carrying the port number the other could live on 8090, and they can both have simple names, www.foo.com and www.bar.com. (I am actually running into this problem now, as I have a domain already, and my girlfriend would like to have one as well.)
I have been the DNS admin for several Internet domains for 5+ years in a professional capacity. I have been on the Internet in one capacity or another for 10+ years. I remember a time before the web even existed.
Re:Always puzzled me. (Score:2)
I'm still wondering what ports have to do with DNS, and why you'd want port information attached to your DNS if you're running more than one service. Even assuming that this wasn't doable in other ways that didn't require major changes to DNS, 99% of the time services will be running on their standard ports on legit servers anyway. (BTW, your ISP blocks port 80 because running servers is against the AUP. That means it's not a legit server.)
Re:Always puzzled me. (Score:2)
BTW, I don't believe that ISPs should be able to limit what you do with the bandwidth; call my desire to help people who have their port 80 blocked civil disobedience...
I am done with this conversation because obviously you people have so limited a view of things that you can't open your minds enough to understand what it is that I am trying to accomplish.
Re:Always puzzled me. (Score:2)
Re:Always puzzled me. (Score:2, Informative)
The most basic factor is that the DNS specification imposes an obsolete 512-byte limit on the size of UDP DNS packets. (DNS can run on TCP, but the overhead is much higher than with UDP.)
Since reply packets often contain many resource records, and DNS names can be up to 255 bytes each, you can see that one can brew up server names that would strongly press that 512 byte limit even with two servers. Fortunately, server names are usually not all that long.
DNS name compression comes into play to help, and that situation has improved since most root servers now support root-servers.net as the right hand part of their names.
Internationalization of domain names under the ACE rules coming out of the IETF will work the other way - internationalized server names will tend to be longer than the a.root-servers.net form that we see today.
Now, just because we see one NS record in a list of servers doesn't mean that there is only one computer there - or even that it is in one place. Many servers are actually clusters that are hiding behind load balancers.
And with IP "anycast" technology (essentially a way of establishing multiple instances of the same address block by using localized more specific route announcements) we can have as many servers as we want at the same apparent address but located in widely scattered locations around the world.
Oh, by-the-way, don't fall into the belief that the names/addresses listed in the "hints" file are the root - those addresses merely serve as a way to find a single root server. That server, in turn, will provide the actual set of root servers. That's why the hints file is called "hints" - it's just there to get the ball rolling.
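The pressure on that 512-byte limit can be estimated roughly. The per-record byte counts below are simplified approximations of DNS wire format with name compression, not exact sizes:

```python
# Rough, hedged estimate of how much of the classic 512-byte UDP limit a
# root server list consumes. The per-record byte counts are simplified
# approximations of compressed DNS wire format, not exact encodings.
HEADER = 12        # fixed DNS header
QUESTION = 5       # root question: "." + QTYPE + QCLASS (approx.)
NS_RECORD = 16     # compressed owner + fixed fields + "x" + name pointer
A_RECORD = 18      # compressed name + fixed fields + 4-byte address

def response_size(n_roots: int) -> int:
    """Approximate bytes for an NS set plus one A glue record per root."""
    return HEADER + QUESTION + n_roots * (NS_RECORD + A_RECORD)

print(response_size(13))   # 13 roots squeeze in under 512 bytes
print(response_size(20))   # a few more roots would overflow classic DNS
```

Under these assumptions 13 servers fit with little room to spare, which is consistent with the claim that adding many more roots needs either shorter names or bigger messages.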
Re:Always puzzled me. (Score:2, Informative)
The countervailing force is EDNS0, which will allow 4096 byte UDP-based DNS messages. And BIND 8.3.0, recently released, supports EDNS0. f's already running it. Once 8.3.0 is fully deployed on the roots, I think additional root name servers are just a quick hack away:
- System query without EDNS0: You get 13 root name servers
- System query with EDNS0: You get more
Re:Always puzzled me. (Score:2)
Been there, done that. It is called the SRV record and it works in the same way as the email MX record.
Not supported in any of the browsers yet, but it is used extensively in W2K for other purposes.
Re:Always puzzled me. (Score:2)
Re:Always puzzled me. (Score:2)
Re:Always puzzled me. (Score:2, Informative)
I was half mistaken.
RT is not the record of interest for ports, SRV is.
This is from chapter 15.7.6
Quoting the book (and all credits due)
~~~~~
The experimental SRV record, introduced in RFC 2052, is a general mechanism for locating services. SRV also provides powerful features that allow domain administrators to distribute load and provide backup services, similar to the MX record.
A unique aspect of the SRV record is the format of the domain name it's attached to. Like service-specific aliases, the domain name to which an SRV record is attached gives the name of the service sought, as well as the protocol it runs over, concatenated with a domain name. So, for example:
ftp.tcp.movie.edu
would represent the SRV records someone ftping to movie.edu should retrieve in order to find the movie.edu FTP servers, while:
http.tcp.www.movie.edu
represents the SRV records someone accessing the URL http://www.movie.edu/ should look up in order to find the www.movie.edu web servers.
~~~~~~~~~~~
Hope this helps
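The load-distribution behavior the book mentions works through the priority and weight fields in each SRV record: the lowest priority group wins, and weight splits traffic among records within that group. A rough sketch of the selection logic (the record data here is invented for illustration; a real client follows the weighted-selection algorithm in the SRV specification):

```python
import random

# Hypothetical SRV records: (priority, weight, port, target)
srv_records = [
    (10, 60, 21, "ftp1.movie.edu"),    # primary, ~60% of traffic
    (10, 40, 21, "ftp2.movie.edu"),    # primary, ~40% of traffic
    (20,  0, 21, "backup.movie.edu"),  # only used if priority 10 is down
]

def pick_srv(records):
    """Pick a target: lowest priority group, weighted-random within it."""
    best = min(r[0] for r in records)
    group = [r for r in records if r[0] == best]
    weights = [r[1] for r in group]
    if sum(weights) == 0:
        return random.choice(group)  # all-zero weights: pick uniformly
    return random.choices(group, weights=weights)[0]

priority, weight, port, target = pick_srv(srv_records)
print(target)  # ftp1.movie.edu or ftp2.movie.edu; never the backup
```

This is the MX-like behavior the quote alludes to, plus the port number - which is the piece MX records never had.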
ICANN's problem is... (Score:2)
I know, I know, what a troll...but sometimes I get so fed up...
-h-
Wildly inaccurate (Score:2, Informative)
This is totally inaccurate. If you are searching for www.bbc.co.uk, your computer asks the local DNS cache (listed in /etc/resolv.conf, unless you have some broken OS). That cache then asks a root server for www.bbc.co.uk (if that information has not already been cached). This produces a referral to the .uk nameservers. The process continues for co.uk and bbc.co.uk as necessary. Note that it does not ask the closest root server, because the cache has no way to know which one that is. BIND uses the "fastest" server (until it overloads from all the other BIND servers using this strategy); djbdns's dnscache picks one at random.
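The referral chain described above can be sketched as a toy walk over a made-up delegation table (the zone contents and the placeholder address are invented; real resolution involves actual queries, glue records, and caching at every step):

```python
# Toy delegation tree: each zone either refers the cache further down
# or answers. This models the root -> uk -> co.uk -> bbc.co.uk chain.
DELEGATIONS = {
    ".":         {"referral": "uk"},
    "uk":        {"referral": "co.uk"},
    "co.uk":     {"referral": "bbc.co.uk"},
    "bbc.co.uk": {"answer": "212.58.x.x"},  # placeholder address
}

def resolve(name):
    """Follow referrals from the root until a zone answers.

    The name argument is unused in this toy - every zone in the table
    lies on the one path we are modeling.
    """
    zone = "."
    path = [zone]
    while True:
        entry = DELEGATIONS[zone]
        if "answer" in entry:
            return entry["answer"], path
        zone = entry["referral"]
        path.append(zone)

addr, path = resolve("www.bbc.co.uk")
print(path)  # → ['.', 'uk', 'co.uk', 'bbc.co.uk']
```

In practice the cache remembers each referral, so the root is only consulted when the .uk delegation has expired from the cache - which is why root load is far lower than one query per lookup.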
One way to avoid delays at the root servers is to run your own local root server, and periodically download the root zone. This [open-rsc.org] shows you how to do it using the ORSC root zone, but you can do it with the standard root as well. You can AXFR it from one of the root servers. Then you tell your local cache to use your local root as the root server.
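As a sketch of that slave-root approach (the address and file name are illustrative - confirm which root servers actually permit AXFR before relying on this), the BIND configuration would look something like:

```
// named.conf: mirror the root zone locally via zone transfer,
// then point the local cache at this server as its root.
zone "." {
    type slave;
    file "root.zone";
    // f.root-servers.net (192.5.5.241) has historically allowed
    // AXFR of the root zone; verify before depending on it.
    masters { 192.5.5.241; };
};
```

The trade-off is that a stale local copy silently serves outdated delegations until the next successful transfer, so the refresh/expire timers matter more than usual here.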
My suggestions regarding DNS stability/security... (Score:2, Interesting)
http://www.cavebear.com/rw/steps-to-protect-dns [cavebear.com]
Don't let the fact of 12 or 13 servers lull one into a sense of security - they are all fed data from the same source, and if that source is corrupted, then all the root servers will be corrupted. And that's not a hypothetical.
Also, because all of the root servers run a nearly common code base, they are potentially vulnerable to a common weakness.
In addition, one need not bring down a server to take it off-line: an attacker need merely saturate the network in the vicinity of a target server so that no good traffic can get through. An even scarier notion is corruption of Internet routing, so that packets flowing to DNS server addresses are forwarded out router interface null0.
Nominet, DENIC et.al. shouldn't complain (Score:3, Informative)
If I read this correctly, the reason why the EU local registries don't have their own root servers, and hence control over service levels is a historical issue [isc.org].
Excerpting from the Internet Software Consortium's page, linked above - and please allow me to state that such a reference is anecdotal rather than established fact.
The "one in Europe" btw was NOT Nominet or another registrar, it was a guy working for LINX, the London INternet eXchange.
There's good reason for this: as late as the early 1990s, Europe was still convinced that X.500 was the way forward, and a large amount of resources from universities, telcos and local standards agencies was devoted to "interoperability" testing of X.500 directory services. What really happened was that the standards lagged the implementations so badly that vendors and implementors went ahead and did their own thing, creating, as anyone who has dealt with X.500 knows, a nightmare for inter-vendor interoperability. That created the space in which the Internet and DNS/BIND could flourish. FWIW, LDAP is (not precisely, so please don't flame me - too large a subject for absolute accuracy here) a lightweight derivative of DAP, the access protocol for X.500 directories. Novell's eDirectory, which runs some of the largest sites (CNN.com, AOL messenger services), is itself a souped-up LDAP implementation.
You can find a brief overview of X.500 and what the "authorities" in Europe were up to as late as 1990 and beyond in this history of X.500 [salford.ac.uk]
I'm British-born myself, but this all seems to me to be Euro-whining. Particularly the UK's Nominet making an issue of this is absolute BS. Nominet has, IMO, very sharp practices. If you "buy" a domain in the UK (domain.co.uk) via an ISP, Nominet maintains a "tag" linking your domain to the providing ISP until another ISP takes it over. Domains _never_ go back into circulation when they expire. Nominet refuses, on the whole - unless you threaten or cajole them with considerable effort - to "release" your domain, because it states it will not get involved in contractual disputes between you and your ISP. Most UK ISPs write contracts which lock you in to their services and charge a considerable and hefty severance fee, usually buried in the small print. You _can_ get a "Neutral Tag" applied to a UK domain if you pay GBP £80 for two years, which fee goes back to the ISPs who are members of Nominet - a for-profit company limited by guarantee, a rare form of UK company which offers very lax statutory reporting. Even though you _can_ do all this, I've had several clients now who've complained to Nominet, e.g. when their ISP is TU and no longer provides service, and Nominet tells them anyway that they can only deal with an ISP who is a member of Nominet. Obviously that's BS. But you can't register a domain in the UK for
Sorry for that rant against Nominet, but it's Crocodile Tears time again and minus several million points for the Brits, as per usual.
Please follow the links above, investigate yourself . . .
Re:Nominet, DENIC et.al. shouldn't complain (Score:2)
Oh, for heaven's sake!
Anyone can be a Nominet tag holder [www.nic.uk]. I'm a tag holder myself. You don't have to be an ISP. You don't have to run your own DNS. If you want complete control over your domain, just register your own tag.
Some issues (Score:4, Insightful)
Reassigning a root server address is hard because the operator likely has other machines in the address block whose numbers would also have to change.
The EU concern is not irrational: it is pretty weird that the root zone is essentially a volunteer effort, given that the costs are not negligible and the responsibility immense.
Against this however there is a major political issue at stake. The root operators are in effect the arbiters of the DNS. If ICANN gets too big for its boots they are a check on it.
The other issue is that there are very few companies that could credibly manage the root zone on a contractual basis. It is one thing to run a server on a volunteer basis, quite another to provide a service guarantee.
One thing in the pipe may well ease some of these concerns: anycast addressing, which allows multiple servers to sit on the same IP address. The packets are routed to the 'nearest' machine. That will allow the deployment of additional root servers. It will also address some of the denial-of-service concerns.