On Tue, Mar 13, 2001 at 04:14:50PM -0800, Mike Batchelor wrote:
That one root must be supported by a set of coordinated root servers administered by a unique naming authority.
Here is where I disagree.
Put simply, deploying multiple public DNS roots would raise a very strong possibility that users of different ISPs who click on the same link on a web page could end up at different destinations, against the will of the web page designers.
This is a problem that can be solved. The author of this document wants you to believe that it cannot be solved. That is his agenda, and it drives his foregone conclusion to exactly the place he wants it to go. The deck is stacked, I tell you! No argument for why it is unsolvable is even presented. The author takes it for granted that everyone agrees with him. You are just expected to know that it can't be solved!
It's hardly a stretch to figure out. Multiple root zones (in the true sense, not the pretend sense you suggest below) will arguably yield conflicting information over time, for reasons -- social, capitalistic, and otherwise -- I (or the author) shouldn't have to go into. If each root zone is unique (and they would have to be, else they would be coordinated and therefore not "multiple root zones"), there is nothing to stop one root zone from adding a {TLD,SLD} which already exists in another.
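To make that concrete, here is a toy sketch (the TLDs and nameservers are made up, not anyone's actual root data): two independently administered root zones, each adding TLDs as its operator sees fit, and the same name ending up delegated to different registries depending on which root you happen to ask.

    #!/usr/bin/env python3
    # Illustrative sketch only: model two independently administered root
    # zones as TLD -> nameserver maps and show how the same TLD can be
    # delegated to different places under each root.

    root_a = {
        "com.": "a.gtld-servers.example.",    # hypothetical delegation
        "web.": "ns1.registry-one.example.",  # TLD added by operator A
    }

    root_b = {
        "com.": "a.gtld-servers.example.",
        "web.": "ns1.registry-two.example.",  # same TLD, added independently by B
    }

    def delegation(root, tld):
        return root.get(tld, "NXDOMAIN")

    for tld in sorted(set(root_a) | set(root_b)):
        a, b = delegation(root_a, tld), delegation(root_b, tld)
        status = "consistent" if a == b else "CONFLICT"
        print(f"{tld:6} root A -> {a:30} root B -> {b:30} [{status}]")

Run it and "web." comes back as a conflict: the same link resolves through a different registry depending on which root your ISP chose.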
making for you? It's not a "value judgment" that using multiple roots with DNS results in inconsistencies, it's *the way DNS works*.
Yes, it is a value judgment. He has determined that the problem is insoluble. It isn't. And we don't have to abandon DNS as the nameservice protocol for the public name space. All that needs to be changed is how each recursing DNS client cache gets its glue for the root. Are we incapable of coming up with an out-of-band method for distributing a 60K text file that scales well? I think not.
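To restate that proposal concretely before I pick at it, an out-of-band method presumably looks something like the following sketch: fetch a hints file over HTTPS and refuse to install it unless it matches a digest published through some separate channel. The URL and digest here are placeholders, not a real distribution point.

    #!/usr/bin/env python3
    # Sketch of one possible out-of-band distribution scheme for root glue:
    # download a hints file and verify it against a checksum obtained
    # through a separate channel before installing it for the resolver.

    import hashlib
    import urllib.request

    HINTS_URL = "https://example.org/root.hints"  # placeholder source
    EXPECTED_SHA256 = "0" * 64                    # digest published out of band

    def fetch_and_verify(url, expected_sha256):
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
        if hashlib.sha256(data).hexdigest() != expected_sha256:
            raise ValueError("root hints failed verification; not installing")
        return data

    if __name__ == "__main__":
        hints = fetch_and_verify(HINTS_URL, EXPECTED_SHA256)
        with open("root.hints", "wb") as f:
            f.write(hints)  # a recursing cache would then be pointed at this file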
Hmm. A "60K text file that scales well" seems oxymoronic to me. It either scales, or it's 60K. :) Forgive the cheap shot there, but there's a point to it: If client caches have to get glue from/for as many different sources as feel like creating TLDs, that text file won't be 60K for long. There's no reason it couldn't end up being 60M eventually. Of course, a hierarchical glue system could be established -- oh wait, that would be coordinated. This is what I'm referring to above about pretend multiple root zones. Even if you put different pieces of the root zone on different servers, operated by different entities, the only way to ensure there are not conflicts is by coordinating the information contained in each. And if you're doing that, it's still a singular root zone, just distributed. And even if the coordination is done by those different people who operate their different servers in different organizations, those people make up a "unique naming authority". Who/where/what is the "trusted" source of the glue? (And not in a political sense, in a technical sense. Where do I point my client cache to get said glue?) No matter how much you want to distribute elements of the root zone, if conflicts must be avoided (as they must in this case) then there has to be a final word from somewhere to eliminate them.
And sometime in May, we'll have the complaints that IP addresses are political because they only allow 256 values per octet, and a class-action lawsuit is planned for the number 257, 258, -3, and all the fractions.
This is a matter of mathematics, not politics. How to get root glue to all clients that need it is a technical topic. Who should be the distributor of that glue is a political topic. This is the crux of the matter.
So, since 2826 never states who should be the distributor, it's not engaging the political topic in question... -c