
On Mon, Mar 12, 2012 at 10:42 PM, Josh Hoppes <josh.hoppes@gmail.com> wrote:
> On Mon, Mar 12, 2012 at 8:01 PM, William Herrin <bill@herrin.us> wrote:
>> Which would be just dandy for mobile IP applications.
>
> DNS handles many millions of records, sure, but that's because it was
> designed with caching in mind. DNS changes are rarely made at the rapid
> rate I think you're suggesting, except by those who can stand the brunt
> of five-minute time-to-live values. I think it would be insane to try to
> set a TTL much lower than that, but even that seems to work counter to
> the idea of sub-ten-second loss. If you cut down caching as
> significantly as this idea suggests, I would expect scaling to take a
> plunge.
Hi Josh,

Actually, there was a study presented a few years ago; I think it was at
a fall NANOG. At any rate, a gentleman at a university studied the
impact of adjusting the DNS TTL on the query count hitting his
authoritative server. IIRC he tested ranges from 24 hours down to 60
seconds. In my opinion he didn't control properly for browser DNS
pinning (which would tend to suppress the query count), but even with
that taken into account, the increase in queries due to decreased TTLs
was much smaller than you might expect.
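For intuition (this model is my own back-of-envelope illustration, not
from the study): a busy resolver cache issues at most one upstream query
per TTL window per name, so authoritative load is bounded by
min(request rate, 1/TTL). Cutting the TTL only bites where the request
rate already exceeds 1/TTL:

```python
# Back-of-envelope model (illustrative only): upstream queries/sec
# generated by a single resolver cache for a single name.
def upstream_qps(request_rate, ttl):
    """Authoritative queries/sec from one resolver for one record:
    at most one query per TTL window, and never more than the
    client request rate itself."""
    if ttl <= 0:
        return request_rate  # caching disabled entirely
    return min(request_rate, 1.0 / ttl)

# A resolver fielding 50 client requests/sec for one name still
# sends well under one query/sec upstream even at a 60-second TTL.
for ttl in (86400, 3600, 300, 60):
    print("TTL %6d -> %.5f qps upstream" % (ttl, upstream_qps(50, ttl)))
```

For rarely queried names the cache was hardly helping anyway, which is
one reason the measured increase stays modest.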
> Also consider the significantly increased load on DNS servers from
> handling the constant stream of dynamic DNS updates needed to make this
> possible, and that you have to find some reliable trust mechanism to
> handle those updates, because without one you've just made
> man-in-the-middle attacks a little bit easier.
That's absolutely correct. We would see a tenfold increase in load on
the naming system, and could see as much as a two-orders-of-magnitude
increase. But not on the root -- that load increase is distributed
almost exclusively to the leaves. And DNS has long since proven it can
scale up many orders of magnitude more than that. By adding servers, to
be sure... but the DNS job parallelizes trivially and well. Route
processing, as with BGP, doesn't.

And you're right about implementing a trust mechanism suitable for such
an architecture. There's quite a bit of cryptographic work already
present in DNS updates, but I frankly have no idea whether it would hold
up here or whether something new would be required. If it can be reduced
to "hostname and DNS password" -- and frankly I'd be shocked if it
couldn't be -- then the problem should be readily solvable.
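To make the "hostname and DNS password" idea concrete, here's a toy
sketch of a shared-secret check in the spirit of TSIG (RFC 2845). Real
TSIG signs the actual DNS wire-format message; the sign_update and
verify_update names, and the plain-string stand-in for the update, are
mine, purely for illustration:

```python
import hashlib
import hmac

# Toy TSIG-style check: the host proves knowledge of its shared
# secret ("DNS password") by MACing the update it submits. This is
# NOT the DNS wire protocol, just the shared-secret idea.
def sign_update(update, password):
    return hmac.new(password.encode(), update.encode(),
                    hashlib.sha256).hexdigest()

def verify_update(update, password, mac):
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign_update(update, password), mac)

update = "host1.example.com. 30 IN A 203.0.113.7"
mac = sign_update(update, "per-host-secret")
print(verify_update(update, "per-host-secret", mac))  # True
print(verify_update(update, "wrong-secret", mac))     # False
```

A man in the middle who tampers with the record data invalidates the
MAC, which is the property Josh is asking for.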
> That said, I might be misunderstanding something. I would like to see
> that idea elaborated.
From your questions, it sounds like you're basically following the concept.
I sketched out the idea a couple of years ago, working through some of
the permutations. The MPTCP working group has been chasing some of the
concepts for a while too, though last I checked they'd fallen into one
of the major architectural pitfalls of shim6: trying to bootstrap the
address list instead of relying on a mapper.

The main problem is that we "can't get there from here." No set of
changes modest enough not to be another "IPv6 transition" gets the job
done. We'd need to entrench smaller steps in the direction of such a
protocol first. Like enhancing the sockets API with a variant of
connect() which takes a host name and service name and returns a
connected, protocol-agnostic socket. Today, that's just some
under-the-hood calls to a non-blocking getaddrinfo() and some
parallelized connect()s that happen to work better, and be an easier
choice, than what most folks could write for themselves. But in the
future, it's a socket connection call which receives all the knowledge a
multi-addressed protocol needs to get the job done without further
changes to the application's code.

Or, if I'm being fair about it, doing what the MPTCP folks are doing and
then following up later with additional enhancements to call out to DNS
from the TCP layer.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com  bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
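P.S. A minimal sketch of that connect-by-name call, in Python for
brevity (connect_by_name is my name for it; this is the serial version,
where a real implementation would race the connect()s in parallel,
happy-eyeballs style):

```python
import socket

def connect_by_name(host, service):
    """Resolve host/service and return a connected stream socket,
    trying each returned address family and address in turn.
    Illustrative serial version of a connect-by-name API."""
    err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, service, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        try:
            s.connect(addr)
            return s  # first address that answers wins
        except OSError as e:
            s.close()
            err = e
    raise err or OSError("no usable addresses for %r" % host)
```

The application never sees addresses at all, which is exactly the hook a
multi-addressed transport would need. Python's own
socket.create_connection() already does roughly this serial version.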