On Mon, Mar 12, 2012 at 8:01 PM, William Herrin <bill@herrin.us> wrote:
But suppose you had a TCP protocol that wasn't statically bound to the IP address by the application layer. Suppose each side of the connection referenced the other by name, TCP expected to spread packets across multiple local and remote addresses, and suppose TCP, down at layer 4, expected to generate calls to the DNS any time it wasn't sure what addresses it should be talking to.
DNS servers can withstand the update rate. And the prefix count is moot. DNS is a distributed database. It *already* easily withstands hundreds of millions of entries in the in-addr.arpa zone alone. And if the node gets even moderately good at predicting when it will lose availability for each network it connects to and/or when to ask the DNS again instead of continuing to try the known IP addresses, you can get to where network drops are ordinarily lossless and only occasionally result in a few packet losses over the course of a single-digit number of seconds.
Which would be just dandy for mobile IP applications.
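If I'm reading that right, the rough shape of the idea in user-space terms is something like the Python sketch below. The class name, timeouts, and re-resolution interval are made up for illustration, and a real implementation would live at layer 4 inside the TCP stack rather than on top of the sockets API:

    import socket
    import time

    class NameBoundConnection:
        # Sketch only: the peer is identified by a name, not one address,
        # and the name is re-resolved whenever the cached answer looks stale.
        def __init__(self, hostname, port, reresolve_after=30):
            self.hostname = hostname
            self.port = port
            self.reresolve_after = reresolve_after  # seconds before asking DNS again
            self.addresses = []
            self.resolved_at = 0.0

        def current_addresses(self):
            # Ask the DNS again instead of clinging to known addresses.
            if not self.addresses or time.time() - self.resolved_at > self.reresolve_after:
                infos = socket.getaddrinfo(self.hostname, self.port,
                                           proto=socket.IPPROTO_TCP)
                self.addresses = [info[4][0] for info in infos]
                self.resolved_at = time.time()
            return self.addresses

        def send(self, payload):
            # Spread attempts across every address the name currently maps to.
            for addr in self.current_addresses():
                try:
                    with socket.create_connection((addr, self.port), timeout=2) as s:
                        s.sendall(payload)
                        return addr
                except OSError:
                    continue
            # Every known address failed: forget them so the next call
            # goes back to the DNS rather than giving up on the peer.
            self.addresses = []
            raise ConnectionError("no reachable address for " + self.hostname)

The interesting part is the failure path: when every known address stops working, the cache is thrown away and the next send asks the DNS again rather than abandoning the peer, which is where the scaling questions below come in.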
DNS handles many millions of records, sure, but that's because it was designed with caching in mind. DNS changes are rarely made at the rate I think you are suggesting, except by those who can stand the brunt of 5-minute time-to-live values. I think it would be insane to try to set a TTL much lower than that, yet that seems to work counter to the idea of sub-10-second loss. If you cut down caching as significantly as this idea seems to require, I would expect scaling to take a plunge. Also consider the significantly increased load on DNS servers from handling the constant stream of dynamic DNS updates needed to make this possible, and that you have to find some reliable trust mechanism for those updates, because without one you have just made man-in-the-middle attacks a little bit easier. That said, I might be misunderstanding something. I would like to see the idea elaborated.
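For what it's worth, the update side is easy enough to sketch; it's doing it securely and at scale that worries me. Something like the following (Python with dnspython; the zone, key name, secret, and server address are placeholders) is roughly what every mobile node would be issuing on each network change, signed with a shared TSIG key so the server can tell the update is genuine:

    import dns.query
    import dns.tsigkeyring
    import dns.update

    # Placeholder zone, key name, shared secret, and server address.
    keyring = dns.tsigkeyring.from_text({
        "mobile-update-key.": "c2VjcmV0LXNoYXJlZC1rZXktYmFzZTY0",
    })

    # Replace the host's A record with an aggressively short TTL of the
    # sort the idea would need, far below the 5-minute floor I'd call sane.
    update = dns.update.Update("example.net", keyring=keyring,
                               keyname="mobile-update-key.")
    update.replace("mobile-host", 5, "A", "192.0.2.10")

    # Send the signed update to the zone's primary server.
    response = dns.query.tcp(update, "198.51.100.1", timeout=5)
    print(response.rcode())  # NOERROR if the server accepted the update

Multiply that by every moving host, and the constant update load on the authoritative servers, plus managing all those shared keys, is exactly where I'd expect this to hurt.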