In message <CADJJukkadFbOYvWVan_8pdR=fxenqGRsyisiKBH6vpyDse6JrQ@mail.gmail.com>, Masood Ahmad Shah writes:
On Oct 21, 2016, at 6:35 PM, Eitan Adler <lists@eitanadler.com> wrote:
[...]
In practice TTLs tend to be ignored on the public internet. In past research I've been involved with, browser[0] behavior was effectively random despite the TTL set.
[0] more specifically, the chain of DNS resolution and caching down to the browser.
Yes, but the fact that caches out there can behave both better and worse than your TTLs does not mean you can ignore properly working implementations.
If the device chain at the other end breaks you, that's their fault and out of your control. If your own settings break you, that's your fault.
+1 to what George wrote: we should make efforts to improve our part of the network. There are ISPs that ignore TTL settings and only update their cached records every two to three days or even less often (particularly the smaller ones). That results in your DNS data being inconsistent, but caching DNS records at multiple levels is very common. It's an effort that everyone needs to contribute to.
For TTLs there is a tension between being able to update with new data and resilience when servers are unreachable. For zone transfers we have three timers, refresh, retry, and expire, to deal with this tension. If we were doing DNS from scratch there would be at least two TTL values: one for freshness and one for "don't use past".

Additionally, a lot of the need for small TTLs is because clients don't fail over to secondary addresses in a reasonable amount of time. There is no reason for this other than poorly designed clients. A client can fail over using sub-second timers; we do this for Happy Eyeballs, and the strategy is viable for ALL connection attempts.

Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: marka@isc.org
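For reference, the three zone-transfer timers Mark mentions live in the zone's SOA record. The values below are illustrative, not recommendations:

    example.com. 3600 IN SOA ns1.example.com. hostmaster.example.com. (
            2016102101  ; serial
            7200        ; refresh: how often secondaries poll for new data
            900         ; retry: poll interval after a failed refresh
            1209600     ; expire: stop serving the zone after this long without contact
            3600 )      ; negative-caching TTL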
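A minimal sketch of the hypothetical two-TTL idea, in Python. The CacheEntry shape and the names freshness_ttl and max_use_ttl are invented for illustration, not any resolver's actual API:

    import time

    class CacheEntry:
        """Cache entry carrying two TTLs: freshness_ttl says when to
        re-query; max_use_ttl is the hard "don't use past" limit."""

        def __init__(self, rdata, freshness_ttl, max_use_ttl):
            self.rdata = rdata
            self.fetched = time.monotonic()
            self.freshness_ttl = freshness_ttl
            self.max_use_ttl = max_use_ttl

        def answer(self, servers_reachable):
            age = time.monotonic() - self.fetched
            if age < self.freshness_ttl:
                return self.rdata   # fresh: answer from cache
            if age < self.max_use_ttl and not servers_reachable:
                return self.rdata   # stale, but usable while servers are down
            return None             # must re-query (or fail)

(The later "serve stale" work, RFC 8767, standardizes essentially this second timer.)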
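And a rough sketch of the sub-second failover Mark describes, Happy Eyeballs style, using non-blocking connects on POSIX-ish systems. The 250 ms stagger and 5 s overall timeout are illustrative assumptions:

    import errno
    import selectors
    import socket
    import time

    ATTEMPT_DELAY = 0.25    # stagger before starting the next attempt
    OVERALL_TIMEOUT = 5.0   # give up entirely after this long

    def happy_connect(host, port):
        """Try each of host's addresses, starting a new non-blocking
        connect every ATTEMPT_DELAY seconds rather than waiting for
        the previous attempt to time out. First to succeed wins."""
        addrs = [(ai[0], ai[4]) for ai in
                 socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)]
        sel = selectors.DefaultSelector()
        pending = set()
        deadline = time.monotonic() + OVERALL_TIMEOUT

        def finish(winner):
            for s in pending:           # abandon the losing attempts
                if s is not winner:
                    s.close()
            sel.close()
            winner.setblocking(True)
            return winner

        while (addrs or pending) and time.monotonic() < deadline:
            if addrs:                   # kick off the next attempt
                family, sockaddr = addrs.pop(0)
                s = socket.socket(family, socket.SOCK_STREAM)
                s.setblocking(False)
                err = s.connect_ex(sockaddr)
                pending.add(s)
                if err == 0:            # connected immediately (e.g. localhost)
                    return finish(s)
                if err in (errno.EINPROGRESS, errno.EWOULDBLOCK):
                    sel.register(s, selectors.EVENT_WRITE)
                else:
                    pending.discard(s)
                    s.close()
            if not sel.get_map():       # nothing in flight; try the next address
                continue
            # Wait briefly for any in-flight attempt to complete.
            timeout = ATTEMPT_DELAY if addrs else deadline - time.monotonic()
            for key, _ in sel.select(timeout=max(timeout, 0)):
                s = key.fileobj
                sel.unregister(s)
                if s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
                    return finish(s)
                pending.discard(s)      # this attempt failed; move on
                s.close()
        for s in pending:
            s.close()
        raise OSError("no address for %s:%s answered in time" % (host, port))

Calling happy_connect("www.example.com", 80) then behaves like socket.create_connection(), except that a dead first address costs roughly 250 ms instead of a full TCP connect timeout, which is the point: with fast client failover, you don't need tiny TTLs just to route around a dead server.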