In a message written on Wed, Mar 26, 2003 at 04:09:06AM -0500, Sean Donelan wrote:
> For example, Al Jazeera had the time-to-live on their domain records set to 15 minutes, making them even more vulnerable to increased load on their systems. Of course, Al Jazeera had other problems too.
This is very much a double-edged sword. If they decided to add more servers, Akamaize, or set up traditional mirrors, a long TTL would have prevented a large number of people from using them as ISPs continued to serve up cached entries. Imagine not being able to get new servers properly loaded for a week because you followed more traditional TTL guidelines.

For web content, I would recommend a TTL approximately equal to the longest time you would expect a single user to spend on your web site. The worst case, if every user had to ask your DNS servers directly, would then be one query for every web visitor. Given that there will be caching at larger ISPs and the like, the actual load will be even lower. If you can scale your web infrastructure to serve the pages to that number of users, scaling DNS to answer one query for each of those users should be a trivial exercise.

In addition to making it easier to change the service on the fly and have those changes take effect, a short TTL also makes life easier for smaller companies that cache content. How many small ISPs or corporate caching servers keep entries around for a week or more because one person spent 10 minutes at one site? The shorter TTL allows these boxes to get rid of the junk entries much faster.

Depending on your web content, I'd recommend TTL values of 15 minutes to 1 hour.

-- 
Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
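[To put rough numbers on the worst case described above, here is a back-of-the-envelope sketch in Python. The visitor count is purely hypothetical; the point is only that one authoritative DNS query per visitor is a small load compared with serving the pages themselves.]

    # Worst case from the argument above: every web visitor costs one query
    # against the authoritative DNS servers (no intermediate caching at all).
    # The visitor count below is invented purely for illustration.

    SECONDS_PER_DAY = 86_400

    def worst_case_dns_qps(visitors_per_day: float) -> float:
        """One authoritative DNS query per web visitor, averaged over a day."""
        return visitors_per_day / SECONDS_PER_DAY

    if __name__ == "__main__":
        daily_visitors = 5_000_000  # hypothetical busy news site
        qps = worst_case_dns_qps(daily_visitors)
        print(f"~{qps:.0f} queries/sec worst case")  # ~58 qps

[Even at five million visitors a day, the worst case works out to roughly 58 queries per second on average, which any infrastructure capable of serving that much web traffic should handle trivially.]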