On Fri, Aug 17, 2012 at 04:32:45PM -0400, valdis.kletnieks@vt.edu wrote:
> I think John's issue is that he's seeing those other queries *not* benefiting from the caching because they get pushed out by DNSBL queries that will likely never be used again. You don't want your cached entry for www.google.com to get pushed out by a lookup for a dialup line somewhere in Africa.
Oh, yes, I see. You're right, I misread it. But the proposed solution still seems wrong to me.

If the entry for www.google.com gets evicted by a new cache candidate that is never going to be used again, the cache is simply too small (or else it doesn't have enough traffic, and you shouldn't have a cache there at all). The cache needs to be big enough that it has a thrashy bit that is getting changed all the time: those are the records that go into the cache and then die without being queried again. If the problem is that there's some other record in the cache that might be queried again, but isn't queried often enough to stay alive, then the additional cost of the occasional recursive lookup is just not that big a deal. (A toy illustration of that trade-off is sketched below the signature.)

Best,

A

-- 
Andrew Sullivan
Dyn Labs
asullivan@dyn.com
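[For what it's worth, here is a minimal sketch of the behaviour under discussion, assuming an idealized LRU cache in plain Python; the class and the hostnames are made up for illustration and bear no resemblance to a real resolver's data structures. It shows a steadily-queried record surviving interleaved one-shot DNSBL lookups as long as the cache has room for the thrashy tail, and getting pushed out only when the cache really is too small.]

    from collections import OrderedDict

    class LRUCache:
        """Toy LRU cache: the least recently used entry is evicted first."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()

        def lookup(self, name):
            if name in self.entries:
                self.entries.move_to_end(name)    # refresh recency on a hit
                return True
            # Miss: pretend we did the recursive lookup, then cache the answer.
            self.entries[name] = "answer"
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict the oldest entry
            return False

    def run(capacity, oneshots_per_hit):
        """One popular name queried steadily, interleaved with one-shot lookups."""
        cache = LRUCache(capacity)
        hits = 0
        for i in range(1000):
            hits += cache.lookup("www.google.com")
            for j in range(oneshots_per_hit):
                cache.lookup(f"{i}.{j}.dnsbl.example")  # never queried again
        return hits

    # Large enough to hold the thrashy bit: the popular record stays cached.
    print(run(capacity=100, oneshots_per_hit=10))  # ~999 hits
    # Too small: the one-shot lookups push it out between its own queries.
    print(run(capacity=5, oneshots_per_hit=10))    # 0 hits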