* Michael Dillon:
> The volume of data cached would be so small in today's terms that it only needs a low-end 1U (or single blade) server to handle this.
The working set is larger than you think, I fear. I've been running something like this since summer 2004, and the gigabytes pile up rather quickly if you start with an empty database. If you restrict yourself to A records for plain second-level domains (SLDs) and SLDs prefixed with "www.", the task becomes somewhat easier, because you get rid of all the PTR-related stuff, and the NS RRs take their share, too. Of course, you can squeeze quite a bit of RAM into one rack unit, so your comment probably isn't that far off in the end. 8-)
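As a rough illustration, a filter along those lines might look like the following Python sketch (the record layout and the naive two-label SLD test are my assumptions here; multi-label public suffixes such as co.uk would need a proper suffix list):

    def is_sld_or_www(name):
        # True for "example.com" and "www.example.com"; everything
        # deeper in the tree is discarded.  Naive: treats the last
        # two labels as the SLD, which is wrong for e.g. co.uk.
        labels = name.rstrip(".").split(".")
        if len(labels) == 2:
            return True
        return len(labels) == 3 and labels[0] == "www"

    def keep(record):
        # Filter predicate for one captured record, assumed to be a
        # dict like {"name": ..., "type": ..., "data": ...}.  Keeping
        # only A records drops the PTR and NS data mentioned above.
        return record["type"] == "A" and is_sld_or_www(record["name"])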
> Since nothing like this exists on the market, the only way for ISPs to do this is to roll their own. Of course, it is likely that eventually someone will productize this and then you simply buy the box and plug it in. But for now, this is the type of thing that an ISP has to set up on their own.
Well, the data I collect is not authoritative enough for that purpose. My intent was to capture everything that could be served to some host on the network, while taking the possibility of broken resolvers into account. That's why I store the data without verifying its authenticity (which is generally very hard to do because DNS is not globally consistent). Plugging things directly into the caching resolver would give you access to its verification logic, but ISPs aren't really fond of doing this to their resolvers.
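To make the store-without-verification idea concrete, here is a minimal sketch, assuming an SQLite backend and an invented schema (the actual collector is certainly more involved): every answer seen on the wire is recorded verbatim, and contradictory answers for the same name simply coexist as separate rows, each with first-seen/last-seen timestamps.

    import sqlite3, time

    db = sqlite3.connect("passive-dns.sqlite")
    db.execute("""
        CREATE TABLE IF NOT EXISTS rrs (
            name TEXT, rrtype TEXT, rdata TEXT,
            first_seen INTEGER, last_seen INTEGER,
            PRIMARY KEY (name, rrtype, rdata))""")

    def record_answer(name, rrtype, rdata):
        # Store the observed answer as-is; no attempt is made to
        # decide which of several conflicting answers is authentic.
        now = int(time.time())
        db.execute("""
            INSERT INTO rrs VALUES (?, ?, ?, ?, ?)
            ON CONFLICT (name, rrtype, rdata)
            DO UPDATE SET last_seen = excluded.last_seen""",
            (name, rrtype, rdata, now, now))
        db.commit()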