Pretending for a moment that it were even possible to make such large-scale changes and get them pushed into a large enough number of clients to matter, you're talking about meltdown at the recurser level, because it isn't just one connection per _computer_, but one connection per _resolver stub_ per _computer_ (which, on a UNIX machine, would tend to gravitate towards one connection per process). That just turns into an insane number of sockets you have to manage.
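To make the one-connection-per-process point concrete, here is a minimal sketch of what a persistent-TCP stub would look like. This is not actual libresolv code; the recurser address 192.0.2.53 and the function name are placeholders, and error handling is bare-bones.

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int resolver_fd = -1;            /* one cached socket per process */

static int resolver_socket(void)
{
    struct sockaddr_in sin;

    if (resolver_fd >= 0)
        return (resolver_fd);           /* reuse the held connection */

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(53);
    inet_pton(AF_INET, "192.0.2.53", &sin.sin_addr);   /* placeholder recurser */

    resolver_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (resolver_fd < 0)
        return (-1);
    if (connect(resolver_fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        close(resolver_fd);
        resolver_fd = -1;
        return (-1);
    }
    /* Never closed: the descriptor lives as long as the process does. */
    return (resolver_fd);
}

Every process that links something like this in holds the descriptor until it exits, so the recurser sees one live connection per resolving process, not per machine.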
Couldn't the resolver libraries be changed to not use multiple connections?
I think that the text I wrote clearly assumes that there IS only one connection per resolver instance. The problem is that hostname-to-IP lookup is pervasive in a modern UNIX system, and is probably pretty common on other platforms, too, so you have potentially hundreds or thousands of processes, each eating up additional system file descriptors for this purpose. I cannot think of any reason that init, getty, sh, cron, or a few other things on a busy system would need to use the resolver library - but that leaves a whole ton of things that can and do.

Now, of course, you can /change/ how everything works. Stop holding open connections persistently, and a lot of the trouble is reduced. However, anyone who has done *any* work in the area of TCP services that are open to the public will be happy to stamp "Fraught With Peril" on this little project - and to understand why, I suggest you research all the work that has been put into defending services like http, irc, etc.

... JG

--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then
I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
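For reference, a minimal sketch of the "stop holding open connections persistently" variant: connect, ask, close, with the two-byte length prefix that RFC 1035 (section 4.2.2) specifies for DNS over TCP. dns_tcp_query() and its arguments are invented names, not any real resolver API; building the wire-format query is assumed to happen elsewhere, and the query is assumed to be under 64 KB.

#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int dns_tcp_query(const struct sockaddr_in *server,
    const unsigned char *query, size_t qlen,
    unsigned char *answer, size_t alen)
{
    unsigned char lenbuf[2];
    size_t rlen;
    int fd;

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return (-1);
    if (connect(fd, (const struct sockaddr *)server, sizeof(*server)) < 0)
        goto fail;

    lenbuf[0] = (qlen >> 8) & 0xff;     /* RFC 1035 TCP length prefix */
    lenbuf[1] = qlen & 0xff;
    if (write(fd, lenbuf, 2) != 2 ||
        write(fd, query, qlen) != (ssize_t)qlen)
        goto fail;

    if (recv(fd, lenbuf, 2, MSG_WAITALL) != 2)
        goto fail;
    rlen = ((size_t)lenbuf[0] << 8) | lenbuf[1];
    if (rlen > alen || recv(fd, answer, rlen, MSG_WAITALL) != (ssize_t)rlen)
        goto fail;

    close(fd);                          /* descriptor held only for this lookup */
    return ((int)rlen);

fail:
    close(fd);
    return (-1);
}

The per-process descriptor pressure goes away, but the recurser now pays a full TCP connection setup for every lookup, which is exactly the kind of public-TCP-service exposure the "Fraught With Peril" remark points at.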