As I think we all know, the deficiency is in the design of the DNS system overall. No disrespect to anybody, but plenty of companies make money off of these design deficiencies and try to position themselves as offering 'value-add services' or something similar. Basically, they make money because the inherent design of the DNS system is the antithesis of delivering information on a best-effort basis. Entire 'value-add' economic ecosystems are created around these kinds of things, and once that is done, it is extremely difficult to undo.

If an endpoint is unavailable or unreliable, that is well understood and fully captured in modern implementations of the Internet, whether it be OSI or TCP/IP, and even in the numerous extensions from there. The fundamental cause and source of failure for these kinds of attacks is the way the DNS is designed (and let's not even get into 'valid' SSL certs). It is fundamentally flawed. I am sure there were plenty of political reasons for it ending up this way instead of being done in a more robust fashion.

For all the gripes and complaints, all I see are complaints about the symptoms, with nobody calling out the original cause of the disease.

On Mar 27, 2013, at 6:47 AM, William Herrin <bill@herrin.us> wrote:
On Tue, Mar 26, 2013 at 10:07 PM, Tom Paseka <tom@cloudflare.com> wrote:
Authoritative DNS servers need to implement rate limiting (a client shouldn't query you twice for the same thing within its TTL).
Right now that's a complaint for the mainstream software authors, not for the system operators. When the version of BIND in Debian Stable implements this feature, I'll surely turn it on.
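For anyone curious what turning it on involves, here is a rough sketch of the relevant named.conf stanza, going by the rate-limit syntax in the BIND 9 RRL patch (the directory path and the numbers below are illustrative, not recommendations):

    options {
        directory "/var/cache/bind";   // typical Debian path; illustrative only

        rate-limit {
            responses-per-second 5;    // cap identical responses per client netblock
            window 5;                  // seconds over which the rate is measured
            slip 2;                    // send every 2nd dropped response truncated
                                       // (TC=1) so legitimate clients retry over TCP
        };
    };

The slip setting is the interesting part for reflection attacks: instead of silently dropping everything over the limit, the server answers some fraction with a tiny truncated reply, which a spoofed victim never follows up on but a real resolver will retry over TCP.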
Regards, Bill Herrin
--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004