Sitefinder and DDoS
Let's assume for a moment that Verisign's wildcards and Sitefinder go back into operation. Let's also assume someone sets up a popular webpage with malware HTML causing it, perhaps with a time delay, to issue rapid GETs to deliberately nonexistent domains. What would be the effect on overall Internet traffic patterns if there were one Sitefinder site? (Flashback to the ARPANET node announcing it had zero cost to any route.) How many Sitefinder nodes would we need to avoid massive single-point congestion?

AFAIK, the issues of distribution of Sitefinder, and even a formal content distribution network, were not discussed. I asked some general questions that touched on this at the ICANN ISSC committee meeting, but I think they were interpreted as directed toward the reliability of the Sitefinder service in operation, rather than potential vulnerabilities it might create.

I am NOT suggesting this simply as an argument against Sitefinder, and I'd like to see engineering analysis of how this vulnerability could be prevented.
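The concentration effect Howard describes can be sketched with a toy resolver (the zone contents are illustrative, and 64.94.110.11 is the address widely reported for Verisign's wildcard, used here only as a placeholder): every GET to a random nonexistent name collapses onto a single endpoint, which is exactly the single-point-congestion concern.

```python
import random
import string

# Hypothetical Sitefinder-style wildcard: every otherwise-unmatched
# name under the TLD resolves to one synthesized address.
SITEFINDER_IP = "64.94.110.11"            # placeholder wildcard target
ZONE = {"example.com": "93.184.216.34"}   # illustrative real delegation

def resolve(name):
    """Toy resolver: explicit entries win; everything else hits the wildcard."""
    return ZONE.get(name, SITEFINDER_IP)

# A page issuing rapid GETs to random nonexistent domains: every
# request lands on the same address.
targets = {resolve("".join(random.choices(string.ascii_lowercase, k=12)) + ".com")
           for _ in range(10_000)}
print(targets)  # all synthesized lookups collapse to one IP
```

The point of the sketch: scaling this out is a content-distribution problem, since the wildcard turns the entire nonexistent-name space into load on one service.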
Howard C. Berkowitz wrote:
I am NOT suggesting this simply as an argument against Sitefinder, and I'd like to see engineering analysis of how this vulnerability could be prevented.
With $100M annual revenue at stake, I would be willing to provide distributed solutions to this problem if you send me a reasonable fraction of that money.

Pete
At 10:41 PM +0300 10/9/03, Petri Helenius wrote:
Howard C. Berkowitz wrote:
I am NOT suggesting this simply as an argument against Sitefinder, and I'd like to see engineering analysis of how this vulnerability could be prevented.
With $100M annual revenue at stake, I would be willing to provide distributed solutions to this problem if you send me a reasonable fraction of that money.
Pete
As long as I get a finder's fee! :-)
At 10:41 PM +0300 10/9/03, Petri Helenius wrote:
With $100M annual revenue at stake, I would be willing to provide distributed solutions to this problem if you send me a reasonable fraction of that money.
But can you do it without breaking the assumption that any lookup on *.TLD will always return the same value as badxxxdomain.TLD?

--
Kee Hinckley
http://www.messagefire.com/          Next Generation Spam Defense
http://commons.somewhere.com/buzz/   Writings on Technology and Society

I'm not sure which upsets me more: that people are so unwilling to accept responsibility for their own actions, or that they are so eager to regulate everyone else's.
Kee Hinckley wrote:
At 10:41 PM +0300 10/9/03, Petri Helenius wrote:
With $100M annual revenue at stake, I would be willing to provide distributed solutions to this problem if you send me a reasonable fraction of that money.
But can you do it without breaking the assumption that any lookup on *.TLD will always return the same value as badxxxdomain.TLD?
It would be doable, though maybe not covering 100% of the cases. But if I accepted the offer to go over to the dark side, why wouldn't I break that assumption just to make your life more complicated?

Pete
On Thu, 9 Oct 2003, Kee Hinckley wrote:
At 10:41 PM +0300 10/9/03, Petri Helenius wrote:
With $100M annual revenue at stake, I would be willing to provide distributed solutions to this problem if you send me a reasonable fraction of that money.
But can you do it without breaking the assumption that any lookup on *.TLD will always return the same value as badxxxdomain.TLD?
Well, the problem space is that a wildcard is involved. Since RFC 1034 indicates that the answer for '*.something' is the same as for 'otherwise-unmatched.something', I think this assumption is fairly safe.

The assumption is not safe if the authoritative nameservers for the underlying zone are not performing according to the DNS specs; i.e., they have synthesised answers that are not from a wildcard (which can be queried).

--==--
Bruce.
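The RFC 1034 wildcard rule Bruce describes can be sketched as a toy lookup (not real DNS resolution; zone names and addresses are the editor's placeholders): an exact match wins, and any otherwise-unmatched name falls through to the closest enclosing '*' entry, so '*.something' and an unmatched name under it agree.

```python
# Toy sketch of RFC 1034 wildcard matching. Zone data is illustrative.
ZONE = {
    "host.something": "192.0.2.1",   # explicit entry
    "*.something": "192.0.2.99",     # wildcard entry
}

def lookup(qname, zone=ZONE):
    if qname in zone:                 # exact match takes precedence
        return zone[qname]
    labels = qname.split(".")
    for i in range(1, len(labels)):   # try wildcards, nearest first
        wild = "*." + ".".join(labels[i:])
        if wild in zone:
            return zone[wild]
    return None                       # NXDOMAIN

# The assumption under discussion: a literal query for '*.something'
# and an otherwise-unmatched name return the same value.
assert lookup("*.something") == lookup("unmatched.something")
```

This is the property Sitefinder preserved mechanically (it was a real wildcard), and the one that breaks if a server synthesises answers outside the wildcard mechanism.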
But that requirement simply says that if you query *.something and otherwise-unmatched.something at time x, you get the same result. It doesn't say that if you query *.something at time x and otherwise-unmatched.something at time x+5, you will get the same result. DNS servers can return different answers over time, and expecting them not to change rapidly is an assumption not inherent in the protocol, much like the assumption that *.net and *.com would not be arbitrarily defined by the registry.

While I would agree these are reasonable assumptions, I think we need to make some effort to get these assumptions codified into the protocol before someone else breaks them again.

Owen

--On Friday, October 10, 2003 9:41 AM +0200 Bruce Campbell <bc-nanog@vicious.dropbear.id.au> wrote:
On Thu, 9 Oct 2003, Kee Hinckley wrote:
At 10:41 PM +0300 10/9/03, Petri Helenius wrote:
With $100M annual revenue at stake, I would be willing to provide distributed solutions to this problem if you send me a reasonable fraction of that money.
But can you do it without breaking the assumption that any lookup on *.TLD will always return the same value as badxxxdomain.TLD?
Well, the problem space is that a wildcard is involved. Since 1034 indicates that the answer for '*.something' is the same as 'otherwise-unmatched.something', I think this assumption is fairly safe.
The assumption is not safe if the authoritative nameservers for the underlying zone are not performing according to the DNS specs; ie, they have synthesised answers that are not from a wildcard (which can be queried).
--==-- Bruce.
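Owen's time-dependence point can be shown in a few lines (a toy zone, not real DNS; addresses are placeholders): the wildcard and an unmatched name agree at any single instant, but nothing in the protocol pins the answer across zone reloads.

```python
import itertools

# Toy zone whose wildcard data rotates each time the registry
# "republishes" it, while staying self-consistent at any instant.
ANSWERS = itertools.cycle(["192.0.2.10", "192.0.2.20"])

class Zone:
    def __init__(self):
        self.reload()
    def reload(self):            # registry republishes the zone
        self.wildcard = next(ANSWERS)
    def lookup(self, qname):
        # '*.something' and any unmatched name see the same data
        # *for this version of the zone*
        return self.wildcard

zone = Zone()
a1 = zone.lookup("*.something")
b1 = zone.lookup("unmatched.something")
zone.reload()                    # time x+5: zone has changed
a2 = zone.lookup("*.something")

assert a1 == b1      # consistent at time x
assert a1 != a2      # but not stable across time
```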
participants (5)
- Bruce Campbell
- Howard C. Berkowitz
- Kee Hinckley
- Owen DeLong
- Petri Helenius