From: vjs@mica.denver.sgi.com (Vernon Schryver)
Perhaps TCP's listen queue should use random early drop (RED), a technique used by routers to prevent any one source from monopolizing a queue. See http://www-nrg.ee.lbl.gov/floyd/abstracts.html#FJ93 or RFC 1254. ...
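For concreteness, a minimal sketch of what RED-style drop on a listen queue might look like; the fixed-size array, the syn_queue/red_enqueue_syn names, and the thresholds are made up for illustration and not taken from any real stack:

/* Hedged sketch: RED-style drop for a SYN (listen) backlog. */
#include <stdlib.h>

#define QLEN   383          /* backlog size, as in the IRIX 6.3 example      */
#define MIN_TH (QLEN / 2)   /* start dropping early above this depth         */

struct embryonic { int in_use; /* addresses, sequence numbers, timers ... */ };

static struct embryonic syn_queue[QLEN];
static int syn_count;

/* Returns 1 if the new SYN was admitted, 0 if it was dropped. */
int red_enqueue_syn(void)
{
    if (syn_count >= QLEN) {
        /* Queue full: evict a victim chosen uniformly at random,
         * so no single source can pin the whole backlog. */
        syn_queue[rand() % QLEN].in_use = 0;
        syn_count--;
    } else if (syn_count > MIN_TH) {
        /* Early drop: discard the arriving SYN with a probability that
         * rises linearly from 0 at MIN_TH to 1 at QLEN. */
        double p = (double)(syn_count - MIN_TH) / (QLEN - MIN_TH);
        if ((double)rand() / RAND_MAX < p)
            return 0;
    }

    /* Find a free slot and record the embryonic connection. */
    for (int i = 0; i < QLEN; i++) {
        if (!syn_queue[i].in_use) {
            syn_queue[i].in_use = 1;
            syn_count++;
            return 1;
        }
    }
    return 0;   /* should not happen */
}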
As I figure it, as long as the length of the queue is longer than the RTT of the real telnet client times the rate of bogus SYNs, the real clients have an excellent probability of getting through on their first attempt. For example, at 1200 bogus SYNs/sec and the IRIX 6.3 telnet listen queue of 383, there should be no trouble with peers with RTTs up to about 300 milliseconds. I've tested with a telnet client 250 milliseconds away while simultaneously bombing the machine from nearby with ~1200 SYNs/sec, and saw no telnet TCP retransmissions.
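Written out, that rule of thumb (with the numbers from the example) is:

q_{\mathrm{listen}} > r_{\mathrm{SYN}} \cdot \mathrm{RTT},
\qquad \text{e.g. } 1200\ \mathrm{SYN/s} \times 0.3\ \mathrm{s} = 360 \le 383 .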
As I understand it, the decision amounts to cutting off long-distance clients so that the host's resources hold up under attack. And the larger the SYN flood volume, the more the service area shrinks. But why does TCP use ten seconds as the maximum RTT instead of hundreds of milliseconds? (see RFC 1122) - Leonid Yegoshin, LY22
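To make that trade-off concrete, a small arithmetic sketch (a toy program, not from the thread; it only assumes the 383-entry backlog and flood rates quoted above, plus the ten-second figure being questioned):

/* With a fixed backlog, the RTT horizon that still gets served shrinks
 * as the flood rate grows, and a ~10 s retention per half-open connection
 * would need a far deeper queue than a ~0.3 s one. */
#include <stdio.h>

int main(void)
{
    const double qlen    = 383.0;                 /* IRIX 6.3 telnet backlog */
    const double rates[] = { 300, 1200, 4800 };   /* bogus SYNs per second   */

    for (int i = 0; i < 3; i++) {
        double rtt_max = qlen / rates[i];         /* servable RTT horizon, s */
        printf("%6.0f SYN/s -> clients beyond ~%.3f s RTT get squeezed out\n",
               rates[i], rtt_max);
    }

    /* If half-open entries were retained for ~10 s (the figure questioned
     * above) instead of ~0.3 s, the backlog needed at 1200 SYN/s would be: */
    printf("1200 SYN/s * 10 s = %.0f slots vs. %.0f for 0.3 s\n",
           1200.0 * 10.0, 1200.0 * 0.3);
    return 0;
}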