On Sun, 19 Apr 1998 jlixfeld@idirect.ca wrote:
> You could always "deny icmp any aaa.bbb.ccc.ddd www.ccc.nnn.mmm log" on
Using "deny icmp" as anything other than an extremely temporary measure while you (or your customers) are actively under DoS attack and focused only on the affected hosts/nets (the systems under attack and/or the smurf amplifiers if it is a smurf attack) is downright irresponsible. Even then, it will cause problems but they may be less than those caused by the DoS attack. Most ICMP traffic is necessary to the correct operation of the net. Filtering ICMP not only breaks ping, it breaks path mtu discovery which can cause much grief which is hard for the people affected to diagnose. Breaking PMTU discovery has the effect that connections between any two hosts that have an MTU greater than that of the smallest MTU hop on the path will not be able to communicate. Packets will be dropped, consistently enough to prevent communications even though some small part of each flow will frequently get through (generally the same part). This is a pattern (packet size) sensitive packet loss not a random one so TCP retransmission does not recover. For SMTP, the HELO, MAIL, RCPT dialog will happen and then the connection will hang on the message DATA (unless the message is very short) tying up the servers at both ends until they timeout. Normally all attempts to resend the message will also fail. For HTTP, the browser will probably be able to do a "HEAD" (short response) but a "GET" will fail. The symptom to the user is that they are consistently unable to get through to certain web sites with the connection stalling at the same place each time. On very rare ocassions, I have seen a connection actually succeed to one of these unreachable servers when it was sufficiently loaded that it transmitted the data in smaller chunks. If this happens to you, a workaround is to set the Max MTU to 576 on each of your clients (to fix outbound connections) and servers (to fix inbound ones). Setting the Mtu on your router does not seem to work (it might help in one direction (of data flow), by sending ICMP too bigs to your inside hosts at a threshold lower than the lost ICMP too bigs, but not the other direction in which both sets of "too bigs" get dropped. This does put a higher packet load on the backbone (more smaller packets have to be routed). The cause is usually because a router somewhere drops packets which are too large but have the DF (don't fragment) bit set without generating the required ICMP too big message (there is some defective hardware out there) or, more likely, that some cluelesss network operator filtered all ICMP traffic, usually as a naive attempt to protect against real or anticipated DoS attacks. This is made much worse by filtering software which does not allow you to filter specific ICMP types and (unfragmented) packet sizes. A much better way to handle most of the DoS attacks (except smurf) is to force fragment reassasembly on a router which is not sucseptible before forwarding the packets; this puts a significant load on the router so it is best done on a router/firewall close to the systems being protected (which is also desireable because it gives more protection). As an aside on the original topic, filtering on 0.0.0.255 mask 0.0.0.255 is also irresponsible and never should have been suggested here. The lame arguments that anyone who has a host in that range is asking for trouble are specious; just because they may be adversely affected by some clueless individual somewhere does not justify your being clueless as well. 
As an aside on the original topic, filtering on 0.0.0.255 mask 0.0.0.255 is also irresponsible and never should have been suggested here. The lame arguments that anyone who has a host in that range is asking for trouble are specious; just because they may be adversely affected by some clueless individual somewhere does not justify your being clueless as well. Yes, personally, I would avoid putting an (externally accessible) host at .255 because of the general clue deficit.

---------------------------------------------------------------------------
--- Mark Whitis <whitis@dbd.com>   WWW: http://www.dbd.com/~whitis/     ---
--- 428-B Moseley Drive; Charlottesville, VA 22903       804-962-4268   ---
---------------------------------------------------------------------------