I'm very pleased to see how much enthusiasm there is for cooperating. Since we have identified the problem, and the reasonable technical solution (filtering) is apparent, we should now focus on discussing complications and a strategy for widespread implementation. For the purposes of this discussion, I'd like to stay as focused as possible: while I'd love to spread the clue about things such as turning off directed broadcasts, I feel that bringing accountability to IP is by far the most important issue. If we had accountability, all of the other DoS-related problems would be much easier to respond to.

The very first step, if you haven't taken it already, is to push your own organization to implement ingress and egress filtering. This is NANOG, and there are enough clueful operators reading, with the resources needed, to accomplish this on a number of small- and medium-sized networks in the short term. With RFC 2827 in hand, use egress filters to make sure that your network doesn't emit packets with spoofed source addresses onto the Internet. If you have customers, as many (most? all?) of us do, use ingress filters to make sure that spoofed packets don't even enter your network in the first place.

The key to solving this problem, though, is to push the large networks - the Tier 1s and 2s of the world - into aggressively implementing ingress filtering on customer links. Most of these providers already have the information needed to do so: BGP announcements from a customer are already restricted to that customer's address blocks, and known static routes are used for singly-homed customers not running BGP. If ingress filtering were implemented to the same degree that route advertisement filtering already is, this problem would be, for the most part, solved.

The beauty of this is that filtering packets to eliminate spoofed source addresses comes down to exactly the same restrictions as filtering routing advertisements. If a provider is able to filter BGP announcements for the greater good of the Internet, then that same provider is also able to filter packets by source address for the same reason. Not only do we get to use the same list of address blocks for filtering, but we get to justify the filtering with the same arguments already being used for other forms of filtering. This is not, as one poster suggested, ones and zeros without discretion. That may be true within the network core, but at the customer edge there are definite policy restrictions that are in force today, and that can be put into force tomorrow. For the record, I haven't seen any evidence that filtering at the edges would hurt performance, and every NSP has employees with enough technical know-how to get this done, and to get it done properly.

Assuming widespread ingress filtering could be implemented by these people at the NSP level, it wouldn't matter that the networks without clueful NANOG-reading operators (or without operators at all, for that matter) never bothered modifying a single router configuration. If an uncared-for network lacks ingress filtering but is connected to an NSP that filters all inbound customer traffic, individuals on that network would only be able to spoof using addresses that are supposed to be on that network. If such a DoS attack were launched, the source network would be known, even if the identity of the attacker would still be a mystery.
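To make the mechanics concrete, here is a rough sketch (in Python, and purely illustrative; on a real router this is an ACL or a unicast-RPF check, not a script) of the decision an ingress filter makes on a customer link. The customer names and prefixes below are made up. The point is that the list of allowed source blocks is exactly the list the provider already accepts in that customer's BGP announcements, or already carries as static routes for a singly-homed, non-BGP customer; the egress filter on your own border is just the mirror image, permitting only sources inside your own allocations.

  # Illustrative only: an RFC 2827-style source-address check at a
  # provider's customer edge.  The prefix data is hypothetical, and in
  # practice this logic lives in router configuration, not in a script.
  from ipaddress import ip_address, ip_network

  # The same per-customer prefix list already used to filter that
  # customer's BGP announcements (or the static routes pointed at a
  # singly-homed customer not running BGP).
  CUSTOMER_PREFIXES = {
      "customer-a": [ip_network("192.0.2.0/24")],
      "customer-b": [ip_network("198.51.100.0/24"),
                     ip_network("203.0.113.0/25")],
  }

  def permit_ingress(customer, src):
      """Forward a packet arriving on a customer link only if its source
      address falls within one of that customer's assigned blocks."""
      addr = ip_address(src)
      return any(addr in net for net in CUSTOMER_PREFIXES[customer])

  # A packet sourced from the customer's own space is forwarded...
  assert permit_ingress("customer-a", "192.0.2.17")
  # ...while a spoofed source is dropped at the edge, so any attack that
  # does get out can only carry addresses belonging to that one network.
  assert not permit_ingress("customer-a", "10.1.2.3")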
That's of little consequence, though, because the pool of suspect source networks has been reduced from hundreds of thousands to one, and that is enough for law enforcement to launch a successful investigation.

If the Internet were built around one central NSP, we could stop here, but since that's not the case, there are additional complications to deal with. Specifically, ingress filtering at the customer edges of a network assumes that the network core is safe. As we all know, though, the core is not safe, because it isn't controlled by any single organization, and it is impossible to implement filtering at peering points between NSPs, for example. The only solution is to get all of the NSPs to agree to implement filtering; if even one big-time operator refuses, then we cannot be sufficiently assured of accountability.

One way I see to handle this is a decree from one or two of the big NSPs requiring that any network connected over an unfilterable link (a peering link, or any other link so large as to make filtering infeasible or impossible) aggressively implement ingress filtering of its own. (Come on, you big NSP types, I know you're reading: what do you think?) Customer links with more than a certain amount of associated address space should be held to the same standard of filtering. The only effective way of enforcing such a policy would be to unplug noncompliant networks. For those of you who followed me right up to this point: yes, I am suggesting forcing networks to behave. Others have pointed out, and I agree, that there are too many networks which will not comply unless there is no alternative. This is a topic that can be expanded on; the enforcement issue in particular is more a question of policy.

I also foresee complications due to the global nature of the Internet. Hopefully these same principles can be extended beyond North America into other regions of the world.

Thoughts?

Mark