It seems to me that the real issue in defending against an attack of this type is differentiating between legitimate traffic and zombie traffic.
Exactly. And while with today's DDoS attacks this is often not so hard, tomorrow's floods will be more carefully crafted so that there are no telltales that can be cheaply used to filter them out.

Steve Bellovin and colleagues (me being one of them) have been working on a scheme called "Pushback", in which routers detect traffic aggregates that are burdening one of their links and send pushback messages to the upstream peers responsible for the bulk of the traffic, asking them to rate-limit the aggregates. The key idea is that the upstream peers then monitor which of *their* upstream peers are responsible for the bulk of the traffic they're now rate-limiting, and in turn send them pushback messages, too. In this fashion, the pushback propagates out to the edge of the network (or of the ISP's cloud, if that's the limit of pushback deployment).

There is still collateral damage, in that any legitimate traffic that happens to enter from the same edge as the attack traffic will be subject to the same rate-limiting; but *other* legitimate traffic coming from other locations will escape the effects of the rate-limiting, so the collateral damage is a lot less than if you just blindly rate-limit the aggregate.

There are a number of issues concerning identifying aggregates, timing out the rate-limiting, etc. It's also not clear to what degree Pushback will work in the face of an extremely diffuse attack (see my next message). But it's at least a start on a solution that doesn't require uniform filtering of visibly indistinct traffic.

Pushback is described at: http://www.icir.org/pushback/

- Vern
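P.S. For concreteness, here's a minimal sketch of the recursion in Python. It's purely illustrative: the router names, the bulk-share threshold, and the way the limit is apportioned are all invented for this example, and the real mechanism (including how aggregates are identified in the first place) is specified in the paper at the URL above.

```python
from collections import defaultdict

class Router:
    """A toy router that tracks, per upstream peer, how much traffic
    matching a given aggregate it has seen. All details are invented;
    see http://www.icir.org/pushback/ for the actual design."""

    def __init__(self, name, upstream_peers, bulk_fraction=0.25):
        self.name = name
        self.upstream_peers = upstream_peers     # empty list at the network edge
        self.bulk_fraction = bulk_fraction       # share above which a peer counts
                                                 # as "responsible for the bulk"
        self.bytes_from_peer = defaultdict(int)  # per-peer byte counts for the aggregate

    def observe(self, peer_name, nbytes):
        """Record traffic matching the aggregate, attributed to an upstream peer."""
        self.bytes_from_peer[peer_name] += nbytes

    def pushback(self, aggregate, limit_bps):
        """Rate-limit the aggregate locally, then ask the upstream peers
        contributing the bulk of it to do the same, recursing toward the edge."""
        print(f"{self.name}: rate-limiting {aggregate} to {limit_bps} bytes/sec")
        total = sum(self.bytes_from_peer.values())
        if total == 0:
            return  # edge of the network (or of deployment): recursion stops here
        for peer in self.upstream_peers:
            share = self.bytes_from_peer[peer.name] / total
            if share >= self.bulk_fraction:
                # Apportion the rate limit according to each heavy peer's share.
                peer.pushback(aggregate, int(limit_bps * share))

# Toy topology: two edge routers feed a core router, and edge1 carries
# most of the flood aimed at the (invented) victim aggregate.
edge1 = Router("edge1", upstream_peers=[])
edge2 = Router("edge2", upstream_peers=[])
core = Router("core", upstream_peers=[edge1, edge2])
core.observe("edge1", 900_000)
core.observe("edge2", 100_000)
core.pushback("dst=victim.example.com", limit_bps=500_000)
```

Running it, the core rate-limits the aggregate and pushes back only on edge1 (the peer carrying 90% of it), so legitimate traffic entering via edge2 escapes the rate-limiting entirely, which is the point of the scheme.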