Recently, alex@yuriev.com (Alex Yuriev) wrote:
>> On Wed, 29 Oct 2003, Alex Yuriev wrote:
>>> As network operators, we move bits, and that is what we should stick to moving.
>>> We do not look into packets and say "oh, this looks to me like evil application traffic," and we should not do that. It should not be the goal of the IS to enforce policy for the traffic that passes through it. That type of enforcement should be left to the ES.
>> Well, that is a nice theory, but I'd like to see how you react to a 2 Gb/s DoS attack, and whether you really intend to put filters at the edge or would rather do it at the entrance to your network. The Slammer worm behaves just like a DoS, which is why many filter it at the highest possible level, as well as at every point where traffic comes in from customers.
> Actually, no, it is not theory.
>
> When you are slammed with N gigabits/sec of traffic hitting your network, if you do not have enough capacity to deal with the attack, no amount of filtering will help you, since by the time you apply a filter it is already too late: the incoming lines have no room left for the "non-evil" packets.
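To put rough numbers on that narrow point, here's a quick sketch in Python; the link size and traffic mix below are hypothetical, chosen only to illustrate why filtering at an already-saturated ingress is too late:

# Back-of-envelope: once the ingress link itself is saturated, drops happen
# upstream of any filter you apply, and they hit good and bad traffic alike.
# All figures here are hypothetical, for illustration only.

INGRESS_GBPS = 2.5   # e.g. roughly one OC-48 of ingress capacity
attack_gbps = 3.0    # DoS traffic offered to the link
legit_gbps = 1.0     # legitimate traffic offered to the link

offered = attack_gbps + legit_gbps
# The congested upstream hop drops the excess without looking at it.
loss_fraction = max(0.0, 1 - INGRESS_GBPS / offered)
legit_surviving = legit_gbps * (1 - loss_fraction)

print(f"offered: {offered:.1f} Gb/s onto a {INGRESS_GBPS} Gb/s link")
print(f"indiscriminate loss: {loss_fraction:.0%}")
print(f"legit traffic that even reaches your filter: {legit_surviving:.2f} Gb/s")

The drops at the congested upstream hop are indiscriminate, which is exactly why the filter has to sit somewhere that still has headroom.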
And how many people here operate non-oversubscribed networks? I mean completely non-oversubscribed end to end: every end customer link's worth of capacity reserved through the network, from the customer edge access point, to the aggregation routers, through the core routers and backbone links out to the peering points, down to the border routers, and out through the peering ports. I've worked at several different companies, and none of them ran truly non-oversubscribed networks; the economics just aren't there to support doing that.

So having 3 Gb/s of DoS traffic coming across a half dozen peering OC-48s isn't that bad; but having it try to fit onto a pair of OC-48s into the backbone that are already running at 40% capacity means you're SOL unless you filter some of that traffic out. And I've been in that situation more times than I'd like to remember, because you can't justify increasing capacity internally from a remote peering point into the backbone simply to be able to handle a possible DoS attack.

Even if you _do_ upgrade capacity there, and you carry the extra 3 Gb/s of traffic from your peering links through your core backbone and off to your access device, you suddenly realize that the gig port on your access device is now hosed. You can then filter the attack traffic out on the device just upstream of the access box, but then you're carrying it through your core only to throw it away after using up backbone capacity; why not discard it sooner rather than later, if you're going to have to discard it anyhow?
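For what it's worth, the arithmetic in that scenario works out like this (a back-of-envelope sketch; OC-48 payload is roughly 2.488 Gb/s, and the 40% utilization and 3 Gb/s attack are the figures from the example above):

# Rough numbers for the scenario above: 3 Gb/s of attack traffic spread
# across six peering OC-48s is fine, but it does not fit into the headroom
# of two backbone OC-48s already running at 40%. OC-48 ~= 2.488 Gb/s.
OC48_GBPS = 2.488

peering_capacity = 6 * OC48_GBPS       # half a dozen peering OC-48s
backbone_capacity = 2 * OC48_GBPS      # the pair of OC-48s into the core
backbone_headroom = backbone_capacity * (1 - 0.40)

attack_gbps = 3.0
print(f"peering edge: {peering_capacity:.1f} Gb/s -- the attack fits easily")
print(f"backbone headroom: {backbone_headroom:.2f} Gb/s vs {attack_gbps:.1f} Gb/s of attack")
verdict = "fits" if attack_gbps <= backbone_headroom else "does not fit; something has to be dropped"
print(f"verdict: {verdict}")

Spread across the peering edge the attack is barely visible; concentrated onto the backbone pair, it doesn't fit, and something gets dropped whether you like it or not.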
> Leave content filtering to the ES, and *force* the ES to filter the content. Let the IS be busy moving bits.
>
> Alex
I think you'll find very, very few networks can follow that model; the IS component almost invariably has some level of statistical aggregation of traffic occurring, which forces packet discard during heavy attack or worm activity. And under those circumstances, there is a strong preference to discard "bad" traffic rather than "good" traffic if at all possible. One technique we currently use for making those decisions is looking at the type of packets: are they 92-byte ICMP packets, are they TCP packets destined for port 1434, and so on.

I'd be curious to see what networks you know of where the IS component does *no* statistical aggregation of traffic whatsoever. :)

Matt
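P.S. For the curious, here is roughly the kind of coarse, header-only check I mean, sketched in Python. The two signatures are the ones named above (92 bytes is the ICMP size associated with Nachi/Welchia floods; 1434 is the SQL Server resolution port from the Slammer era); the packet fields are made up for illustration and are nobody's production filter.

from dataclasses import dataclass

@dataclass
class Packet:
    proto: str      # "icmp", "tcp", or "udp"
    length: int     # total size in bytes
    dst_port: int   # 0 for protocols without ports

def looks_like_attack(pkt: Packet) -> bool:
    """Coarse header-level heuristics only -- no payload inspection."""
    # 92-byte ICMP: the packet size associated with Nachi/Welchia floods.
    if pkt.proto == "icmp" and pkt.length == 92:
        return True
    # Port 1434: the SQL Server resolution port targeted in the Slammer era.
    if pkt.proto in ("tcp", "udp") and pkt.dst_port == 1434:
        return True
    return False

# When the pipe is oversubscribed, prefer discarding "bad" traffic first.
arriving = [
    Packet("icmp", 92, 0),
    Packet("tcp", 1500, 80),
    Packet("udp", 404, 1434),
]
kept = [p for p in arriving if not looks_like_attack(p)]
print(f"kept {len(kept)} of {len(arriving)} packets")

Obviously crude, and it will throw away some legitimate port-1434 traffic; but when the alternative is indiscriminate loss at a saturated link, crude beats nothing.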