RE: What were we saying about edge filtering?
On Thu, 4 Sep 2003, Matt Ploessel wrote:
With the exception of RFC1918 reserved address space (note the previous root-server query problem), what amount of bogus-sourced traffic is stopped by bogons on a major backbone? I would say _a lot_ of DDoS traffic, but how hard is it for a DDoS client to know the bogon IP ranges and skip them? I'm a very strong supporter of the bogons, and especially the bogon route servers, without a doubt. But possibly null route RFC1918 traffic to loopbackX (no ip unreachables, ACLs, etc.) and the rest of the bogons to Null0, just so a general consensus/statistics of hits on major backbones can be compiled.
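For concreteness, a null-routing setup along those lines might look something like this on vendor C gear (a minimal sketch: the loopback number is arbitrary, and the Null0 line shows a single example bogon prefix, not the full bogon list):

```
! Sink RFC1918-destined traffic on a loopback so hits can be counted,
! with ICMP unreachables suppressed as suggested above
interface Loopback99
 description RFC1918 sink for hit statistics
 no ip unreachables
ip route 10.0.0.0 255.0.0.0 Loopback99
ip route 172.16.0.0 255.240.0.0 Loopback99
ip route 192.168.0.0 255.255.0.0 Loopback99
! Remaining bogons straight to Null0 (one example prefix shown)
ip route 192.0.2.0 255.255.255.0 Null0
```

Interface counters on the loopback would then give the general hit statistics mentioned above.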
Keep in mind it's not destination addresses that are the problem here. BUT if it was: in an experiment (not a very smart one) we routed 0/1 to a lab system inside 701 once in 2001 (as I recall, so before Nimda/Code Red/Blaster) and received 600+ kpps of garbage traffic as a result. Trying to ACL/analyze/deal with that flow was almost impossible... I'm not sure what you want to do with it today, when our 'sinkhole' network is consistently handling 20+ kpps (5x previous) MORE random garbage than 3 weeks ago, before Blaster/Nachi started to cause more pain :(
Christopher L. Morrow wrote:
Keep in mind it's not destination addresses that are the problem here. BUT
True, but there are RPF checks based on routing. Anything routed to Null0 is generally treated by such filters as an invalid route, and packets with a source address covered by such a route will be discarded. Setting up BGP peers internally and applying route policies to null route the routes received from the bogon peers would make it easy to invalidate the routes and drop packets which supposedly originate from them. I know this is easily done with vendor C. I suspect that the other vendors have implemented something very similar (I've heard J was easier than C). -Jack
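A sketch of that setup in vendor C syntax (the peer address, AS numbers, and next-hop address are all illustrative placeholders, not real bogon route-server details): routes learned from the bogon peer get their next hop rewritten to an address that is statically routed to Null0, which both blackholes bogon destinations and lets loose uRPF drop bogon-sourced packets:

```
! A /32 next hop that resolves to Null0, making any route
! using it a discard route
ip route 192.0.2.1 255.255.255.255 Null0
!
router bgp 64512
 neighbor 198.51.100.1 remote-as 64513
 neighbor 198.51.100.1 description bogon route-server (illustrative)
 neighbor 198.51.100.1 route-map BOGONS-IN in
!
route-map BOGONS-IN permit 10
 set ip next-hop 192.0.2.1
```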
On Sat, 6 Sep 2003, Jack Bates wrote:
I know this is easily done with vendor C. I suspect that the other vendors have implemented something very similar (I've heard J was easier than C).
"Neglecting gravity and friction, it is trivial to show that..." The issues with vendors C, J, N, L, R, P, and A through Z have been repeatedly discussed and are available in the archives, should anyone care to do the research. It is also a mistake to assume that all network architectures are the same as yours, or that your architecture would scale to solve the problems other network providers need to solve. The use of source address validation is expanding. Unfortunately, SAV's effectiveness is declining due to the increase in trojaned bots.
Keep in mind it's not destination addresses that are the problem here. BUT if it was: in an experiment (not a very smart one) we routed 0/1 to a lab system inside 701 once in 2001 (as I recall, so before Nimda/Code Red/Blaster) and received 600+ kpps of garbage traffic as a result. Trying to ACL/analyze/deal with that flow was almost impossible... I'm not sure what you want to do with it today, when our 'sinkhole' network is consistently handling 20+ kpps (5x previous) MORE random garbage than 3 weeks ago, before Blaster/Nachi started to cause more pain :(
Just think: if you used loose uRPF, you wouldn't need to carry that traffic to your sinkhole network. Even you win.
On Mon, 8 Sep 2003 bdragon@gweep.net wrote:
Keep in mind it's not destination addresses that are the problem here. BUT if it was: in an experiment (not a very smart one) we routed 0/1 to a lab system inside 701 once in 2001 (as I recall, so before Nimda/Code Red/Blaster) and received 600+ kpps of garbage traffic as a result. Trying to ACL/analyze/deal with that flow was almost impossible... I'm not sure what you want to do with it today, when our 'sinkhole' network is consistently handling 20+ kpps (5x previous) MORE random garbage than 3 weeks ago, before Blaster/Nachi started to cause more pain :(
Just think: if you used loose uRPF, you wouldn't need to carry that traffic to your sinkhole network. Even you win.
Don't confuse the source and destination. This traffic is packets with an unused DESTINATION address. Loose uRPF has *NO* effect on the destination address. Which is greater in a typical backbone: traffic with a bogon source, or traffic with a bogon destination, entering the backbone?
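For reference, the two uRPF modes on vendor C (interface names illustrative; a sketch only): strict mode requires the best path back to the source to point out the receiving interface, while loose mode only requires some valid route to the source to exist in the FIB. Neither mode consults the destination address:

```
interface GigabitEthernet0/0
 ! strict uRPF: drop unless the source routes back via this interface
 ip verify unicast source reachable-via rx
!
interface GigabitEthernet0/1
 ! loose uRPF: drop only if the source has no route, or a Null0 route
 ip verify unicast source reachable-via any
```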
Don't confuse the source and destination. This traffic is packets with an unused DESTINATION address.
Ok, you got me there. I do wonder, however, how much of it is responses to traffic that began life with an unused source. There is still the point that the catch-all route causes more hauling of traffic than is necessary, and it would prevent uRPF from doing its job (presumably so that backscatter-based backtracking could work, which would be unnecessary if RPF were implemented).
loose uRPF has *NO* effect on the destination address.
True enough, it has no direct effect. It might have indirect effect, however.
Which is greater in a typical backbone? Traffic with a bogon source, or traffic with a bogon destination entering the backbone?
This is probably a function of how many customers are default-routed and how much the backbone filters. If we assume that the larger backbones have primarily BGP-speaking customers and do relatively little filtering, then I'd say it would be primarily bogus sources. As either the filtering or the number of customers pointing default goes up, I'd expect the weighting to even out or swing toward more bogus destinations. Thankfully, bogus destinations die when they reach the default-free zone. The same is not true of bogus sources.
participants (4)
- bdragon@gweep.net
- Christopher L. Morrow
- Jack Bates
- Sean Donelan