At 08:39 AM 10/12/04 +0530, Suresh Ramasubramanian wrote:
> Yes, I know that multihoming customers must make sure that packets going out to the internet over a link match the route advertised out that link. But stupid multihoming implementations do tend to ensure that lots of people will yell loudly, loudly enough for several tickets to be escalated well beyond tier 1 NOC support desks, and that makes ISPs think twice before they put uRPF filters in.
You might want to take a glance at RFC 3704, which looks at a number of the issues that have been raised in this thread, including the routing of traffic to appropriate enterprise egress points.

In my heart of hearts, I would like enterprises to (as a default) match layer 2 and layer 3 addresses on the originating LAN, and quarantine-as-busted any machine that sends an address other than the one assigned to its interface. It seems that the few cases where a device legitimately sends multiple addresses are exception cases that can be handled separately. Handling it that close to the source solves the problem for everyone.

Practically, that is difficult. If you think getting all of the service providers (who wind up having to fix ddos attacks, and pay for bandwidth and services related to ddos attacks) to manage networks well is difficult, consider the prospect of getting all the edge networks to do so... A simple solution is, as someone suggested, to impose an idiot tax and bill the customers for doing stupid things.

Egress traffic filtering is relatively simple for the average enterprise: it has at most a few prefixes and can write a simple ACL on its upstream router. It can use the ACL either to discard offending packets or to route them to the right egress. It is also relatively simple for the average enterprise's ISP: it knows what prefix(es) it agreed to accept traffic from and can write an ACL.

It gets a little dicier when the customer is a lower tier ISP. In that case, there are potentially many prefixes, and they change more frequently. That is the argument for something like uRPF. No, it is not a "sure fix", but it handles that case more readily, both in the sense of being a fast lookup and in the sense of maintaining the table. The problem is, of course, in the asymmetry of routing: it has to be used with the brain engaged.

From an ISP perspective, I would think that it would be of value to offer *not* ingress filtering (whether by ACL or by uRPF) as a service that a customer pays for. Steve Bellovin wrote an April Fool's note suggesting an "Evil Bit" (ftp://ftp.rfc-editor.org/in-notes/rfc3514.txt); I actually think that's not such a dumb idea if implemented as a "Not Evil" flag, using a DSCP or extending the RFC 3168 codes to include such, as Steve Crocker has been suggesting. Basically, a customer gets ingress filtered (by whatever means) and certain DSCP settings are treated as "someone not proven to have their act together". Should a ddos happen, such traffic is dumped first. But if the customer pays extra, their traffic is marked "not evil", protected by the above, and ingress filtering may be on or off according to the relevant agreement. The agreement would need to include a provision to the effect that once a ddos is traced in part to the customer, their traffic is marked as "evil" for a period of time afterwards. What the customer is paying for, if you will, is the ability to do their thing during a ddos in a remote part of the network, such as delivering a service to a remote peer.

Address spoofing is just one part of the ddos problem; to nail ddos, we also need to police a variety of application patterns. One reason I like the above is that it gives us a handle on what traffic might possibly be "not evil": someone has done something that demonstrates that it is from a better managed source.
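[Since the thread keeps circling around what uRPF actually checks, here is a minimal Python sketch of the strict and loose reverse-path checks described in RFC 3704. The FIB contents, interface names, and addresses are all made up for illustration; no router implements the lookup this way.]

```python
import ipaddress

# Toy FIB mapping prefix -> egress interface. Purely illustrative:
# real routers do longest-prefix match in hardware, and these
# interface names and prefixes are invented for the example.
FIB = {
    ipaddress.ip_network("192.0.2.0/24"): "ge-0/0/1",     # customer A
    ipaddress.ip_network("198.51.100.0/24"): "ge-0/0/2",  # customer B
    ipaddress.ip_network("0.0.0.0/0"): "ge-0/0/0",        # default to upstream
}

def lookup(addr):
    """Longest-prefix match against the toy FIB."""
    ip = ipaddress.ip_address(addr)
    matches = [(net, ifc) for net, ifc in FIB.items() if ip in net]
    if not matches:
        return None, None
    return max(matches, key=lambda m: m[0].prefixlen)

def urpf_accept(src, arriving_ifc, mode="strict"):
    """RFC 3704-style reverse-path check.

    strict: the best route back to src must point out arriving_ifc.
    loose:  any route more specific than the default is enough
            (catches bogons and unrouted space, not misdirected sources).
    """
    prefix, ifc = lookup(src)
    if prefix is None or prefix.prefixlen == 0:
        return False  # no real route back: bogon or spoofed source
    if mode == "loose":
        return True
    return ifc == arriving_ifc

# Strict uRPF on customer A's port:
print(urpf_accept("192.0.2.55", "ge-0/0/1"))                  # True
print(urpf_accept("198.51.100.9", "ge-0/0/1"))                # False
print(urpf_accept("198.51.100.9", "ge-0/0/1", mode="loose"))  # True
```

[The last two lines are also the strict-mode caveat: on a multihomed or peering link, a legitimate source can fail the strict check simply because the best return route points out a different interface, which is why it "has to be used with the brain engaged".]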
On 12-okt-04, at 7:30, Fred Baker wrote:
> From an ISP perspective, I would think that it would be of value to offer *not* ingress filtering (whether by ACL or by uRPF) as a service that a customer pays for.
So what is our collective position on ISPs filtering their peers? Both positions are reasonable: that it should be done, because there are too many clueless peers, and that it shouldn't, because it breaks too much legitimate stuff (especially possible future stuff such as multiaddress multihoming for IPv6). We need to agree on one or the other, though: half the net doing one and the other half doing the other won't make anyone happy.
> Steve Bellovin wrote an April Fool's note suggesting an "Evil Bit" (ftp://ftp.rfc-editor.org/in-notes/rfc3514.txt); I actually think that's not such a dumb idea if implemented as a "Not Evil" flag, using a DSCP or extending the RFC 3168 codes to include such, as Steve Crocker has been suggesting. Basically, a customer gets ingress filtered (by whatever means) and certain DSCP settings are treated as "someone not proven to have their act together". Should a ddos happen, such traffic is dumped first. But if the customer pays extra, their traffic is marked "not evil", protected by the above, and ingress filtering may be on or off according to the relevant agreement.
I would much rather see a solution where ISPs rate limit their customers except for flows for which the customer can present a token that shows the recipient actually wants to receive the traffic, or the recipient gets to send a message to shut up the flow. This should solve the (D)DoS thing very nicely, although it does require both ends to cooperate and it requires customer-facing equipment to look fairly deep into packets.
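[A minimal sketch of how such a receiver-issued token might be checked at the customer-facing edge. This is purely my construction, not a deployed protocol: it assumes the receiver and the ISP edge share a key, and that the token is an HMAC over the flow's 5-tuple.]

```python
import hashlib
import hmac

# Hypothetical shared secret between the receiver and its ISP's edge.
# Key distribution is the genuinely hard part; this only shows the check.
KEY = b"receiver-and-isp-shared-secret"

def flow_id(src, dst, proto, sport, dport):
    return f"{src}|{dst}|{proto}|{sport}|{dport}".encode()

def issue_token(src, dst, proto, sport, dport):
    """Receiver side: grant a capability for one specific flow."""
    return hmac.new(KEY, flow_id(src, dst, proto, sport, dport),
                    hashlib.sha256).hexdigest()

def router_admits(packet, token):
    """Edge router: exempt the flow from the default rate limit only if
    the token verifies; a 'shut up' message would simply revoke it."""
    expected = issue_token(packet["src"], packet["dst"], packet["proto"],
                           packet["sport"], packet["dport"])
    return hmac.compare_digest(expected, token)

pkt = {"src": "192.0.2.7", "dst": "203.0.113.80",
       "proto": "tcp", "sport": 49152, "dport": 443}
tok = issue_token(**pkt)            # the receiver wants this flow
print(router_admits(pkt, tok))      # True: exempt from rate limiting
print(router_admits(pkt, "bogus"))  # False: stays rate limited
```

[The sketch glosses over exactly the two costs named above: both ends must cooperate to move the token, and the edge gear must parse flows deeply enough to recompute the check at line rate.]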
> Address spoofing is just one part of the ddos problem; to nail ddos, we also need to police a variety of application patterns. One reason I like the above is that it gives us a handle on what traffic might possibly be "not evil": someone has done something that demonstrates that it is from a better managed source.
Trusting the source when it says that its packets aren't evil might be sub-optimal. Evaluation of evilness is best left up to the receiver.
At 12:01 PM 10/13/04 +0200, Iljitsch van Beijnum wrote:
> Trusting the source when it says that its packets aren't evil might be sub-optimal. Evaluation of evilness is best left up to the receiver.
Likely true. Next question is whether the receiver can really determine that in real time. For some things, yes, but for many things it is not as obvious to me.
> At 12:01 PM 10/13/04 +0200, Iljitsch van Beijnum wrote:
>> Trusting the source when it says that its packets aren't evil might be sub-optimal. Evaluation of evilness is best left up to the receiver.
> Likely true. Next question is whether the receiver can really determine that in real time. For some things, yes, but for many things it is not as obvious to me.
Correct me if I'm wrong here, but my interpretation of this suggestion was not that we should trust the source to mark packets but that we should trust our peers to mark packets. This seems to be something that is workable, since most people have a manageable number of peers. Presumably each peer could mark the traffic based on what they know about their customer's network: if a customer follows all best practices, they mark it with the non-evil bit, otherwise not. If truly evil traffic is coming in from a peer, then one could apply mitigating actions only to traffic that is not marked non-evil, either blackholing it all, diverting it to a router that will perform complex filtering, or heavily rate limiting it.

It seems to me that really addressing DDoS, botnets, etc., requires network operators to agree on some sort of common coordinated action, and using a network protocol to communicate about this coordinated action would be very useful. This doesn't mean that the non-evil bit is the only way, but the idea of network operators marking traffic in some way to indicate their level of confidence in its normality seems to be worth pursuing. It seems to be the natural progression of projects like those found at cymru.com.

--Michael Dillon
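[A toy sketch of the triage Michael describes, trusting the peer's marking rather than the source's. The codepoint value and field names are assumptions made for illustration; no DSCP is actually assigned for this purpose.]

```python
# Toy triage for traffic arriving on a peering link. NOT_EVIL_DSCP is
# an assumed, made-up codepoint agreed between the two networks;
# nothing here is standardized.
NOT_EVIL_DSCP = 46

def classify(packet, under_attack):
    """Decide what to do with a packet arriving from a peer."""
    vouched = packet["dscp"] == NOT_EVIL_DSCP
    if not under_attack or vouched:
        return "forward"
    # Unmarked traffic takes the mitigation path first: blackhole it,
    # divert it to a filtering box, or rate limit it hard.
    return "divert-to-scrubbing"

flows = [
    {"src": "192.0.2.1", "dscp": 46},   # peer vouches for this customer
    {"src": "203.0.113.9", "dscp": 0},  # unmarked: policed first
]
for pkt in flows:
    print(pkt["src"], classify(pkt, under_attack=True))
```

[For this to mean anything, the marking has to be set or cleared at the peering edge, so a source cannot grant itself the bit: exactly the trust question raised in the next message.]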
On Thu, Oct 14, 2004 at 11:48:24AM +0100, Michael.Dillon@radianz.com wrote:
>> At 12:01 PM 10/13/04 +0200, Iljitsch van Beijnum wrote:
>>> Trusting the source when it says that its packets aren't evil might be sub-optimal. Evaluation of evilness is best left up to the receiver.
>> Likely true. Next question is whether the receiver can really determine that in real time. For some things, yes, but for many things it is not as obvious to me.
> Correct me if I'm wrong here, but my interpretation of this suggestion was not that we should trust the source to mark packets but that we should trust our peers to mark packets.
> ...
> This doesn't mean that the non-evil bit is the only way, but the idea of network operators marking traffic in some way to indicate their level of confidence in its normality seems to be worth pursuing. It seems to be the natural progression of projects like those found at cymru.com.
> --Michael Dillon
ah ... so you have no problems with me marking your packets any way I choose, right? i suspect that a single tagging scheme will be too prone to abuse and that it will be important to have/allow the source to indicate its preferences. i am reminded of one ISP announcing 128.0.0.0/3 some time back, based on the presumption that it could deliver any packet to the correct destination in that range. ... :)

--bill
On 14-okt-04, at 0:17, Fred Baker wrote:
>> Trusting the source when it says that its packets aren't evil might be sub-optimal. Evaluation of evilness is best left up to the receiver.
> Likely true. Next question is whether the receiver can really determine that in real time. For some things, yes, but for many things it is not as obvious to me.
It would be a very good start not having to receive that which you can identify as something you don't want.
participants (4)
- bmanning@vacation.karoshi.com
- Fred Baker
- Iljitsch van Beijnum
- Michael.Dillon@radianz.com