RE: [arin-announce] IPv4 Address Space (fwd)
The end result is that in the near future it will be much harder, or impossible, for network operators to collect statistics based on traffic type, or to filter particular types of traffic, without being able to dig into the payload itself and see what type of traffic is passing.
Some people see this as a problem, some do not.
Isn't that the whole point of running a VPN connection?
In a message written on Wed, Oct 29, 2003 at 02:24:54PM -0600, Kuhtz, Christian wrote:
Isn't that the whole point of running a VPN connection?
Yes. What I'm saying is network operators are slowly forcing everyone to run _everything_ over a VPN like service. That's fine, but it makes network operators unable to act on the traffic at the same level they can today. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
I think the other point that may be escaping some people is that as more and more connections take on this VPN-like quality, as network operators we lose any visibility into the validity of the traffic itself. Imagine how much more painful SQL Slammer would have been if all the traffic was encapsulated in port 80 between sites, and only hit port 1434 locally. We'd suddenly be unable to quickly filter out the worm traffic, and would instead see only that our port 80 traffic was now eating our network alive--and we certainly couldn't get away with filtering that out. We'd have no choice but to build our networks large enough to handle the largest worm outbreak, as we'd have no option but to carry the traffic blindly from end to end, with no way to even begin to differentiate valid traffic from invalid traffic.

At least today, we can decide that 92-byte ICMP echo-request packets are invalid and drop them, or that, for the most part, packets destined to port 1434 should be discarded as quickly as possible. If everything, including worm outbreaks, gets tunneled on port 80, get ready to loosen the purse strings, because there's no alternative other than adding more capacity.

If I were more of a conspiracy theorist, I might think that the router vendors and long-haul fiber providers are rubbing their hands gleefully in the background, funnelling dollars into the VPN marketplace to fund more and more products that do exactly that... it would certainly be one way to ensure that the demand for larger pipes and faster routers stays high for the next decade or so, until OS vendors learn to secure their software better. ^_^;;

Matt (happy to still be able to block IPs/ports at his own discretion)
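[A concrete illustration of the kind of handle Matt is describing: the Slammer filter reduces to a one-line ACL precisely because the worm is identifiable at the UDP layer. This is only a hedged, Cisco-IOS-style sketch - the interface name and list number are invented, and exact syntax varies by platform and IOS version:]

```
! Drop anything destined to the SQL Slammer port (1434/udp), permit the rest
access-list 115 deny   udp any any eq 1434
access-list 115 permit ip any any
!
interface Serial0/0
 ip access-group 115 in
```

[Tunnel the same worm over port 80, as the scenario above posits, and this handle disappears entirely.]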
I think the other point that may be escaping some people is that as more and more connections take on this VPN-like quality, as network operators we lose any visibility into the validity of the traffic itself.
As the network operators, we move bits, and that is what we should stick to moving. We do not look into packets and say, "oh look, this to me looks like evil application traffic," and we should not do that. It should not be the goal of the IS (intermediate system) to enforce policy for the traffic that passes through it. That type of enforcement should be left to the ES (end system).
Imagine how much more painful SQL Slammer would have been, if all the traffic was encapsulated in port 80 between sites, and only hit port 1434 locally?
How do you know which traffic is good and which traffic is evil?
At least today, we can decide that 92-byte ICMP echo-request packets are invalid, and drop them; or that for the most part, packets destined to port 1434 should be discarded as quickly as possible.
How does your IS know what a _particular_ ES uses port 1434 for? Alex
On Wed, 29 Oct 2003, Alex Yuriev wrote:
As the network operators, we move bits and that is what we should stick to moving.
We do not look into packets and say, "oh look, this to me looks like evil application traffic," and we should not do that. It should not be the goal of the IS to enforce policy for the traffic that passes through it. That type of enforcement should be left to the ES.
Well, that is a nice theory, but I'd like to see how you react to a 2Gb/s DoS attack, and whether you really intend to put filters at the edge or would not prefer to do it at the entrance to your network. The Slammer virus is just like a DoS; that is why many are filtering it at the highest possible level, as well as at all points where traffic comes in from customers. -- William Leibzon Elan Networks william@elan.net
On Wed, 29 Oct 2003, Alex Yuriev wrote:
As the network operators, we move bits and that is what we should stick to moving.
We do not look into packets and say, "oh look, this to me looks like evil application traffic," and we should not do that. It should not be the goal of the IS to enforce policy for the traffic that passes through it. That type of enforcement should be left to the ES.
Well, that is a nice theory, but I'd like to see how you react to a 2Gb/s DoS attack, and whether you really intend to put filters at the edge or would not prefer to do it at the entrance to your network. The Slammer virus is just like a DoS; that is why many are filtering it at the highest possible level, as well as at all points where traffic comes in from customers.
Actually, no, it is not theory. When you are slammed with N gigabits/sec of traffic hitting your network, if you do not have enough capacity to deal with the attack, no amount of filtering will help you, since by the time you apply a filter it is already too late - the incoming lines have no room left for the "non-evil" packets. Leave content filtering to the ES, and *force* the ES to filter the content. Let the IS be busy moving bits. Alex
On Wed, 29 Oct 2003, Alex Yuriev wrote:
application traffic," and we should not do that. It should not be the goal of the IS to enforce policy for the traffic that passes through it. That type of enforcement should be left to the ES.
Well, that is a nice theory, but I'd like to see how you react to a 2Gb/s DoS attack, and whether you really intend to put filters at the edge or would not prefer to do it at the entrance to your network. The Slammer virus is just like a DoS; that is why many are filtering it at the highest possible level, as well as at all points where traffic comes in from customers.
Actually, no, it is not theory.
When you are slammed with N gigabits/sec of traffic hitting your network, if you do not have enough capacity to deal with the attack, no amount of filtering will help you, since by the time you apply a filter it is already too late - the incoming lines have no room left for the "non-evil" packets.
This concept does not work on every network. You may very well have enough capacity to handle all the traffic from your upstream provider (you probably don't want to, and will ask them to filter as well), but the actual line to the POP where the customer is connected may be smaller; and even if you do have enough capacity to the POP, the extra traffic going there will greatly affect IGP routing on the network and may cause problems for customers in completely different cities.
Leave content filtering to the ES, and *force* the ES to filter the content.

It's not content filtering. I'm not filtering only certain HTML traffic (like access to porn sites); I'm filtering traffic that is causing harm to my network, and if I know what traffic is causing problems for me, I'll filter it the first chance I get.
-- William Leibzon Elan Networks william@elan.net
Leave content filtering to the ES, and *force* the ES to filter the content.

It's not content filtering. I'm not filtering only certain HTML traffic (like access to porn sites); I'm filtering traffic that is causing harm to my network, and if I know what traffic is causing problems for me, I'll filter it the first chance I get.
It is content filtering. You are filtering packets that you think are causing problems to the ES that you may not control. Alex
Recently, alex@yuriev.com (Alex Yuriev) wrote:
Leave content filtering to the ES, and *force* the ES to filter the content.

It's not content filtering. I'm not filtering only certain HTML traffic (like access to porn sites); I'm filtering traffic that is causing harm to my network, and if I know what traffic is causing problems for me, I'll filter it the first chance I get.
It is content filtering. You are filtering packets that you think are causing problems to the ES that you may not control. Alex
Alex, please re-read the first paragraph. He said "I'm filtering traffic that is causing harm to *my* network..." (emphasis mine). He's not filtering out packets he thinks are causing problems to the ES, he's filtering out packets that are causing him problems directly, as the IS. Matt
Alex, please re-read the first paragraph. He said "I'm filtering traffic that is causing harm to *my* network..." (emphasis mine).
He's not filtering out packets he thinks are causing problems to the ES, he's filtering out packets that are causing him problems directly, as the IS.
And since the IS is not the ES, it SHOULD NOT be filtering based on content, since it is NOT the IS's content. Again, *force* the ES to filter, and hold it responsible for not doing it. Alex
At 02:41 PM 10/30/2003, Alex Yuriev wrote:
Alex, please re-read the first paragraph. He said "I'm filtering traffic that is causing harm to *my* network..." (emphasis mine).
He's not filtering out packets he thinks are causing problems to the ES, he's filtering out packets that are causing him problems directly, as the IS.
And since the IS is not the ES, it SHOULD NOT be filtering based on content, since it is NOT the IS's content. Again, *force* the ES to filter, and hold it responsible for not doing it.
Do you have a generator in your colo/server space? Why? To follow your logic out, should you not simply be *forcing* the electric company to provide power and hold it responsible for not doing so? (Hmm, no, that is slightly different, as you are a direct customer.)

A better example: if you are UPS and a package being shipped is emitting RF that is interfering with your plane's avionics, should you not remove that package from the shipment (filter it out, as it were)? Or do you simply carry on and crash the plane, destroying the other packages on board, and then try to hold the sender of the "bad" package responsible?

It is sound business logic that if something is impacting your ability to provide service *and* you are provided with the means to address the problem, you should utilize those means (within the extent allowed by the law and your legal agreements).

-Chris -- \\\|||/// \ StarNet Inc. \ Chris Parker \ ~ ~ / \ WX *is* Wireless! \ Director, Engineering | @ @ | \ http://www.starnetwx.net \ (847) 963-0116 oOo---(_)---oOo--\------------------------------------------------------ \ Wholesale Internet Services - http://www.megapop.net
to the ES, he's filtering out packets that are causing him problems directly, as the IS.

And since the IS is not the ES, it SHOULD NOT be filtering based on content, since it is NOT the IS's content. Again, *force* the ES to filter, and hold it responsible for not doing it.

Do you have a generator in your colo/server space? Why? To follow your logic out, should you not simply be *forcing* the electric company to provide power and hold it responsible for not doing so? (Hmm, no, that is slightly different, as you are a direct customer.)
I am so glad that you used that example. The way people currently propose everyone operate is equivalent to a company that transmits AC power to a customer deciding that some part of the AC waveform is "harmful" to its equipment, and therefore should be filtered out. Of course, no one bothers to tell the customer that the filter exists, or what is being filtered, or when, or how.
Better example: if you are UPS and a package being shipped is emitting RF that is interfering with your plane's avionics, should you not remove that package from the shipment (filter it out, as it were)?
Another excellent example - UPS will not remove that. The shipper will.
It is sound business logic that if something is impacting your ability to provide service *and* you are provided with the means to address the problem, that you should utilize those means ( w/ in the extent allowed by the law and your legal agreements ).
The first part of any legal agreement establishes the parties subject to it. That is exactly what you are missing while being an IS. Alex
At 03:25 PM 10/30/2003, Alex Yuriev wrote:
to the ES, he's filtering out packets that are causing him problems directly, as the IS.

And since the IS is not the ES, it SHOULD NOT be filtering based on content, since it is NOT the IS's content. Again, *force* the ES to filter, and hold it responsible for not doing it.

Do you have a generator in your colo/server space? Why? To follow your logic out, should you not simply be *forcing* the electric company to provide power and hold it responsible for not doing so? (Hmm, no, that is slightly different, as you are a direct customer.)
I am so glad that you used that example.
The way people currently propose everyone operate is equivalent to a company that transmits AC power to a customer deciding that some part of the AC waveform is "harmful" to its equipment, and therefore should be filtered out. Of course, no one bothers to tell the customer that the filter exists, or what is being filtered, or when, or how.
So electric grids do not have any mechanisms to disconnect from other grids (i.e., stop "transiting" their electricity) if one is doing something that causes problems on the local grid? As a customer, I would very much like my provider to filter out waveforms that would prevent their ability to provide me with my service.

If the issue is how to communicate what is being filtered to the customer, then we simply need to find a way to do that. The solution to "it is hard to communicate what is being filtered to the end-users" is not "oh well, we won't filter anything." At least not as I see it.

Supposing a network *did* provide a way to inform customers what was being filtered. Would you still object to the filtering?
Another excellent example - UPS will not remove that. The shipper will.
How? I'm the shipper. I put the RF-generating device into a package and give it to UPS. They will do nothing to remove it or refuse to ship it? It is only up to me not to do it? Al Qaeda would love for that to be true, I'm sure. :)
The first part of any legal agreement establishes the parties subject to it. That is exactly what you are missing while being an IS.
There is a chain of agreements connecting you to the source/dest of any traffic on your network. Even if it is a customer of a customer of a customer, you have a chain of agreements that establishes you as a party. In what scenario would there not be a chain of agreements to connect you as a party?

-Chris
The way people currently propose everyone operate is equivalent to a company that transmits AC power to a customer deciding that some part of the AC waveform is "harmful" to its equipment, and therefore should be filtered out. Of course, no one bothers to tell the customer that the filter exists, or what is being filtered, or when, or how.
So electric grids do not have any mechanisms to disconnect from other grids (i.e., stop "transiting" their electricity) if one is doing something that causes problems on the local grid? As a customer, I would very much like my provider to filter out waveforms that would prevent their ability to provide me with my service.
They disconnect the SOURCE of the problem, forcing the SOURCE to behave. That is the equivalent of forcing the ES to behave.
If the issue is how to communicate what is being filtered to the customer, then we simply need to find a way to do that. The solution to "it is hard to communicate what is being filtered to the end-users" is not "oh well, we won't filter anything." At least not as I see it.
Traffic to port X cannot be classified as valid or invalid by any IS, because the IS does not know why such traffic exists. Traffic ES<->ES on port X can be valid or invalid, because the ES knows whether it is valid traffic. If you want to filter that traffic, filter it for a specific ES (the one that does not want it), and force whoever is sending you that traffic to play nicely. That is DIFFERENT from saying "we drop all packets that match port X".
Supposing a network *did* provide a way to inform customers what was being filtered. Would you still object to the filtering?
If I request that traffic, of course I would object!
Another excellent example - UPS will not remove that. The shipper will.
How? I'm the shipper. I put the RF-generating device into a package and give it to UPS. They will do nothing to remove it or refuse to ship it? It is only up to me not to do it? Al Qaeda would love for that to be true, I'm sure. :)
After that package is removed, you, the shipper, are going to have your hands slapped very hard, which will force you to behave in the future. By doing this, we have successfully enforced ES filtering.
The first part of any legal agreement establishes the parties subject to it. That is exactly what you are missing while being an IS.
There is a chain of agreements connecting you to the source/dest of any traffic on your network. Even if it is a customer of a customer of a customer, you have a chain of agreements that establishes you as a party.
In what scenario would there not be a chain of agreements to connect you as a party?
Even if I have an agreement with you that you will sell me a GSR for $5.00, which you have an agreement with RS to get from him, I do not have an agreement with RS that lets me get the GSR from him for $5. Alex
At 03:54 PM 10/30/2003, Alex Yuriev wrote:
The way people currently propose everyone operate is equivalent to a company that transmits AC power to a customer deciding that some part of the AC waveform is "harmful" to its equipment, and therefore should be filtered out. Of course, no one bothers to tell the customer that the filter exists, or what is being filtered, or when, or how.
So electric grids do not have any mechanisms to disconnect from other grids (i.e., stop "transiting" their electricity) if one is doing something that causes problems on the local grid? As a customer, I would very much like my provider to filter out waveforms that would prevent their ability to provide me with my service.
They disconnect the SOURCE of the problem, forcing the SOURCE to behave. That is the equivalent of forcing the ES to behave.
The source of the problem of bad packets is where they ingress to my network. I disconnect the flow of bad packets through filtering. What is the difference, other than that I do not remove an entire interconnect, only the portion of packets that is affecting my ability to provide services?
If the issue is how to communicate what is being filtered to the customer, then we simply need to find a way to do that. The solution to "it is hard to communicate what is being filtered to the end-users" is not "oh well, we won't filter anything." At least not as I see it.
Traffic to port X cannot be classified as valid or invalid by any IS, because the IS does not know why such traffic exists. Traffic ES<->ES on port X can be valid or invalid, because the ES knows whether it is valid traffic. If you want to filter that traffic, filter it for a specific ES (the one that does not want it), and force whoever is sending you that traffic to play nicely. That is DIFFERENT from saying "we drop all packets that match port X".
Consider the recent scanning behaviour of the Nachi/Welchia worms. You now have *many* sources and *many* destinations. Given the overwhelming traffic causing problems on the network (considering that several commonly used networking devices were not able to keep a forwarding table due to the size of all the src/dst pairs), what steps would you suggest be taken?

Consider that you are running a network with tens of thousands of end-users connecting and disconnecting at random points in the network. Do you enter a specific reflexive rule for every src/dst pair? Or do you implement wide-scale filtering of the traffic if it is easily identifiable based on the "signature" of src port/dst port/payload?
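[The wide-scale "signature" filtering Chris alludes to was, in that era, often built with Cisco's MQC packet-length matching, since Nachi's ICMP echo probes were a fixed 92 bytes at the IP layer. This is a hedged, IOS-style sketch only - the class/policy names and interface are invented, and exact syntax and the availability of the drop action vary by IOS version:]

```
! ACL selects ICMP echo-requests
access-list 120 permit icmp any any echo
!
! Class matches only the 92-byte echos that fit the Nachi scanning signature
class-map match-all nachi-scan
 match access-group 120
 match packet length min 92 max 92
!
policy-map drop-nachi
 class nachi-scan
  drop
!
interface GigabitEthernet0/0
 service-policy input drop-nachi
```

[On IOS images without a drop action, a common workaround was to police the class with conform-action drop.]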
Supposing a network *did* provide a way to inform customers what was being filtered. Would you still object to the filtering?
If I request that traffic, of course I would object!
And if service goes down for you as I serve a DoS to another customer, would you also object in that case? Even if the other customer had not yet complained to me about the DoS?
Another excellent example - UPS will not remove that. The shipper will.
How? I'm the shipper. I put the RF-generating device into a package and give it to UPS. They will do nothing to remove it or refuse to ship it? It is only up to me not to do it? Al Qaeda would love for that to be true, I'm sure. :)
After that package is removed, you, the shipper, are going to have your hands slapped very hard, which will force you to behave in the future. By doing this, we have successfully enforced ES filtering.
Right, and that assumes that every ES wants to do the right thing and knows better. Just like everybody used to have open SMTP relays, as people who did bad things with SMTP got their hands slapped.

And since UPS is rejecting only certain packages, they have just implemented filtering as an IS, based on the contents of the package they are being asked to carry, despite my desire as a shipper to ship it, and a corresponding desire of the receiver to receive it.
There is a chain of agreements connecting you to the source/dest of any traffic on your network. Even if it is a customer of a customer of a customer, you have a chain of agreements that establishes you as a party.
In what scenario would there not be a chain of agreements to connect you as a party?
Even if I have an agreement with you that you will sell me a GSR for $5.00, which you have an agreement with RS to get from him, I do not have an agreement with RS that lets me get the GSR from him for $5.
I don't see how that is the same thing here. I have an agreement with cust X to provide services in accordance with my AUP. Cust X resells that service to cust Y, etc. Cust Y is bound to the terms and conditions of my agreement with cust X, even though I do not have a direct agreement with cust Y.

-Chris
On Thu, 30 Oct 2003, Chris Parker wrote:
The source of the problem of bad packets is where they ingress to my network. I disconnect the flow of bad packets through filtering. What is the difference, other than that I do not remove an entire interconnect, only the portion of packets that is affecting my ability to provide services?
If the *content* of the packets is breaking your network: Your network is obviously broken.
Tell that to Cisco, Nortel, and any other vendor whose gear can handle huge rates of "typical" traffic but dies under loads far below line rate when the pattern of addresses (or options) in the packets causes the flow cache to thrash. (See Cisco's http://www.cisco.com/warp/public/63/ts_codred_worm.shtml as an example.) Tell that to any router, switch, or end-system vendor who recently found out what happens when a worm forces near-simultaneous ARP requests for every possible address on a subnet.

I'm afraid that those of us building actual networks are forced to do so using actual hardware that actually exists today, and using actual hardware that was actually purchased several years ago and which cannot be forklifted out. You call the network "obviously broken"; I call it "the only one that can be built today".

Matthew Kaufman matthew@eeph.com
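[The Cisco document Matthew cites describes classifying Code Red with NBAR by matching the exploit URL in the HTTP request. A hedged sketch of that approach - interface name invented, syntax varies by IOS version:]

```
! Use NBAR to classify HTTP requests for the Code Red exploit URL
class-map match-any http-hacks
 match protocol http url "*default.ida*"
!
! Mark the matching packets so they can be dropped elsewhere
policy-map mark-inbound-http-hacks
 class http-hacks
  set ip dscp 1
!
interface Serial0/0
 service-policy input mark-inbound-http-hacks
```

[Note that even this workaround depends on the attack signature being visible to the IS - exactly the visibility the VPN discussion upthread says is disappearing.]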
On Fri, 31 Oct 2003, Matthew Kaufman wrote: [snip]
I'm afraid that those of us building actual networks are forced to do so using actual hardware that actually exists today, and using actual hardware that was actually purchased several years ago and which cannot be forklifted out.
You call the network "obviously broken", I call it "the only one that can be built today".
It's interesting that many rather sizable networks have weathered these events without relying on filtering, NAT, or other such behavior. Even if you're right, that doesn't make me wrong.

Any IP network conformant to Internet standards should be content-transparent. Any network which isn't is broken. Breaking under abnormal conditions is unacceptable. I am well aware of reality, but the reality is: some things need to be improved. This isn't some fundamental law of nature causing these limits. We are simply seeing the results of the "internet boom" valuation of rapid growth and profit over correctness and stability.

As the purchasers of this equipment, we have the power to demand that vendors produce products which are not broken. Doing so is our professional duty; settling on workarounds that break communications and fail to actually solve the problems is negligent. Suggesting that breaking end-to-endness is a long-term solution to these kinds of issues is socially irresponsible.
It's interesting that many rather sizable networks have weathered these events without relying on filtering, NAT, or other such behavior.
What's more interesting is how many big networks have implemented 98-byte ICMP filters, blocks on port 135, and other filters on a temporary basis on one or more (but not all) interfaces, without anyone really noticing that they're doing that. It isn't something that's well-publicized, but I know several major ISPs/NSPs which have had such filters in place, at least briefly, on either congested edge interfaces or between core and access routers to prevent problems with devices like TNTs and Shastas.
Even if you're right, that doesn't make me wrong.
True enough.
Any IP network conformant to Internet standards should be content transparent. Any network which isn't is broken.
Then they're all broken, to one extent or another. Even a piece of wire can be subjected to a denial of service attack that prevents your content from transparently reaching the far end.
Breaking under abnormal conditions is unacceptable. I am well aware of reality, but the reality is: some things need to be improved.
That some things need to be improved has been true since the very first day the Internet began operation. Of course, the users of the end systems were somewhat better behaved for the first few years, and managed to resist the temptation to deploy widespread worms until 1988.
This isn't some fundamental law of nature causing these limits. We are simply seeing the results of the "internet boom" valuation of rapid growth and profit over correctness and stability.
True.
As the purchasers of this equipment we have the power to demand vendors produce products which are not broken.
One can demand all one wants. Getting such a product can be nearly or totally impossible, depending on which features you need at the same time.
Doing so is our professional duty, settling on workarounds that break communications and fail to actually solve the problems is negligent.
But not using the workarounds that one has available in order to keep the network mostly working, and instead standing back and throwing up one's hands and saying "well, all the hardware crashed, guess our network is down entirely today" is even more negligent. It may also be a salary-reducing move.
Suggesting that breaking end-to-endness is a long-term solution to these kinds of issues is socially irresponsible.
Waiting until provably-correct routers are built, and cheap enough to deploy, may be socially irresponsible as well. There's a whole lot of good that has come out of cheap broadband access, and we'd still be waiting if we insisted on bug-free CPE and bug-free aggregation boxes that could handle any traffic pattern thrown at them. Do you actually believe that it was a BAD idea for Cisco to build a router that is more efficient (to the point of being able to handle high-rate interfaces at all) when presented with traffic flows that look like real sessions? Matthew Kaufman matthew@eeph.com
Do you actually believe that it was a BAD idea for Cisco to build a router that is more efficient (to the point of being able to handle high-rate interfaces at all) when presented with traffic flows that look like real sessions?
Why buy something that works well only sometimes ("we are very efficient when it looks like 'real' traffic" from Cisco) when you can buy ("no one told us that we should have issues with some specific packets") Juniper? Alex
Well, interestingly, in our network, Juniper makes all of our new core routers. Specifically because Cisco routers were melting down at an unacceptable rate. But there was no such thing as Juniper when we started building (so we still have a lot of Cisco routers in the network), and they don't make DSLAMs or DSL/ATM customer aggregation boxes, so we still get to deal with traffic-dependent performance. And I'm sure we're not the only network in this situation.

Should I replace every box in the network with a Juniper and pass the cost along to the customers? (New line item on the bills: "we won't filter worm traffic tax".)

Even if I had an all-Juniper network, I'd still need to decide what to do about DDoS attacks... Do I just call my circuit vendors and keep adding OC48s until the problem goes away?

Matthew Kaufman matthew@eeph.com
Even if I had an all-Juniper network, I'd still need to decide what to do about DDOS attacks... Do I just call my circuit vendors and keep adding OC48s until the problem goes away?
But isn't this just trying to put a square peg into a round hole? Wouldn't it be better to let routers route, switches switch, and filter boxen filter? I know people like to have routers talk directly to each other, but there are certain high-capacity upper-layer filter boxen out there that, when inserted into the link, can handle this nastiness, so a router doesn't over-work its designed-to-be-lazy processor.
Recently, alex@yuriev.com (Alex Yuriev) wrote:
So, electric grids do not have any mechanisms to disconnect from other grids (i.e., stop "transiting" their electricity) if one is doing something that causes problems on the local grid? As a customer, I would very much like my provider to filter out waveforms that would impair their ability to provide me with my service.
They disconnect the SOURCE of the problem, forcing the SOURCE to behave. That is the equivalent of forcing the ES to behave.
Unfortunately, as the Northeast seaboard of the US discovered not too long ago, the electrical system is somewhat like the Internet; it attempts to route around failures, meaning that simply shutting down the link along which the damaging waveform is propagating does not prevent it from entering your grid; it simply follows a different pathway in. And in shutting down the direct pathway, you may well cause more stability problems as the flow shifts onto alternate interconnects.

Likewise, if I am network A, and a customer of mine is sending attack packets towards a customer of network B, simply shutting down the peering links between network A and network B does nothing to prevent the attack packets from entering network B. Network B would have to isolate itself completely from the rest of the Internet core in order to ensure my bad packets did not enter their network. Anything less, and as long as there is some transit path that can be used to get from my network to network B, the attack packets will still flow and enter network B.

I don't think anyone here would defend isolating themselves from the rest of the Internet as being a "better" solution than, say, putting in filters to block port 1434 traffic.
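Matt's point about routing around a severed link can be sketched in a few lines of Python. The topology and network names below are invented for illustration: even after the direct A-B peering link is shut down, a breadth-first search still finds a transit path from A to B.

```python
from collections import deque

def reachable(adjacency, src, dst):
    """Breadth-first search: can traffic from src still reach dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

# Hypothetical topology: A peers directly with B, and both
# also connect to transit networks C and D.
links = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"A", "B"},
}

# Shut down the direct A-B peering link...
links["A"].discard("B")
links["B"].discard("A")

# ...and A's attack packets can still reach B through transit.
print(reachable(links, "A", "B"))  # True
```

Only by dropping every interconnect that can carry a path to B (i.e., isolating B completely) does reachability go away, which is the point being made above.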
Traffic to port X cannot be specified as valid or invalid for any IS, because the IS does not know why such traffic exists.
We're not saying the traffic is invalid; we're saying the traffic is causing us harm. As with most organisms, there is a strong instinct for self-preservation. If the traffic is causing extensive degradation to the IS, it's better for the IS to try to preserve itself by limiting the impact of the traffic, regardless of whether it is valid or not.

I'm starting to get the sense that you've never actually been in the hot seat of a major network before, so for the sake of everyone who has, who is no doubt getting rather tired of your stubborn stance, I'll make this my last public response on the issue. Feel free to continue this via private email if you'd like.
Alex
Matt
On Thu, 30 Oct 2003 12:12:22 EST, Alex Yuriev said:
Leave content filtering to the ES, and *force* ES to filter the content. It's not content filtering. I'm not filtering only certain HTML traffic (like access to porn sites); I'm filtering traffic that is causing harm to my network, and if I know what traffic is causing problems for me, I'll filter it first chance I get.
It is content filtering. You are filtering packets that you think are causing problems to the ES that you may not control.
No, he said quite clearly he's filtering packets (such as Nachi ICMP) that are causing harm to *his* network. He gets to make a choice - filter the known problem packets so the rest of the traffic can get through, or watch the network melt down and nobody gets anything.
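The kind of coarse signature filtering being debated here can be sketched as follows. This is a minimal illustration, not any vendor's ACL syntax, and the Packet fields are invented for the example; it drops the two worm signatures mentioned in the thread (92-byte Nachi ICMP echo-requests and Slammer traffic to port 1434) and passes everything else.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    protocol: str       # "icmp", "tcp", or "udp"
    length: int         # total IP packet length in bytes
    dst_port: int = 0
    icmp_type: int = 0  # 8 = echo-request

def should_drop(pkt: Packet) -> bool:
    # Nachi/Welchia signature: 92-byte ICMP echo-request
    if pkt.protocol == "icmp" and pkt.icmp_type == 8 and pkt.length == 92:
        return True
    # Slammer-style signature: traffic to port 1434 (MS-SQL monitor)
    if pkt.protocol in ("tcp", "udp") and pkt.dst_port == 1434:
        return True
    return False

print(should_drop(Packet("icmp", 92, icmp_type=8)))    # True
print(should_drop(Packet("udp", 404, dst_port=1434)))  # True
print(should_drop(Packet("tcp", 1500, dst_port=80)))   # False
```

Note that the filter matches only headers, never payload, which is exactly why it stops working once everything is tunneled over port 80, as discussed earlier in the thread.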
He needs to fix his network so those 92 byte ICMP packets won't break it.

Alex
Are you actually saying that providers in the middle should build their networks to accommodate any amount of DDOS traffic their ingress can support instead of filtering it at their edge? How do you expect them to pay for that? Do you really want $10,000/megabit transit costs? Owen --On Friday, October 31, 2003 7:43 AM -0500 Alex Yuriev <alex@yuriev.com> wrote:
He needs to fix his network so those 92 byte ICMP packets wont break it.
Alex
-- If it wasn't signed, it probably didn't come from me.
Are you actually saying that providers in the middle should build their networks to accommodate any amount of DDOS traffic their ingress can support instead of filtering it at their edge? How do you expect them to pay for that? Do you really want $10,000/megabit transit costs?
I remember GM saying something like that about this car that put Nader on the political arena. Are we that dumb that we need to be taught the same lessons? Fix the networks. Force the customers to play by the rules.

Alex
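Owen's transit-cost point above can be made concrete with rough back-of-the-envelope arithmetic. All the figures below are illustrative assumptions, not quoted prices: the comparison is between filtering attack traffic at the edge versus buying enough transit headroom to carry a worst-case flood blindly end to end.

```python
# Illustrative numbers only (not real prices or traffic levels)
normal_traffic_mbps = 2_000    # steady-state customer traffic
worst_case_ddos_mbps = 3_000   # extra headroom to absorb a flood unfiltered
cost_per_mbps_month = 100      # assumed transit price, $/Mb/s/month

filtered = normal_traffic_mbps * cost_per_mbps_month
unfiltered = (normal_traffic_mbps + worst_case_ddos_mbps) * cost_per_mbps_month

print(f"filter at the edge: ${filtered:,}/month")    # $200,000/month
print(f"carry everything:   ${unfiltered:,}/month")  # $500,000/month
```

Under these assumed numbers, carrying the attack traffic rather than filtering it raises the monthly transit bill by 150%, which is the cost Alex's "fix the networks" position would pass along to customers.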
I remember GM saying something like that about this car that put Nader on the political arena. Are we that dumb that we need to be taught the same lessons?
GM seems to still be building cars and trucks, and Nader lost a presidential election. Which lesson were we supposed to learn? Matthew Kaufman matthew@eeph.com
I remember GM saying something like that about this car that put Nader on the political arena. Are we that dumb that we need to be taught the same lessons? GM seems to still be building cars and trucks, and Nader lost a presidential election.
GM seems to also have cut a very big check to pay the judgements. Alex
Recently, alex@yuriev.com (Alex Yuriev) wrote:
On Wed, 29 Oct 2003, Alex Yuriev wrote:
As the network operators, we move bits and that is what we should stick to moving.
We do not look into packets and see "oh look, this to me looks like an evil application traffic", and we should not do that. It should not be the goal of IS to enforce the policy for the traffic that passes through it. That type of enforcement should be left to ES.
Well, that is a nice theory, but I'd like to see how you react to a 2Gb DoS attack, and whether you really intend to put filters at the edge or would not prefer to do it at the entrance to your network. The Slammer worm is just like a DoS attack; that is why many are filtering it at the highest possible level, as well as at all points where traffic comes in from the customers.
Actually, no, it is not theory.
When you are slammed with N gigabits/sec of traffic hitting your network, if you do not have enough capacity to deal with the attack, no amount of filtering will help you, since by the time you apply a filter it is already too late - the incoming lines have no place for "non-evil" packets.
And how many people here operate non-oversubscribed networks? I mean completely non-oversubscribed end to end: every end customer link's worth of capacity is reserved through the network from the customer edge access point, to the aggregation routers, through the core routers and backbone links out to the peering points, down to the border routers, and out through the peering ports? I've worked at several different companies, and none of them have run truly non-oversubscribed networks; the economics just aren't there to support doing that.

So having 3 Gb of DoS traffic coming across a half dozen peering OC48s isn't that bad; but having it try to fit onto a pair of OC48s into the backbone that are already running at 40% capacity means you're SOL unless you filter some of that traffic out. And I've been in that situation more times than I'd like to remember, because you can't justify increasing capacity internally from a remote peering point into the backbone simply to be able to handle a possible DoS attack.

Even if you _do_ upgrade capacity there, and you carry the extra 3Gb of traffic from your peering links through your core backbone, and off to your access device, you suddenly realize that the gig port on your access device is now hosed. You can then filter the attack traffic out on the device just upstream of the access box, but then you're carrying it through your core only to throw it away after using up backbone capacity; why not discard it sooner rather than later, if you're going to have to discard it anyhow?
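The capacity squeeze in the scenario above checks out with quick arithmetic (an OC-48 carries roughly 2.488 Gb/s; the link counts and utilization are taken from the message, the rest is a rough sketch):

```python
OC48_GBPS = 2.488  # approximate OC-48 payload rate

peering_capacity = 6 * OC48_GBPS          # ~14.9 Gb/s of peering edge
backbone_capacity = 2 * OC48_GBPS         # ~5.0 Gb/s into the backbone
backbone_load = 0.40 * backbone_capacity  # ~2.0 Gb/s of existing traffic
attack_gbps = 3.0                         # the DoS flood from the peers

# The peering edge absorbs the flood easily, but the backbone pair
# has less headroom than the attack needs on top of existing load.
headroom = backbone_capacity - backbone_load
print(f"backbone headroom: {headroom:.2f} Gb/s")  # 2.99 Gb/s
print(f"attack traffic:    {attack_gbps:.2f} Gb/s")
```

The attack alone exceeds the remaining backbone headroom, so something gets discarded; the only question, as the message argues, is whether you choose what to drop (filter the known-bad traffic) or let the queues choose for you.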
Leave content filtering to the ES, and *force* ES to filter the content. Let IS be busy moving bits. Alex
I think you'll find very, very few networks can follow that model; the IS component almost invariably has some level of statistical aggregation of traffic occurring that forces packet discard to occur during heavy attack or worm activity. And under those circumstances, there is a strong preference to discard "bad" traffic rather than "good" traffic if at all possible. One technique we currently use for making those decisions is looking at the type of packets; are they 92 byte ICMP packets, are they TCP packets destined for port 1434, etc. I'd be curious to see what networks you know of where the IS component does *no* statistical aggregation of traffic whatsoever. :) Matt
And how many people here operate non-oversubscribed networks?
The right question here should be "How many people here operate non-super-oversubscribed networks?" Oversubscribed by a few percent is one thing; oversubscribed the way a certain cable company in NEPA does it is another.[1]
So having 3 Gb of DoS traffic coming across a half dozen peering OC48s isn't that bad; but having it try to fit onto a pair of OC48s into the backbone that are already running at 40% capacity means you're SOL unless you filter some of that traffic out.
Why does your backbone have only two OC48s that are 40% utilized if you have half a dozen peering OC48s that can easily take those 3Gb/sec?
And I've been in that situation more times than I'd like to remember, because you can't justify increasing capacity internally from a remote peering point into the backbone simply to be able to handle a possible DoS attack.
This means that the PNIs of such a network are full already. So we are back to the super-oversubscribed issue.
Even if you _do_ upgrade capacity there, and you carry the extra 3Gb of traffic from your peering links through your core backbone, and off to your access device, you suddenly realize that the gig port on your access device is now hosed. You can then filter the attack traffic out on the device just upstream of the access box, but then you're carrying it through your core only to throw it away after using up backbone capacity; why not discard it sooner rather than later, if you're going to have to discard it anyhow?
Because you do not know what is the "evil" traffic and what is the "good" traffic.
And under those circumstances, there is a strong preference to discard "bad" traffic rather than "good" traffic if at all possible. One technique we currently use for making those decisions is looking at the type of packets; are they 92 byte ICMP packets, are they TCP packets destined for port 1434, etc.
And this technique presumes that the backbone routers know which packets their customers want to go through and which ones they do not. Again, this is not the job of backbone routers. It is a kludge and should be accepted as a kludge.
I'd be curious to see what networks you know of where the IS component does *no* statistical aggregation of traffic whatsoever. :)
The example that you are using is not based on statistical traffic aggregation. Rather it is based on an arbitrary decision of what is good and what is bad traffic (just like certain operators that claimed that DHS ordered them to block certain ports).
Matt
Alex

[1] Bring three T1s of IP. Sell service to several hundred cable customers.
participants (11)

- Alex Yuriev
- Chris Parker
- Greg Maxwell
- Kuhtz, Christian
- Leo Bicknell
- matt@petach.org
- Matthew Kaufman
- Owen DeLong
- Ray Burkholder
- Valdis.Kletnieks@vt.edu
- william@elan.net