If someone can identify what you are actually seeing, I'll check into it. If you are experiencing drops or slow traces only through the core, there is an issue with excessive de-prioritization of ICMP control messages with a particular router type (vendor) in the core. End-to-end data flow does not seem to be affected, but trace and ping latencies through the core look very weird. I've been asking customers to use trace only for path detail and to use end-to-end ping for any performance data.

Yes, the core is MPLS enabled. Diffserv is acted on only at the edges, though.

Michelle

Michelle Truman  CCIE # 8098
Principal Technical Consultant
AT&T Solutions Center
mailto:mtruman@att.com
VO: 651-998-0949  w 612-376-5137

-----Original Message-----
From: brett watson [mailto:brett@the-watsons.org]
Sent: Wednesday, March 19, 2003 1:48 PM
To: nanog@merit.edu
Subject: Re: Problems with AT&T

On Wednesday, Mar 19, 2003, at 12:28 America/Phoenix, Sean Donelan wrote:
On Wed, 19 Mar 2003, German Martinez wrote:
Anybody here seeing problems with AS7018?
...
...
If you report it to AT&T, they seem to get it fixed; but then the problems re-appear a few days later. I'm guessing that packet size is relevant, but I haven't spent much time trying to troubleshoot it.
Isn't AT&T heavily MPLSed? Maybe something to do with MPLS tunnels, or diff-serv marking?
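[Michelle's advice above generalizes: ICMP generated by router control planes can be de-prioritized, so per-hop traceroute RTTs can look alarming even when the forwarding plane is fine. A minimal sketch of the end-to-end alternative, timing TCP handshakes instead of relying on ICMP; the target host, port, and probe counts are illustrative assumptions.

import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, count: int = 10) -> list[float]:
    """Approximate end-to-end RTT by timing TCP three-way handshakes.

    Unlike per-hop traceroute timings, which depend on how eagerly each
    router's control plane generates ICMP, this measures forwarding-plane
    latency to the destination itself.
    """
    samples = []
    for _ in range(count):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2.0):
            pass  # handshake complete; the connection itself is discarded
        samples.append((time.monotonic() - start) * 1000.0)
        time.sleep(0.2)  # pace the probes
    return samples

if __name__ == "__main__":
    rtts = tcp_rtt_ms("www.example.com")  # hypothetical target
    print(f"min/avg/max = {min(rtts):.1f}/{statistics.mean(rtts):.1f}/{max(rtts):.1f} ms")
]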
If someone can identify what you are actually seeing, I'll check into it. If you are experiencing drops or slow traces only through the core, there is an issue with excessive de-prioritization of ICMP control messages with a particular router type (vendor) in the core. End-to-end data flow does not seem to be affected, but trace and ping latencies through the core look very weird. I've been asking customers to use trace only for path detail and to use end-to-end ping for any performance data.
Yes, the core is MPLS enabled. Diffserv is acted on only at the edges, though.
Michelle
It could certainly be customers who have broken themselves. I've heard lots of stories about people who do PMTUD but simultaneously filter ICMP Can't Frag messages. As soon as the path MTU drops below whatever their local box is set to (usually 1500), they "break", although due to their own screwed-up config. Since MPLS adds additional overhead, dropping the MTU, I'd seriously consider this as a possible reason.

The major problems are:
1) identifying broken customers
2) convincing customers that they are broken when they "haven't changed anything"
3) getting them to actually change

Some folks just put off the problem until later by moving to MTUs > 1500. The only benefit to this is that, hopefully, when the customer next breaks it is as a direct result of them having "changed something", which gets you over the hurdle of convincing some person that their filtering of all ICMP isn't just stupid, but is also broken.
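[A Linux-only sketch of the failure mode described above, with a hypothetical target: a UDP socket with DF forced on behaves like a PMTUD host. When the ICMP Can't Frag (Type 3, Code 4) replies get through, the kernel learns the smaller path MTU and send() fails fast with EMSGSIZE; when a customer filters all ICMP, the oversized packets are simply dropped in silence.

import errno
import socket

HOST = "192.0.2.1"  # TEST-NET-1 placeholder; substitute a real target

# IP_MTU is only exposed on Linux builds of CPython; 14 is the Linux value.
IP_MTU = getattr(socket, "IP_MTU", 14)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((HOST, 33434))  # connect, so the kernel reports ICMP errors to us
s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)

for size in (1200, 1400, 1500, 2000):
    try:
        s.send(b"\x00" * (size - 28))  # 28 = IPv4 + UDP header overhead
        print(f"{size}: sent with DF set")
    except OSError as e:
        if e.errno != errno.EMSGSIZE:
            raise
        # The kernel learned a smaller path MTU, which only happens if the
        # ICMP Can't Frag message arrived. Filter all ICMP and you never
        # get here: the oversized sends just vanish mid-path.
        pmtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
        print(f"{size}: EMSGSIZE, kernel path MTU is now {pmtu}")
]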
On Thu, Mar 20, 2003 at 03:26:35PM -0500, bdragon@gweep.net wrote:
If someone can identify what you are actually seeing, I'll check into it. If you are experiencing drops or slow traces only through the core, there is an issue with excessive de-prioritization of ICMP control messages with a particular router type (vendor) in the core. End-to-end data flow does not seem to be affected, but trace and ping latencies through the core look very weird. I've been asking customers to use trace only for path detail and to use end-to-end ping for any performance data.
Yes, the core is MPLS enabled. Diffserv is acted on only at the edges, though.
Michelle
It could certainly be customers who have broken themselves. I've heard lots of stories about people who do PMTUD but simultaneously filter ICMP Can't Frag messages.
As soon as the path MTU drops below whatever their local box is set to (usually 1500), they "break", although due to their own screwed-up config.
Since MPLS adds additional overhead, dropping the MTU, I'd seriously consider this as a possible reason.
Speaking very generally, and not about any one specific network, this is likely not the issue. MPLS leads to problems on Ethernet, but I've seen no problems on anything other than Eth/FE. GigE and POS haven't had the same issue; for one, the default POS MTU is ~4k, which is more than enough to hold packets from hosts that assume 576 or 1500, and PMTUD over an MPLS network takes the MPLS label stack size into account when doing discovery.

Also, some implementations have framers that can accept a packet that's actually MTU+(N*4), where N is typically no more than 4, and more likely 2.

And I think I can say, without breaking any confidentiality agreements, that AT&T's backbone Probably Isn't (nudge nudge, wink wink) made up of scads and scads of 10/100Mb links everywhere. :)

The biggest problem you can have with MPLS is if you have customers who are connected at 4k or 9k or what have you, and who don't do PMTUD; I've not seen this come up as a real operational issue.

.02

eric
The major problems are:
1) identifying broken customers
2) convincing customers that they are broken when they "haven't changed anything"
3) getting them to actually change

Some folks just put off the problem until later by moving to MTUs > 1500. The only benefit to this is that, hopefully, when the customer next breaks it is as a direct result of them having "changed something", which gets you over the hurdle of convincing some person that their filtering of all ICMP isn't just stupid, but is also broken.
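[To put numbers on Eric's arithmetic: each MPLS label is 4 bytes, so the label stack is exactly the MTU+(N*4) slack a tolerant framer absorbs. A worked sketch; the link MTUs and label depths are illustrative.

MPLS_LABEL_BYTES = 4

def effective_payload_mtu(link_mtu: int, label_depth: int) -> int:
    """Room left for the IP packet once the MPLS label stack is added."""
    return link_mtu - label_depth * MPLS_LABEL_BYTES

# Plain 1500-byte Ethernet with a 2-label stack (e.g. transport + VPN label):
print(effective_payload_mtu(1500, 2))  # 1492: a DF-marked 1500-byte packet won't fit
# A ~4470-byte default POS MTU absorbs the stack with room to spare:
print(effective_payload_mtu(4470, 2))  # 4462: hosts assuming 576 or 1500 are unaffected
# A framer tolerating MTU+(N*4) with N=2 accepts exactly the labeled packet:
print(1500 + 2 * MPLS_LABEL_BYTES)     # 1508 bytes on the wire, labels included
]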
On Thu, Mar 20, 2003 at 03:26:35PM -0500, bdragon@gweep.net wrote:
If someone can identify what you are actually seeing, I'll check into it. If you are experiencing drops or slow traces only through the core, there is an issue with excessive de-prioritization of ICMP control messages with a particular router type (vendor) in the core. End-to-end data flow does not seem to be affected, but trace and ping latencies through the core look very weird. I've been asking customers to use trace only for path detail and to use end-to-end ping for any performance data.
Yes, the core is MPLS enabled. Diffserv is acted on only at the edges, though.
Michelle
It could certainly be customers who have broken themselves. I've heard lots of stories about people who do PMTUD but simultaneously filter ICMP Can't Frag messages.
As soon as the path MTU drops below whatever their local box is set to (usually 1500), they "break", although due to their own screwed-up config.
Since MPLS adds additional overhead, dropping the MTU, I'd seriously consider this as a possible reason.
Speaking very generally, and not about any one specific network, this is likely not the issue. MPLS leads to problems on Ethernet, but I've seen no problems on anything other than Eth/FE. GigE and POS haven't had the same issue; for one, the default POS MTU is ~4k, which is more than enough to hold packets from hosts that assume 576 or 1500, and PMTUD over an MPLS network takes the MPLS label stack size into account when doing discovery.
Also, some implementations have framers that can accept a packet that's actually MTU+(N*4), where N is typically no more than 4, and more likely 2.
And I think I can say, without breaking any confidentiality agreements, that AT&T's backbone Probably Isn't (nudge nudge, wink wink) made up of scads and scads of 10/100Mb links everywhere. :)
All you need is one 1500-byte-MTU L2 network for a customer with a broken PMTU-D setup to experience a problem. Even if you have lots of GigE links, you often have some old gear connected to said switch which doesn't do jumbo frames, requiring that you preserve the LCD (lowest common denominator).

As someone else mentioned earlier, if there were a per-neighbor MTU, this would resolve a huge part of the problem of transitioning to jumbo frames, since it would permit staged upgrades rather than forklift all-or-nothing upgrades (always something to be avoided if possible).

I can say from experience that these broken customers _are_ out there and generally refuse to admit/accept that doing PMTU-D while simultaneously filtering all ICMP is why they are broken.

It isn't an MPLS problem; any encapsulation which reduces the effective MTU to below 1500 would tickle the customer's config bug. These days, however, few network operators are using additional encapsulations other than MPLS. Nor is it an Ethernet problem, since the point at which the customer breaks is solely determined by their local MTU. It just happens that most customers use Ethernet as their LAN media, and MTUs of less than 1500 are extremely rare (hiding their config problem).
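[A toy illustration of the lowest-common-denominator point and the per-neighbor wish above, with made-up device names: on shared media the one legacy box pins the whole segment, while per-adjacency MTUs would let already-upgraded pairs run jumbo frames immediately.

# Hypothetical devices attached to one L2 segment and their max frame sizes.
segment = {
    "core-gige-1": 9000,
    "core-gige-2": 9000,
    "legacy-fe-sw": 1500,  # one old switch pins the whole segment
}

# Shared-media reality: a single MTU for the segment, so the minimum wins.
print("segment MTU:", min(segment.values()))  # 1500 until the FE box goes

# Per-neighbor model: each adjacency negotiates min(mtu_a, mtu_b), so the
# two GigE routers could run 9000 between themselves today while legacy
# adjacencies stay at 1500: a staged upgrade instead of a forklift one.
devices = sorted(segment)
for i, a in enumerate(devices):
    for b in devices[i + 1:]:
        print(f"{a} <-> {b}: {min(segment[a], segment[b])}")
]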
The biggest problem you can have with MPLS is if you have customers who are connected at 4k or 9k or what have you, and who don't do PMTUD; I've not seen this come up as a real operational issue.
I fail to see how that would be a problem, except for the fragmentation issue. While fragmentation is a performance issue, it doesn't lead to lack of reachability. Whereas doing PMTU-D and simultaneously filtering the bits that actually make it work does lead to reachability problems.
.02
eric
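[The performance-versus-reachability split above, sketched with the same Linux-only socket options as earlier and a hypothetical target: a host that skips PMTUD sends DF-clear packets that routers fragment and deliver, while a PMTUD host behind an all-ICMP filter blackholes.

import socket

HOST = "192.0.2.1"  # placeholder target
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((HOST, 33434))

# No PMTUD: DF cleared, so a 2000-byte datagram is fragmented at any
# 1500-byte hop and still arrives. Slower, but reachable.
s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DONT)
s.send(b"\x00" * 2000)

# PMTUD with all ICMP filtered: DF set, Can't Frag replies never arrive,
# and the datagram is silently dropped at the small-MTU hop. Unreachable.
s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)
# s.send(b"\x00" * 2000)  # raises EMSGSIZE locally, or vanishes in transit
]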