Per Site QOS policy with Cisco IOS-XE
I have a question for the QoS gurus out there.

We are having some problems with packet loss for our smaller MPLS locations. The packet loss is due to the large speed differential between our hub site (150 Mb/s) and the branch office locations (single T1 to 4.5 Mb/s multilinks). It only seems to impact really bursty applications like our web proxy. I have been around and around with WindStream trying to get some extra buffer or random early detection enabled on the smaller interfaces in my MPLS network. So far they are unwilling to do a custom policy, and none of their standard policies have enough buffer to handle the bursts. They do FIFO tail drop in every queue, so I can't even choose a policy that has WRED implemented.

I am looking for a way to solve the problem on my side. I can create a shaper for the proxy and match off an access list to the smaller sites, but I am either forced to do bandwidth reservations for each site or have multiple sites share the same shaper. Here is an example of what I was playing around with:

ip access-list extended ProxyT1Sites
 permit tcp host 10.x.x.x 10.x.x.x 0.0.0.255
 permit tcp host 10.x.x.x 10.x.x.x 0.0.0.255
class-map match-any ProxyShaperT1
 match access-group name ProxyT1Sites
policy-map WindStream
 class VOICE
  priority percent 25
  set dscp ef
 class AF41
  bandwidth percent 40
  set dscp af41
  queue-limit 1024 packets
 class ProxyShaperT1
  shape average 1536000
  bandwidth percent 1
  set dscp af21
  queue-limit 1024 packets
 class class-default
  fair-queue
  set dscp af21
  queue-limit 1024 packets

Another idea I had was to create a bunch of shaper classes all feeding the same child policy for priority queuing and bandwidth reservations based on DSCP markings. I'm just not exactly sure that this is allowed or supported. I would also run out of bandwidth allocation on the policy if I use the true bandwidth number of 150 Mb/s. It is on a Gig port, so I could just take the bandwidth statement off of the interface to give myself enough room for all of the shaper allocations. Something like this (I am omitting the access lists that match the branch subnets and the class maps for brevity):

policy-map PerSiteShaper
 class FtSmith
  shape average 1536000
  bandwidth 1536
  service-policy Scheduler
 class Dallas
  shape average 4500000
  bandwidth 4500
  service-policy Scheduler
 class NYC
  shape average 100000000
  bandwidth 100000
  service-policy Scheduler
 class class-default
  service-policy Scheduler
policy-map Scheduler
 class VOICE
  priority percent 25
  set dscp ef
 class AF41
  bandwidth percent 40
  set dscp af41
  queue-limit 1024 packets
 class class-default
  fair-queue
  set dscp af21
  queue-limit 1024 packets

Just looking for some ideas that do not involve building tunnels to our remote offices. Thanks in advance,

*Wes Tribble*
On 5/1/13, Wes Tribble <westribble@gmail.com> wrote:
I have a question for the QOS gurus out there.
cisco-nsp might be a better place to post your question. But in any case, this option looks right:
Another Idea I had was to create a bunch of shaper classes all feeding the same child policy for priority queuing and bandwidth reservations based on DSCP markings. I’m just not exactly sure that this is allowed or supported.
see http://puck.nether.net/pipermail/cisco-nsp/2007-October/044508.html:

  "So they just shaped at the hub towards the spoke to prevent overrunning the PE-CE link at the spoke. Another advantage was they didn't waste hub-PE bandwidth for traffic that would be dropped at the spoke PE-CE link anyway."

which has nothing to do with IOS-XE but does sound like what you're wanting to do.

Regards,
Lee
We are having some problems with packet loss for our smaller MPLS locations. The packet loss is due to the large speed differential between our hub site (150 Mb/s) and the branch office locations (single T1 to 4.5 Mb/s multilinks). It only seems to impact really bursty applications like our web proxy. I have been around and around with WindStream trying to get some extra buffer or random early detection enabled on the smaller interfaces in my MPLS network. So far they are unwilling to do a custom policy, and none of their standard policies have enough buffer to handle the bursts. They do FIFO tail drop in every queue, so I can't even choose a policy that has WRED implemented.
I am looking for a way to solve the problem on my side. I can create a shaper for the proxy and match off an access-list to the smaller sites, but I am either forced to do bandwidth reservations for each site, or have multiple sites share the same shaper. Here is an example of what I was playing around with:
ip access-list extended ProxyT1Sites
permit tcp host 10.x.x.x 10.x.x.x 0.0.0.255
permit tcp host 10.x.x.x 10.x.x.x 0.0.0.255
class-map match-any ProxyShaperT1
match access-group name ProxyT1Sites
policy-map WindStream
class VOICE
priority percent 25
set dscp ef
class AF41
bandwidth percent 40
set dscp af41
queue-limit 1024 packets
class ProxyShaperT1
shape average 1536000
bandwidth percent 1
set dscp af21
queue-limit 1024 packets
class class-default
fair-queue
set dscp af21
queue-limit 1024 packets
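
For reference, the VOICE and AF41 class maps referenced above are not shown in the thread; judging from the "show policy-map interface" output posted later, they appear to match along these lines (a reconstructed sketch, not necessarily the poster's exact config):

class-map match-any VOICE
 match ip dscp ef
 match ip dscp cs3
 match ip dscp af31
class-map match-any AF41
 match ip dscp af41
 match access-group name eCustodyClass
 match access-group name BloombergClass
 match access-group name LiquidPointClass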
Another idea I had was to create a bunch of shaper classes all feeding the same child policy for priority queuing and bandwidth reservations based on DSCP markings. I'm just not exactly sure that this is allowed or supported. I would also run out of bandwidth allocation on the policy if I use the true bandwidth number of 150 Mb/s. It is on a Gig port, so I could just take the bandwidth statement off of the interface to give myself enough room for all of the shaper allocations.
Something like this (I am omitting the access lists that match the branch subnets and the class maps for brevity):
policy-map PerSiteShaper
class FtSmith
 shape average 1536000
 bandwidth 1536
 service-policy Scheduler
class Dallas
 shape average 4500000
 bandwidth 4500
 service-policy Scheduler
class NYC
 shape average 100000000
 bandwidth 100000
 service-policy Scheduler
class class-default
 service-policy Scheduler
policy-map Scheduler
class VOICE
priority percent 25
set dscp ef
class AF41
bandwidth percent 40
set dscp af41
queue-limit 1024 packets
class class-default
fair-queue
set dscp af21
queue-limit 1024 packets
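
For reference, attaching the parent policy at the hub egress would look something like the sketch below, assuming the bandwidth statement is removed from the Gig port as described above (the interface name is a placeholder):

interface GigabitEthernet0/1
 ! no bandwidth statement, so percent-based allocations
 ! are taken from the 1 Gb/s port speed
 service-policy output PerSiteShaper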
Just looking for some ideas that do not involve building tunnels to our remote offices. Thanks in advance,
*Wes Tribble*
If you want to prevent a PE router from deciding which ingress packets to drop, the only plan is to send packets to spoke sites at or below the spoke line-rate. The only good way to do that is shaping on the hub router.

policy-map parent_shaper
 class class-default
  shape average 100000000   <--- 100 Mbps parent shaper
  service-policy site_shaper

policy-map site_shaper
 class t1_site
  shape average 1536000
  service-policy qos_global
 class multilink_site
  shape average 3072000
  service-policy qos_global
 class class-default
  service-policy qos_global

policy-map qos_global
 ! ... whatever you typically use here ...

Tyler Haske

On Wed, May 1, 2013 at 5:03 PM, Wes Tribble <westribble@gmail.com> wrote:
I have a question for the QOS gurus out there.
We are having some problems with packet loss for our smaller MPLS locations. The packet loss is due to the large speed differential between our hub site (150 Mb/s) and the branch office locations (single T1 to 4.5 Mb/s multilinks). It only seems to impact really bursty applications like our web proxy. I have been around and around with WindStream trying to get some extra buffer or random early detection enabled on the smaller interfaces in my MPLS network. So far they are unwilling to do a custom policy, and none of their standard policies have enough buffer to handle the bursts. They do FIFO tail drop in every queue, so I can't even choose a policy that has WRED implemented.
Tyler,

I would love to implement a policy similar to that one. Unfortunately, I don't believe you can have two tiers of shaping like that in a policy. Most of the two-tiered shaping solutions I have seen involve using a VRF to shape to the aggregate rate and then use a second VRF to shape to the site rate. This is to get around the three-tier policy limitations.

With that said, if you have something like that configured and working, I would love to see the config and the "show policy-map interface" output. That is exactly the kind of policy I was originally looking to implement, but then I ran into those limitations.

Thanks for the reply. Great idea in concept. If only we could implement.

On Wed, May 8, 2013 at 9:02 AM, Tyler Haske <tyler.haske@gmail.com> wrote:
If you want to prevent a PE router from deciding which ingress packets to drop, the only plan is to send packets to spoke sites at or below the spoke line-rate. The only good way to do that is shaping on the hub router.
policy-map parent_shaper
 class class-default
  shape average 100000000   <--- 100 Mbps parent shaper
  service-policy site_shaper

policy-map site_shaper
 class t1_site
  shape average 1536000
  service-policy qos_global
 class multilink_site
  shape average 3072000
  service-policy qos_global
 class class-default
  service-policy qos_global

policy-map qos_global
 ! ... whatever you typically use here ...
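
For reference, the t1_site and multilink_site classes referenced above are not defined anywhere in the thread; they would typically match each spoke's subnets along these lines (a hypothetical sketch; names and subnets are placeholders):

ip access-list extended T1SiteSubnets
 permit ip any 10.1.1.0 0.0.0.255
ip access-list extended MultilinkSiteSubnets
 permit ip any 10.2.2.0 0.0.0.255
class-map match-any t1_site
 match access-group name T1SiteSubnets
class-map match-any multilink_site
 match access-group name MultilinkSiteSubnets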
Tyler Haske
On Wed, May 1, 2013 at 5:03 PM, Wes Tribble <westribble@gmail.com> wrote:
I have a question for the QOS gurus out there.
We are having some problems with packet loss for our smaller MPLS locations. The packet loss is due to the large speed differential between our hub site (150 Mb/s) and the branch office locations (single T1 to 4.5 Mb/s multilinks). It only seems to impact really bursty applications like our web proxy. I have been around and around with WindStream trying to get some extra buffer or random early detection enabled on the smaller interfaces in my MPLS network. So far they are unwilling to do a custom policy, and none of their standard policies have enough buffer to handle the bursts. They do FIFO tail drop in every queue, so I can't even choose a policy that has WRED implemented.
Wes,

If the router is running HQF code for QoS [really anything later than 12.4(20)T], it should support this kind of hierarchy. It's a common policy I have customers implement all the time.

http://www.cisco.com/en/US/docs/ios/qos/configuration/guide/qos_frhqf_suppor...

On Wed, May 8, 2013 at 10:54 AM, Wes Tribble <westribble@gmail.com> wrote:
Tyler,
I would love to implement a policy similar to that one. Unfortunately, I don't believe you can have two tiers of shaping like that in a policy. Most of the two-tiered shaping solutions I have seen involve using a VRF to shape to the aggregate rate and then use a second VRF to shape to the site rate. This is to get around the three-tier policy limitations.
With that said, if you have something like that configured and working, I would love to see the config and the "show policy-map interface" output. That is exactly the kind of policy I was originally looking to implement, but then I ran into those limitations.
Thanks for the reply. Great idea in concept. If only we could implement.
Thanks for the information, Tyler. I will have to play around with that kind of policy in my lab. What would you suggest if you are oversubscribing the interface? With the child policy inheriting the bandwidth of the parent shaper, wouldn't I run out of bandwidth allocation before I built all the shapers for all of my 29 sites?

On Thu, May 9, 2013 at 7:14 AM, Tyler Haske <tyler.haske@gmail.com> wrote:
Wes,
If the router is running HQF code for QoS [really anything later than 12.4(20)T], it should support this kind of hierarchy. It's a common policy I have customers implement all the time.
http://www.cisco.com/en/US/docs/ios/qos/configuration/guide/qos_frhqf_suppor...
On Wed, May 8, 2013 at 10:54 AM, Wes Tribble <westribble@gmail.com> wrote:
Tyler,
I would love to implement a policy similar to that one. Unfortunately, I don't believe you can have two tiers of shaping like that in a policy. Most of the two-tiered shaping solutions I have seen involve using a VRF to shape to the aggregate rate and then use a second VRF to shape to the site rate. This is to get around the three-tier policy limitations.
With that said, if you have something like that configured and working, I would love to see the config and the "show policy-map interface" output. That is exactly the kind of policy I was originally looking to implement, but then I ran into those limitations.
Thanks for the reply. Great idea in concept. If only we could implement.
Wes,

The earlier policy doesn't use bandwidth commands, hence it doesn't *subscribe* anything. The only thing it does is ensure that individual sites do not exceed their shaped rate. You could add bandwidth statements if you wanted to ensure a certain site is always guaranteed a certain amount of bandwidth from the parent shaper. You can't oversubscribe with the bandwidth command.

policy-map parent_shaper
 class class-default
  shape average 100000000
  service-policy site_shaper

policy-map site_shaper
 class t1_site
  shape average 1536000
  bandwidth percent 1
  service-policy qos_global
 class multilink_site
  shape average 3072000
  bandwidth percent 2
  service-policy qos_global
 class class-default
  bandwidth percent 97
  service-policy qos_global

policy-map qos_global
 ! ... whatever you want here.

This would make sure that large sites don't starve out small spoke sites for bandwidth.

On Thu, May 9, 2013 at 8:58 AM, Wes Tribble <westribble@gmail.com> wrote:
Thanks for the information, Tyler. I will have to play around with that kind of policy in my lab. What would you suggest if you are oversubscribing the interface? With the child policy inheriting the bandwidth of the parent shaper, wouldn't I run out of bandwidth allocation before I built all the shapers for all of my 29 sites?
Tyler,

I already had a case open with TAC on this issue. This is what the CCIE assigned to the case is saying about that type of policy:

Hi Wesley,

Yes, I'm afraid that configuration is not possible. We can only mark or police traffic on this child policy. You will see the following message when trying to attach the service-policy to the interface:

---
ASR10004(config-if)#service-policy output parent_shaper
Cannot attach queuing-based child policy to a non-queuing based class
---

*This is what I sent to her:*

So this configuration is not possible?

policy-map parent_shaper
 class class-default
  shape average 100000000   <--- 100 Mbps parent shaper
  service-policy site_shaper

policy-map site_shaper
 class t1_site
  shape average 1536000
  service-policy qos_global
 class multilink_site
  shape average 3072000
  service-policy qos_global
 class class-default
  service-policy qos_global

policy-map qos_global
 class VOICE
  priority percent 25
  set dscp ef
 class AF41
  bandwidth percent 40
  set dscp af41
  queue-limit 1024 packets
 class class-default
  fair-queue
  set dscp af21
  queue-limit 1024 packets

On Thu, May 9, 2013 at 8:33 AM, Tyler Haske <tyler.haske@gmail.com> wrote:
Wes,
The earlier policy doesn't use bandwidth commands, hence it doesn't *subscribe* anything. The only thing it does is ensure that individual sites do not exceed their shaped rate. You could add bandwidth statements if you wanted to ensure a certain site is always guaranteed a certain amount of bandwidth from the parent shaper. You can't oversubscribe with the bandwidth command.
policy-map parent_shaper
 class class-default
  shape average 100000000
  service-policy site_shaper

policy-map site_shaper
 class t1_site
  shape average 1536000
  bandwidth percent 1
  service-policy qos_global
 class multilink_site
  shape average 3072000
  bandwidth percent 2
  service-policy qos_global
 class class-default
  bandwidth percent 97
  service-policy qos_global

policy-map qos_global
 ! ... whatever you want here.
This would make sure that large sites don't starve out small spoke sites for bandwidth.
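
Once attached, the per-site shapers and the child queues can be checked with the standard counters; a minimal sketch, assuming the parent policy is applied outbound on the hub WAN interface (interface name is a placeholder):

! shows offered rate, queue depth, and drops per site class
! and per child class beneath each site shaper
show policy-map interface GigabitEthernet0/1 output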
On Thu, May 9, 2013 at 8:58 AM, Wes Tribble <westribble@gmail.com> wrote:
Thanks for the information, Tyler. I will have to play around with that kind of policy in my lab. What would you suggest if you are oversubscribing the interface? With the child policy inheriting the bandwidth of the parent shaper, wouldn't I run out of bandwidth allocation before I built all the shapers for all of my 29 sites?
We had a similar problem years ago with a frame-relay <---> IMA setup. The hub end was a multiplexed ATM circuit with PVCs to each site's frame-relay circuit. The IMA speed was equal to the aggregate speed of each site's CIR. It worked great until all the sites were bursting above CIR; VoIP call quality would go really bad when that happened (early mornings, mainly). QoS policies could not be maintained between the frame and ATM sides.

Sprint (support contract on the CPE) and MCI (circuits) engineers finally decided to run PPP over the native protocols on each end and then apply QoS to the PPP sessions. That fixed the problem. I am not sure if something like that is possible in your case or not, though, as I am not familiar with MPLS.

I tried to find the config, but I no longer have it. I remember it used virtual templates, but don't remember the specifics (a general sketch of that arrangement follows this message).

Jason

On Thu, May 9, 2013 at 10:40 AM, Wes Tribble <westribble@gmail.com> wrote:
Tyler,
I already had a case open with TAC on this issue. This is what the CCIE assigned to the case is saying about that type of policy:
Hi Wesley,
Yes, I’m afraid that configuration is not possible. We can only mark or police traffic on this child policy.
You will see the following message when trying to attach the service-policy to the interface:
---
ASR10004(config-if)#service-policy output parent_shaper
Cannot attach queuing-based child policy to a non-queuing based class
*This is what I sent to her:*
So this configuration is not possible?
policy-map parent_shaper
 class class-default
  shape average 100000000   <--- 100 Mbps parent shaper
  service-policy site_shaper

policy-map site_shaper
 class t1_site
  shape average 1536000
  service-policy qos_global
 class multilink_site
  shape average 3072000
  service-policy qos_global
 class class-default
  service-policy qos_global

policy-map qos_global
 class VOICE
  priority percent 25
  set dscp ef
 class AF41
  bandwidth percent 40
  set dscp af41
  queue-limit 1024 packets
 class class-default
  fair-queue
  set dscp af21
  queue-limit 1024 packets
On Thu, May 9, 2013 at 8:33 AM, Tyler Haske <tyler.haske@gmail.com> wrote:
Wes,
The earlier policy doesn't use bandwidth commands, hence it doesn't *subscribe* anything. The only thing it does is ensure that individual sites do not exceed their shaped rate. You could add bandwidth statements if you wanted to ensure a certain site is always guaranteed a certain amount of bandwidth from the parent shaper. You can't oversubscribe with the bandwidth command.
policy-map parent_shaper
 class class-default
  shape average 100000000
  service-policy site_shaper

policy-map site_shaper
 class t1_site
  shape average 1536000
  bandwidth percent 1
  service-policy qos_global
 class multilink_site
  shape average 3072000
  bandwidth percent 2
  service-policy qos_global
 class class-default
  bandwidth percent 97
  service-policy qos_global

policy-map qos_global
 ! ... whatever you want here.
This would make sure that large sites don't starve out small spoke sites for bandwidth.
On Thu, May 9, 2013 at 8:58 AM, Wes Tribble <westribble@gmail.com> wrote:
Thanks for the information, Tyler. I will have to play around with that kind of policy in my lab. What would you suggest if you are oversubscribing the interface? With the child policy inheriting the bandwidth of the parent shaper, wouldn't I run out of bandwidth allocation before I built all the shapers for all of my 29 sites?
--
Jason Lester
Administrator for Instructional Technology
Washington County Public Schools
Tel: 276-739-3060
Fax: 276-628-1893
http://www.wcs.k12.va.us
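
For reference, the virtual-template arrangement Jason recalls for running PPP over Frame Relay generally looks like the following hypothetical sketch (interface names, the DLCI, and the policy name are placeholders; the policy name is borrowed from Tyler's example):

interface Virtual-Template1
 ip unnumbered Loopback0
 ! QoS is applied to the PPP session instead of the underlying PVC
 service-policy output qos_global
!
interface Serial0/0/0.101 point-to-point
 ! bind the Frame Relay PVC to PPP via the virtual template
 frame-relay interface-dlci 101 ppp Virtual-Template1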
Tyler,

Thank you very much. I took off the bandwidth reservations on the child shapers and I was able to apply it to an 1841 series router in my lab. Either my TAC engineer is off base or there is some limitation with the ASR that does not exist for vanilla IOS.

QUOTE:
"The earlier policy doesn't use bandwidth commands, hence it doesn't *subscribe* anything. The only thing it does is ensure that individual sites do not exceed their shaped rate. You could add bandwidth statements if you wanted to ensure a certain site is always guaranteed a certain amount of bandwidth from the parent shaper. You can't oversubscribe with the bandwidth command."

Here is a short snippet of the "show policy-map interface" output; I cut it off after two sites for brevity.

Service-policy output: BigShaper
  Class-map: class-default (match-any)
    31694 packets, 4932119 bytes
    30 second offered rate 129000 bps, drop rate 0 bps
    Match: any
    Queueing
    queue limit 64 packets
    (queue depth/total drops/no-buffer drops) 0/0/0
    (pkts output/bytes output) 31723/4962574
    shape (average) cir 50000000, bc 1250000, be 1250000
    target shape rate 50000000

    Service-policy : PerSiteShaper
      Class-map: LittleRock (match-any)
        0 packets, 0 bytes
        30 second offered rate 0 bps, drop rate 0 bps
        Match: access-group name LittleRockSubnets
          0 packets, 0 bytes
          30 second rate 0 bps
        Queueing
        queue limit 64 packets
        (queue depth/total drops/no-buffer drops) 0/0/0
        (pkts output/bytes output) 0/0
        shape (average) cir 4608000, bc 115200, be 115200
        target shape rate 4608000

        Service-policy : Scheduler
          queue stats for all priority classes:
            queue limit 64 packets
            (queue depth/total drops/no-buffer drops) 0/0/0
            (pkts output/bytes output) 0/0
          Class-map: VOICE (match-any)
            0 packets, 0 bytes
            30 second offered rate 0 bps, drop rate 0 bps
            Match: ip dscp ef (46)
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: ip dscp cs3 (24)
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: ip dscp af31 (26)
              0 packets, 0 bytes
              30 second rate 0 bps
            Priority: 50% (28 kbps), burst bytes 1500, b/w exceed drops: 0
            QoS Set
              dscp ef
                Packets marked 0
          Class-map: AF41 (match-any)
            0 packets, 0 bytes
            30 second offered rate 0 bps, drop rate 0 bps
            Match: ip dscp af41 (34)
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: access-group name eCustodyClass
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: access-group name BloombergClass
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: access-group name LiquidPointClass
              0 packets, 0 bytes
              30 second rate 0 bps
            Queueing
            queue limit 64 packets
            (queue depth/total drops/no-buffer drops) 0/0/0
            (pkts output/bytes output) 0/0
            bandwidth 25% (14 kbps)
            QoS Set
              dscp af41
                Packets marked 0
          Class-map: class-default (match-any)
            0 packets, 0 bytes
            30 second offered rate 0 bps, drop rate 0 bps
            Match: any
            Queueing
            queue limit 64 packets
            (queue depth/total drops/no-buffer drops/flowdrops) 0/0/0/0
            (pkts output/bytes output) 0/0
            Fair-queue: per-flow queue limit 16
            QoS Set
              dscp af21
                Packets marked 0
            Exp-weight-constant: 9 (1/512)
            Mean queue depth: 0 packets
            dscp  Transmitted   ECN     Random drop  Tail/Flow drop  Minimum  Maximum  Mark
                  pkts/bytes    marked  pkts/bytes   pkts/bytes      thresh   thresh   prob

      Class-map: Chicago (match-any)
        0 packets, 0 bytes
        30 second offered rate 0 bps, drop rate 0 bps
        Match: access-group name ChicagoSubnets
          0 packets, 0 bytes
          30 second rate 0 bps
        Queueing
        queue limit 64 packets
        (queue depth/total drops/no-buffer drops) 0/0/0
        (pkts output/bytes output) 0/0
        shape (average) cir 10000000, bc 250000, be 250000
        target shape rate 10000000

        Service-policy : Scheduler
          queue stats for all priority classes:
            queue limit 64 packets
            (queue depth/total drops/no-buffer drops) 0/0/0
            (pkts output/bytes output) 0/0
          Class-map: VOICE (match-any)
            0 packets, 0 bytes
            30 second offered rate 0 bps, drop rate 0 bps
            Match: ip dscp ef (46)
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: ip dscp cs3 (24)
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: ip dscp af31 (26)
              0 packets, 0 bytes
              30 second rate 0 bps
            Priority: 50% (28 kbps), burst bytes 1500, b/w exceed drops: 0
            QoS Set
              dscp ef
                Packets marked 0
          Class-map: AF41 (match-any)
            0 packets, 0 bytes
            30 second offered rate 0 bps, drop rate 0 bps
            Match: ip dscp af41 (34)
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: access-group name eCustodyClass
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: access-group name BloombergClass
              0 packets, 0 bytes
              30 second rate 0 bps
            Match: access-group name LiquidPointClass
              0 packets, 0 bytes
              30 second rate 0 bps
            Queueing
            queue limit 64 packets
            (queue depth/total drops/no-buffer drops) 0/0/0
            (pkts output/bytes output) 0/0
            bandwidth 25% (14 kbps)
            QoS Set
              dscp af41
                Packets marked 0
          Class-map: class-default (match-any)
            0 packets, 0 bytes
            30 second offered rate 0 bps, drop rate 0 bps
            Match: any
            Queueing
            queue limit 64 packets
            (queue depth/total drops/no-buffer drops/flowdrops) 0/0/0/0
            (pkts output/bytes output) 0/0
            Fair-queue: per-flow queue limit 16
            QoS Set
              dscp af21
                Packets marked 0
            Exp-weight-constant: 9 (1/512)
            Mean queue depth: 0 packets
            dscp  Transmitted   ECN     Random drop  Tail/Flow drop  Minimum  Maximum  Mark
                  pkts/bytes    marked  pkts/bytes   pkts/bytes      thresh   thresh   prob
On 09/05/2013 17:10, Wes Tribble wrote:
Thank you very much. I took off the bandwidth reservations on the child shapers and I was able to apply it to an 1841 series router in my lab. Either my TAC engineer is off base or there is some limitation with the ASR that does not exist for vanilla IOS.
You shouldn't be surprised by this. The ASR1k platform implements QoS using dedicated silicon, which means that it's less flexible but much faster. The 1800 series routers handle everything using a single route processor and shared DRAM, which means that they are more flexible but much slower.

Nick
participants (5)
- Jason Lester
- Lee
- Nick Hilliard
- Tyler Haske
- Wes Tribble