Hi Nanog,

Scenario:

Transits -----(router A) Backbone (router B)----- Customers

We applied Cisco CAR at the edge routers (B) in the backbone to rate limit inbound and outbound traffic to/from customers. If the transmission rate is higher than the rate-limit threshold, IP packets are dropped by router B. How do we prevent the excess IP packets from consuming the transit links and the backbone? Here is my understanding:

- For TCP traffic (HTTP, FTP), the TCP senders will back off once the TCP window limit is reached.
- For UDP-based audio/video traffic, if the applications use RTSP and H.323, RTCP/H.245 will signal the sender to slow down the transmission if the receiver loses packets.

Did I miss anything? How about UDP traffic that is not using RTSP/H.323?

Thanks.

Suan "Ken" Yeo
Network Engineer
Aurum Technology
ken.yeo@aurumtechnology.com
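(For context, the kind of CAR configuration being discussed would look roughly like the sketch below on router B's customer-facing interface. The interface name, rate, and burst sizes are illustrative only and not taken from the thread.)

    interface Serial1/0
     description Link to customer
     ! forward traffic that conforms to 128 kbit/s, drop everything in excess
     rate-limit input  128000 16000 32000 conform-action transmit exceed-action drop
     rate-limit output 128000 16000 32000 conform-action transmit exceed-action drop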
On Thu, 18 Apr 2002, Ken Yeo wrote:

> Transits -----(router A) Backbone (router B)----- Customers
>
> We applied Cisco CAR at the edge routers (B) in the backbone to rate limit
> inbound and outbound traffic to/from customers. If the transmission rate is
> higher than the rate-limit threshold, IP packets are dropped by router B.
> How do we prevent the excess IP packets from consuming the transit links
> and the backbone?

You can't, unless you CAR on all ingress interfaces on your network toward the customers... so:

Ingress-Provider -> RTA -> RTBB -> RTB -> Customers

You need to CAR on all 'Ingress-Provider' links. This is a very sticky problem (obviously).
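(A rough sketch of what that would mean on the routers facing the transit providers, assuming a hypothetical extended ACL 101 that matches one customer's prefixes; the "huge ACL" concern Ken raises in his reply comes from having to list every customer prefix this way. Names, numbers, and prefixes are made up for illustration.)

    interface POS0/0
     description Link to transit provider
     ! police traffic headed to this customer's prefixes as it enters from the transit
     rate-limit input access-group 101 128000 16000 32000 conform-action transmit exceed-action drop
    !
    access-list 101 permit ip any 192.0.2.0 0.0.0.255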
Hi Christopher,

If CAR is applied to the routers closest to the upstream provider, the traffic can still consume the link to the provider, and the rate-limit ACL will be huge. If we could apply CAR on the upstream provider's router, the problem would be solved, but of course we do not have access to the upstream equipment. Does anyone have comments about the TCP window theory?

Suan "Ken" Yeo
Network Engineer
Aurum Technology
ken.yeo@aurumtechnology.com
On Thu, 18 Apr 2002, Ken Yeo wrote:

> If CAR is applied to the routers closest to the upstream provider, the
> traffic can still consume the link to the provider, and the rate-limit ACL
> will be huge.

Yes, CAR isn't a solution... which was my point; I made it obliquely, sorry. Somewhere the packets have to back up. You can't tell the sources to stop, so somewhere in the middle the traffic must back up :(

> If we could apply CAR on the upstream provider's router, the problem would
> be solved, but of course we do not have access to the upstream equipment.

I venture to guess your upstream won't CAR for you either :)
Suan "Ken" Yeo Network Engineer Aurum Technology ken.yeo@aurumtechnology.com ----- Original Message ----- From: "Christopher L. Morrow" <chris@UU.NET> To: "Ken Yeo" <kenyeo@on-linecorp.com> Cc: <nanog@merit.edu> Sent: Thursday, April 18, 2002 11:40 AM Subject: Re: CAR
On Thu, 18 Apr 2002, Ken Yeo wrote:
Hi Nanog,
Scenario:
Transits -----(router A)Backbone(router B)----- Customers
We applied Cisco CAR at the edge routers (B) in the Backbone to rate
inbound and outbound traffics to/from Customers. If transmission rate is higher than the rate limit threshold, IP packets are being dropped by router B. How do we prevent the excess IP packets to consume the transit links and the Backbone? Here is my understanding:
You can't unless you CAR on all ingress interfaces on your network toward the customers... so:
Ingress-Provider->RTA->RTBB->RTB->Customers
You need to CAR on all 'Ingress-Provider' links, this is a very sticky problem (obviously)
-For TCP traffics (HTTP, FTP), TCP senders will stop sending packets
when
the TCP windows threshold is reached. -For UDP based audio/video trafffics, if the applications use RTSP and H.323, RTCP/H.245 will signal the sender to slowdown the transmission if
limit the
receiver lost packets.
Did I miss anything? How about UDP traffics that are not using RTSP/H.323?
Thanks.
Suan "Ken" Yeo Network Engineer Aurum Technology ken.yeo@aurumtechnology.com
At 11:20 AM 4/18/2002 -0500, Ken Yeo wrote:

> For UDP-based audio/video traffic, if the applications use RTSP and H.323,
> RTCP/H.245 will signal the sender to slow down the transmission if the
> receiver loses packets.

No, that won't happen. Much real-time voice and video traffic is constant bit-rate, though there are codecs that offer variable bit rates within an envelope of min/max rates. When packet loss is encountered, the equipment will usually try error concealment to deal with the lost data -- how sophisticated that is depends on the equipment. Re-sending the lost data is pointless because it would arrive too late.

More importantly, the sender will not slow down its transmission. It can't: at any instant there is a fixed amount of bandwidth required to carry the real-time voice or video, and there is no way to magically use less. Buffering options are limited because the stream is real-time and end-to-end latency must be bounded. And if the required bandwidth of the stream is greater than the available bandwidth over a long period of time (long in this case means seconds), no amount of buffering can help you; you need to abandon real-time and switch to a store-and-forward paradigm.

For example, a G.729a-encoded real-time voice call requires 8 kbit/s constant bit rate (excluding IP and RTP packet headers). If it can't get 8 kbit/s, the user gets crappy voice quality.

Some streaming systems (non-real-time -- the content is stored for on-demand viewing) encode the video/audio at several different data rates and try to guess the available bandwidth for a customer's connection. This typically happens at the start of the streaming session; automatically switching to a lower rate later is hard to do, so it's rare to see it implemented. But none of this can be done for real-time voice or video.
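(To put a rough number on the header overhead excluded above, assuming the common 20 ms packetization for G.729a; this worked example is not from the thread:)

    G.729a payload:   8 kbit/s x 20 ms      = 20 bytes per packet
    IP + UDP + RTP:   20 + 8 + 12           = 40 bytes of headers per packet
    Per packet:       60 bytes, 50 packets/s
    On the wire:      60 bytes x 8 x 50     = 24 kbit/s, before layer-2 overhead

So the nominal 8 kbit/s call actually consumes roughly 24 kbit/s of IP bandwidth, and it still cannot tolerate sustained loss.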
> Did I miss anything? How about UDP traffic that is not using RTSP/H.323?

H.323 is all about call signalling; it doesn't have anything to do with congestion management. Also, I think you mean H.323 (optionally including H.245) paired with RTP -- this is the usual combination for VoIP calls. RTSP is typically used for streamed content.

Other real-time UDP protocols (those used by games such as Half-Life, for example) typically use the same kinds of concepts and techniques found in RTSP to detect packet loss so they can conceal it or react in some other way. But if they're carrying a real-time service that has minimum bandwidth requirements, there's no way for the sender to slow down.

Cheers,

Mathew

| Mathew Lodge                 | mathew@cplane.com     |
| Director, Product Management | Ph: +1 408 789 4068   |
| CPLANE, Inc.                 | http://www.cplane.com |
Hi Mathew,

Thanks for the explanation. Based on your email: if CAR is applied at router B to rate limit inbound and outbound traffic at 128 kbit/s but the real-time video takes 512 kbit/s, the traffic will still traverse upstream --> backbone, and CAR will drop 384 kbit/s of it at router B. At that point, I guess the RTSP or H.323 application will error out and stop requesting traffic?

So in order to use CAR to rate limit traffic to customers, we need to apply CAR at both the ingress and egress routers. How is everyone else doing that?

Suan "Ken" Yeo
Network Engineer
Aurum Technology
ken.yeo@aurumtechnology.com
At 05:09 PM 4/18/2002 -0500, Ken Yeo wrote:

> Thanks for the explanation. Based on your email: if CAR is applied at
> router B to rate limit inbound and outbound traffic at 128 kbit/s but the
> real-time video takes 512 kbit/s, the traffic will still traverse
> upstream --> backbone, and CAR will drop 384 kbit/s of it at router B. At
> that point, I guess the RTSP or H.323 application will error out and stop
> requesting traffic?
Yes, either the user will stop the application or it will time out.
> So in order to use CAR to rate limit traffic to customers, we need to apply
> CAR at both the ingress and egress routers. How is everyone else doing that?

Most traffic ends up getting dropped at the lowest-bandwidth link -- in your case, the link from router B to the customer. Applying CAR on this link will give you more control, but the link's queueing discipline would normally take care of implementing a drop policy that tries to be fair in some way. For links of 2 Mbit/s and under, WFQ works nicely on Cisco boxes and will at least be kinder to TCP sessions.

The reason this is normally not a problem (and why CAR on router A is probably a lot of trouble for nothing) is that TCP flows will automatically back off to the maximum end-to-end bandwidth they can get, and UDP apps that try to send too much traffic get shut down (because they don't work) or give up. Any other large traffic flow that persists for a long time and overwhelms the customer's link is probably a DoS attack.

Regards,

Mathew
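(A minimal sketch of the WFQ suggestion above; in IOS, weighted fair queueing is enabled by default on most low-speed serial interfaces, but it can also be set explicitly. The interface name is illustrative.)

    interface Serial1/0
     description Link to customer
     ! enable weighted fair queueing so one heavy flow cannot starve the rest
     fair-queue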
From: "Mathew Lodge" <mathew@cplane.com> To: "Ken Yeo" <kenyeo@on-linecorp.com>; <nanog@merit.edu> Sent: Thursday, April 18, 2002 12:23 PM Subject: Re: CAR
-For UDP based audio/video trafffics, if the applications use RTSP and H.323, RTCP/H.245 will signal the sender to slowdown the transmission if
At 11:20 AM 4/18/2002 -0500, Ken Yeo wrote: the
receiver lost packets.
No, that won't happen. Much real-time voice and video traffic is constant bit-rate, though there are CODECs that offer variable bit rates within an envelope of min/max bit rates. When packet loss is encountered, the equipment will usually try error concealment to deal with the lost data -- how sophisticated this is depends on the equipment. Re-sending the lost data is pointless because it will arrive too late.
More importantly, the sender will not slow down its transmission. It can't -- at any instant, there's a fixed amount of bandwidth required to carry the real-time voice and video, and there's no way to magically use less bandwidth. Buffering options are limited because the stream is real time and end-to-end latency must be bounded. And if the required bandwidth of the stream is greater than the available bandwidth over a long period of time (long in this case means seconds), no amount of buffering can help you. You need to abandon real-time and switch to a store-and-forward paradigm.
For example, a G.729a encoded real-time voice call requires 8Kbit/sec constant bit rate (excluding IP and RTP packet headers). If it can't get 8Kbit/sec, the user gets crappy voice quality.
Some streaming systems (non real-time -- the content is stored for on-demand viewing) encode the video/audio at several different data rates and try to guess the available bandwidth for a customer's connection. This process typically happens at the start of the streaming session, However, automatically switching to a lower rate later is hard to do, so it's rare to see it implemented. But none of this can be done for real-time voice or video.
Did I miss anything? How about UDP traffics that are not using RTSP/H.323?
H.323 is all about call signalling. It doesn't have anything to do with congestion management. Also, I think you mean H.323 (optionally including H.245) paired up with RTP -- this is the usual combination for VoIP calls. RTSP is typically used for streamed content.
Other real-time UDP protocols (protocols used by games such as Half Life, for example) typically use the same kind of concepts and techniques that are in RTSP to detect packet loss so that they can conceal it or react in some other way. But if they're carrying a real-time service that has minimum bandwidth requirements, then there's no way for the sender to slow down.
Cheers,
Mathew
| Mathew Lodge | mathew@cplane.com | | Director, Product Management | Ph: +1 408 789 4068 | | CPLANE, Inc. | http://www.cplane.com |
| Mathew Lodge | mathew@cplane.com | | Director, Product Management | Ph: +1 408 789 4068 | | CPLANE, Inc. | http://www.cplane.com |
participants (3):

- Christopher L. Morrow
- Ken Yeo
- Mathew Lodge