Cisco PPP DS-3 limitations - 42.9Mbps?
At a recent consulting opportunity, a client was only getting 40.9 Mbps on their DS-3 (clear-channel DS3 over OC12 to the IP provider). (PA-2T3 to PA-2T3, unloaded CPU; drops occur when peak octets out per sec / 8 = 40.9M.)

The client asked us why 40.9 was less than 44.210.

The IP provider indicated that "Cisco told us the maximum theoretical rate of an interface doing DS-3 with PPP encapsulation is 42.9."

I can't find documentation for the 42.9. I can't find documentation that indicates that the PPP overhead is somehow magically excluded from ifInOctets and ifOutOctets... I can't find documentation as to why 40.9 is the observed rate.

Frankly, all my DS-3s (HDLC encap) go up to 44.210, so I was hoping I could share in the collective wisdom of NANOG readers who have DS-3s to IP providers and can provide real observed feedback.

Please reply privately. Please let me know privately if you want a summary. Thanks!

Ehud Gavron
gavron@wetwork.net
We run HDLC on DS3 and we max at about 40.something too. Not sure why; I just assume there's either overhead in the frames somewhere or that the cards aren't designed to run at line speed. We run HDLC, C-bit and CRC16 on PA-2T3+. I'd be interested if anyone knows how to get the missing 10% :)

Steve
-- Stephen J. Wilcox IP Services Manager, Opal Telecom http://www.opaltelecom.co.uk/ Tel: 0161 222 2000 Fax: 0161 222 2008
we run HDLC on DS3 and we max at about 40.something too
if you're using five min (or three min) samples, and you're seeing 70%, peaks are likely much higher and some users' packets are being dropped. by 80%, enough packets are being dropped that users are likely to see the effects of exponential backoff. things do not improve above 80%. randy
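Randy's point about averages hiding peaks can be illustrated with some made-up numbers; the 50%/100% traffic split below is purely hypothetical and chosen only to show the arithmetic:

```python
# Hypothetical traffic pattern: a 5-minute (300-sample) average of 70%
# can hide long stretches at full line rate, where drops occur.

LINE_RATE = 44.21e6  # approximate DS3 payload bits/sec

# Assume 180 seconds at 50% load and 120 seconds pinned at line rate.
samples = [0.5 * LINE_RATE] * 180 + [1.0 * LINE_RATE] * 120

avg = sum(samples) / len(samples)
print(f"5-minute average: {avg / LINE_RATE:.0%}")                       # 70%
print(f"seconds at line rate: {sum(s >= LINE_RATE for s in samples)}")  # 120
```

A graph tool polling ifOutOctets every five minutes would report 70% for this interval, while the link was saturated two minutes out of five.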
Hmm, reasonable explanation... Presumably this can be improved (a little) with increased interface buffers? And possibly non-FIFO queuing, e.g. custom queuing in favour of TCP rather than UDP/ICMP etc., which won't have the backoffs.

Cheers,
Steve
If you are just doing testing, then sending a stream of UDP echos across, using something like a Digital Lightwave, is the easiest way of testing, provided your target system will take the traffic (this is a good way to crash most Windows boxes, for example).

PPP and HDLC both have very little in the way of overhead. The same can be said for C-bit framing. You should be getting something like 96% efficiency, which gets you to the 42.9 Mb/sec number. PPP has something like 7 bytes of overhead per frame. Various forms of DS3 framing are discussed here: http://www.dl.com/ResearchCenter/Training/t3/t3fund.pdf. ATM, for comparison, will usually max at around 36 Mb/sec for a DS3.

YMMV of course, and in actual operation, due to peaks and 5-minute sampling, you will see lower bit rates.

- Daniel Golding
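Daniel's figures can be sanity-checked with back-of-the-envelope arithmetic. The ~7 bytes of PPP-in-HDLC per-frame overhead comes from the thread; the 230-byte average packet size is an assumption chosen for illustration, not a measured value:

```python
# Rough efficiency check: DS3 payload rate after C-bit framing, minus
# PPP-in-HDLC per-frame overhead, for an assumed average packet size.

DS3_PAYLOAD = 44.210e6   # bits/sec available to layer 2 (figure from the thread)
PPP_OVERHEAD = 7         # flag, address, control, protocol, FCS (approx. bytes/frame)
AVG_PACKET = 230         # assumed average IP packet size in bytes (hypothetical)

efficiency = AVG_PACKET / (AVG_PACKET + PPP_OVERHEAD)
usable = DS3_PAYLOAD * efficiency

print(f"efficiency: {efficiency:.1%}")          # ~97%
print(f"usable rate: {usable / 1e6:.1f} Mbps")  # ~42.9
```

Smaller average packets make the fixed per-frame overhead proportionally larger, which is one plausible route to a "42.9" figure; actual efficiency depends entirely on the real traffic mix.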
Is there an IOS command you know of that does this? Similar to clock rate? Light rate, maybe?

(config-if)#light rate
            ^
% Invalid input detected at '^' marker.

Perhaps I need an IOS upgrade?

Steve

On Wed, 20 Feb 2002, Randy Bush wrote:
presumably this can be improved (a little) with increased interface buffers.. ? and possibly non fifo queuing eg custom queuing in favour of TCP rathen than UDP/ICMP etc which wont have the backoffs
and increasing the speed of light
I can't find that option in my management client. What MIB is that OID described in? Thanks.
i begin to suspect folk don't get it. so i'll try once more.

you can turn on [x]red. it will make a better choice of which packets to drop. but packets will still be dropped [0]. you can increase buffers blah blah blah. but twenty tomatoes will still not fit in a fifteen-tomato can.

randy

---
[0] - corollary: qos mechanisms decide which packets to drop. but isps are paid not to drop any packets.
BTW:

30 second input rate 13039000 bits/sec, 8055 packets/sec
30 second output rate 45531000 bits/sec, 10021 packets/sec

That's a PA-2T3+ on a FlexWAN in a 6509.
-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben -- -- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
OMG! Aren't we missing the point here? What about never running links above 60% or so to allow for bursts against the 5-minute average, and <shudder> upgrading or adding capacity when we get too little headroom?

And here we are, nickel-and-diming over a few Mbps near to 45M on a DS3...

... Or has the world changed so much that saturated pipes are The Way Things Are(TM) now?

jm
On Wed, 20 Feb 2002, Jon Mansey wrote:
OMG! Arent we missing the point here? What about never running links above 60% or so to allow for bursts against the 5 min average, and <shudder> upgrading or adding capacity when we get too little headroom.
And here we are, nickel and diming over a few MBps near to 45M on a DS3...
And why not? Obviously there is a reason why they're not upgrading, because there is plenty of traffic to fill up a second or faster circuit if packets are being dropped because of congestion. (Which has not been confirmed so far.)

There shouldn't be any problems pushing a DS3 well beyond 99% utilization, by the way. With an average packet size of 500 bytes and 98 packets in the output queue on average, 99% only introduces a 9 ms delay. The extra RTT will also slow TCP down, but not in such a brutal way as significant numbers of lost packets will. Just use a queue size of 500 or so, and enable (W)RED to throttle back TCP when there are large bursts.
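The 9 ms figure checks out: 98 packets of 500 bytes, drained at the DS3 payload rate. (98 is also exactly the M/M/1 mean queue length r²/(1−r) at r = 0.99, which is presumably where that number comes from.)

```python
# Verify the queueing-delay arithmetic in the paragraph above.

DS3_PAYLOAD = 44.21e6   # bits/sec
QUEUE_PKTS = 98
PKT_BYTES = 500

delay_ms = QUEUE_PKTS * PKT_BYTES * 8 / DS3_PAYLOAD * 1000
print(f"added delay: {delay_ms:.1f} ms")   # ~8.9 ms

# And the queue length itself, from M/M/1 at 99% utilization:
r = 0.99
print(f"mean queue length: {r * r / (1 - r):.0f} packets")  # 98
```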
One problem you have here is how you are getting the utilization statistics. Since you are looking at 5-minute averages, chances are real good that your instantaneous figure is probably at the full capacity of the DS-3. As you approach the maximum capacity and start dropping packets, your throughput on the line will bounce around near the 45-meg figure, but the "goodput" that the customer sees will drop dramatically. You will be sending retransmissions of the dropped packets, which then causes less bandwidth to be available for current traffic, which causes more drops, which causes more retransmissions and backoffs. My experience in the real world is that once you get over 40 Mbps on a DS-3 you need to look at upgrading.

Another thing I would question is where the Cisco is counting traffic. Since it is most likely looking at the real user traffic, there is likely some overhead to manage the DS-3 itself and some L2 protocol stuff also. This is definitely a factor when running ATM, because the router only counts the input/output traffic after the ATM overhead has been stripped away, unless you are looking at the controller and not the interface. Remember the interface is just that, an interface to the line itself. You may be able to get more accurate data by looking at the controller stats, because they operate at a lower level than the interface.

Steven Naslund
Network Engineering Manager
Hosting.com - Chicago
On Wed, 20 Feb 2002, Steve Naslund wrote:
There shouldn't be any problems pushing a DS3 well beyond 99% utilization, by the way. With an average packet size of 500 bytes and 98 packets in the output queue on average, 99% only introduces a 9 ms delay. The extra RTT will also slow TCP down, but not in such a brutal way as significant numbers of lost packets will. Just use a queue size of 500 or so, and enable (W)RED to throttle back TCP when there are large bursts.
One problem you have here is how you are getting the utilization statistics. Since you are looking at 5 minute averages, chances are real good that your instantaneous figure is probably at the full capacity of the DS-3.
Of course. 99% utilization means the circuit is busy 297 seconds of every 300 second period (not that Cisco's figures are computed like that).
As you approach the maximum capacity and start dropping packets, your throughput on the line will bounce around near the 45 megs figure but the "goodput" that the customer sees will drop dramatically. You will be sending retransmissions of the dropped packet, which then causes less bandwidth to be available for current traffic,
Yes, this is exactly what happens with the default 40 packet output queue. But if you increase the queue, excess packets won't be dropped, but buffered and transmitted after a slight delay. No problems with retransmissions and backoffs, and the circuit is used very efficiently. The only problem is that bulk transfers don't care about the increasing RTT and keep sending, while interactive traffic suffers as the queue size grows. So you need RED as well to keep the bulk transfers in check.
My experience in the real world is that once you get over 40 mbps on a DS-3 you need to look at upgrading.
Agree, but that's not always (immediately) possible.
Another thing I would question is where the Cisco is counting traffic. Since it is most likely looking at the real user traffic there is more likely some overhead to manage the DS-3 itself and some L2 protocol stuff also.
My experience with heavily congested <= 2 Mbps circuits is that the kbps figure the router shows gets _very_ close to the actual number of bits per second available to layer 2. So they must be doing this the right way, including PPP overhead and even bit stuffing, which is obviously something you wouldn't want to count in software.
This is definitely a factor when running ATM because the router only counts the input/output traffic after the ATM overhead has been stripped away
Looks that way. But only the ATM cell headers, or also the AAL5 and RFC 1577 (or what have you) overhead?

Iljitsch van Beijnum
Thus spake "Randy Bush" <randy@psg.com>
if you're using five min (or three min) samples, and you're seeing 70%, peaks are likely much higher and some users' packets are being dropped. by 80%, enough packets are being dropped that users are likely to see the effects of exponential backoff. things do not improve above 80%.
One should note that any utilization up to 59.8% is, on average, indistinguishable from an empty line. 70% = 1.6x delay, 80% = 3.2x, 90% = 8.1x, and 95% = 18.05x. Of course, once you figure in finite buffering, anything past 59.8% is likely to be dropping packets.

ObMath: plot r^2/(1-r). Where the derivative exceeds one (r ~ 0.598), delay increases faster than traffic rate. Assumes random arrival times.

S
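The multipliers quoted above (1.6x at 70%, 3.2x at 80%, and so on) match the M/M/1 mean queue length Lq = r²/(1−r) for utilization r, assuming random arrivals; a quick check:

```python
# Reproduce the delay multipliers from the post above using
# Lq = r^2 / (1 - r), the M/M/1 mean queue length for utilization r.

def lq(r: float) -> float:
    return r * r / (1 - r)

for r in (0.7, 0.8, 0.9, 0.95):
    print(f"{r:.0%} utilization -> {lq(r):.2f}x")
# 70% -> 1.63x, 80% -> 3.20x, 90% -> 8.10x, 95% -> 18.05x
```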
Try enabling W/RED and use load-interval 30.

Regards,
Neil.
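On classic IOS, those two suggestions, plus the deeper output queue discussed earlier in the thread, would look roughly like the sketch below. The interface name is hypothetical, and command availability varies by IOS version and platform, so verify on your own gear:

```
interface Serial1/0
 encapsulation ppp
 ! compute interface rates over 30 seconds instead of 5 minutes
 load-interval 30
 ! enable WRED on the output queue
 random-detect
 ! deepen the output queue from the 40-packet default
 hold-queue 500 out
```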
participants (11)

- Alex Rubenstein
- Daniel Golding
- Ehud Gavron
- Greg Maxwell
- Iljitsch van Beijnum
- Jon Mansey
- neil@DOMINO.ORG
- Randy Bush
- Stephen J. Wilcox
- Stephen Sprunk
- Steve Naslund