Does anybody have experience with a T1 circuit with PPP encapsulation getting only 1290 Kbps maximum throughput, looking at the "sh int" result from a Cisco router or MRTG?

This is the explanation our upstream provider gave us:

You have a 1.536Mbps port. However, there is the overhead from PPP and the translation overhead which takes place in all circuits. Judging by your settings that limit ends up somewhere between 1.3 and 1.4. This overhead would be the non-data portion of cells or frames for example. For example, you might have 1.3 Mbps of data which gets framing or cell information appended onto it before sending taking up additional bandwidth. It is to be expected in all circuits.

Thank you for your help.

Tony S. Hariman
http://www.tsh.or.id
Tel: +62(21)574-2488
tonyha@compuserve.com
"Tony" == Tony S Hariman <tonyh@noc.cbn.net.id> writes:
Tony> This is the explanation our upstream provider gave us:

This is complete BS. Here's what one of our (admittedly overloaded) routers is doing right now:

    5 minute input rate 1454000 bits/sec, 423 packets/sec

Probably, the hardware on one end or the other can't keep up.

--
Bruce Robertson, President/CEO                    +1-702-348-7299
Great Basin Internet Services, Inc.          fax: +1-702-348-9412
PGP Key fingerprint = 03 2D A8 4A 37 07 FB 53 EB C9 02 5C E5 10 66 F2
"Tony" == Tony S Hariman <tonyh@noc.cbn.net.id> writes:
Tony> You have a 1.536Mbps port. However, there is the overhead
Tony> from PPP and the translation overhead which takes place in
Tony> all circuits.

Further comments... translation overhead affects *latency*, not *throughput*. Overhead in the circuit may delay the leading edge of the bits, but it does not affect the bit rate. PPP framing overhead is close to negligible, and certainly does not account for the missing 246 Kb/s.

--
Bruce Robertson, President/CEO                    +1-702-348-7299
Great Basin Internet Services, Inc.          fax: +1-702-348-9412
PGP Key fingerprint = 03 2D A8 4A 37 07 FB 53 EB C9 02 5C E5 10 66 F2
At 10:44 AM 7/9/98 +0700, Tony S. Hariman wrote:
Does anybody have experience with a T1 circuit with PPP encapsulation getting only 1290 Kbps maximum throughput, looking at the "sh int" result from a Cisco router or MRTG?
This is the explanation our upstream provider gave us:
You have a 1.536Mbps port. However, there is the overhead from PPP and the translation overhead which takes place in all circuits. Judging by your settings that limit ends up somewhere between 1.3 and 1.4. This overhead would be the non-data portion of cells or frames for example. For example, you might have 1.3 Mbps of data which gets framing or cell information appended onto it before sending taking up additional bandwidth. It is to be expected in all circuits.
That is.... well, not correct. (Please insert favorite euphemism for "not very smart" in regards to this upstream.)

First of all, PPP overhead is not even close to 200 Kbps on a T1. Second of all, I believe MRTG includes the overhead in its graphs. Could someone please correct me on that if I'm wrong? And lastly, there are no "cells" in PPP. If you are doing ATM over that T1, you would have cells. But you say you are doing PPP, not ATM. Besides, with cells I think you'd get less than 1.3 Mbps - probably more like 1.1 or 1.0.

According to my MRTG graphs, the most I've ever gotten on a PPP encapsulated link is 1521.4 kb/s. I think that's a bit higher than your upstream told you is possible. :) This is a Cisco talking to a Bay router (hence the PPP encap as opposed to HDLC).

So tell your upstream he's full of it.
Tony S. Hariman
TTFN,
patrick

**************************************************************
Patrick W. Gilmore                     voice: +1-650-482-2840
Director of Operations, CCIE #2983       fax: +1-650-482-2844
PRIORI NETWORKS, INC.                    http://www.priori.net
"Tomorrow's Performance.... Today"
**************************************************************
One thing you might want to check is that your DSU/CSUs are configured to give you a full clear-channel T1 with ESF framing. In the past we've seen this happen with remote sites where the DSU/CSU wasn't configured for the full 1536K.

Hope this helps,

--
Jose A. Dominguez, Senior Network Engineer, Network Services Group
225 Office of University Computing, University of Oregon
1225 Kincaid St., Eugene, OR 97403-1212
Voice: (541) 346-1685 / Pager: (541) 683-0365 / Fax: (541) 346-4397
PGP: http://ns.uoregon.edu/~jad/pgp.key / Email: jad@ns.uoregon.edu
Hello everyone,

I have a question. Here in Hawaii we only have a few choices for an uplink. One is going through GTE reselling UUNet on a T1 Frame Relay (768k CIR), which takes us on a point-to-point T1 to a channelized DS3, then to the Frame Relay Cascade 9000 switch, then from that switch on a DS3 to another Cascade 9000 switch, and from there via DS3 Frame Relay to a Cisco 7513 where UUNet's POP is located; from there it goes out a fractional DS-3 to UUNet in San Francisco. We already tried this method, and after 2 months UUNet's NOC was unable to get the T1 working correctly during 8am-6pm; during the weekends and 6pm-8am it would do T1 speeds upstream, but only 0.06k/sec on file transfers downstream, and halting.

The second method is to connect with Oceanic Communications/Time Warner, which just installed a direct OC-3 SONET hub in our facilities this week. It uses digital fiber optics bypassing the ILEC's CO, with a full point-to-point T1 to the OC-3 hub connecting to Oceanic's Internet, which has three separate InternetMCI T1's going to San Francisco, Denver and Seattle, load-balanced, as well as a 10Mbps ethernet connection to the Hawaii Internet Exchange peering point for everyone except UUNet. Would we be getting full T1 speeds to everyone in the mainland US? I know ANS is also an option, but it's $5000 a month and doesn't allow reselling.

Cheers,
Vince - vince@MCESTATE.COM - vince@GAIANET.NET
Unix Networking Operations - FreeBSD: Real Unix for Free
GaiaNet Corporation - M & C Estate, Beverly Hills, California USA 90210
HongKong Stars/Gravis UltraSound Mailing Lists Admin
"Patrick W. Gilmore" writes:
That is.... well, not correct.
It certainly has incorrect elements, but their response is correct in general -- there exists overhead, in some cases substantial overhead.
PPP overhead is not even close to 200 Kbps on a T1.
Without information on the traffic being handled, this statement cannot be supported. I agree that it's doubtful.
I believe MRTG includes the overhead in its graphs.
MRTG has no "smarts" in this regard. Whether or not overhead is included is determined by the agent. The agent is responding according to a standard, private or public, which should, and usually does, specify whether overhead is to be included. Since the MIB being discussed isn't known to me, neither is the inclusion, or lack, of overhead known.
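For illustration, here is a minimal sketch of the counter-delta arithmetic an MRTG-style poller performs; the function name, the 300-second poll interval, and the sample values are assumptions, not MRTG's actual code. Note the code says nothing about framing: whether the octet counters include L2 overhead is entirely the agent's business, as described above.

    def rate_bps(prev_octets, cur_octets, interval_s, counter_bits=32):
        """Bits/sec from two successive ifInOctets samples."""
        delta = cur_octets - prev_octets
        if delta < 0:                       # the 32-bit counter wrapped
            delta += 2 ** counter_bits
        return delta * 8 / interval_s

    # Example: 48,375,000 octets counted over a 300-second poll interval
    print(rate_bps(1000000, 49375000, 300))   # 1290000.0 bits/sec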
there are no "cells" in PPP.
It's certainly possible that cells are being used, but their use should not impact the delivered capacity.
According to my MRTG graphs, the most I've ever gotten on a PPP encapsulated link is 1521.4 kb/s.
That's the sort of number we see as well on a general traffic mix.
mlm@ftel.net (Mark Milhollan) writes:
"Patrick W. Gilmore" writes:
PPP overhead is not even close to 200 Kbps on a T1.
Without information on the traffic being handled, this statement cannot be supported. I agree that it's doubtful.
Actually, we can probably do better than that. For HDLC, the worst case is all-ones user data, which expands 6/5, plus 7 bits per packet of shared flags. The PPP overhead, assuming no header compression, is four bytes of header plus two bytes of CRC per packet.

Now, if we assume 256-byte packets (not a bad assumption of average given IP traffic measurements I've seen) with worst-case data, the user portion will expand from 2048 to 2458 bits, and the HDLC/PPP overhead adds 55 bits. Thus, our efficiency is .815, or 284Kbps of overhead.

With random data, rather than worst-case data, things are much better. The expansion is 161/160 on random data, which means that our 2048 bits of data go as 2061 encoded. The efficiency is .968, or 49Kbps.
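The arithmetic above is easy to reproduce; here is a short script that does so, using exactly the assumptions stated (256-byte packets, 4 bytes of PPP header plus 2 bytes of CRC, ~7 bits of shared flag per packet):

    T1_BPS = 1536000
    DATA_BITS = 256 * 8          # 2048 bits of user data per packet
    OVERHEAD_BITS = 6 * 8 + 7    # PPP header/CRC plus shared flag = 55 bits

    def overhead_kbps(expansion):
        """HDLC bit-stuffing expansion ratio -> efficiency and wasted kbps."""
        wire_bits = DATA_BITS * expansion + OVERHEAD_BITS
        efficiency = DATA_BITS / wire_bits
        return efficiency, (1 - efficiency) * T1_BPS / 1000

    print(overhead_kbps(6 / 5))      # worst case (all ones): ~0.815, ~284 kbps
    print(overhead_kbps(161 / 160))  # random data:           ~0.968, ~49 kbps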
there are no "cells" in PPP.
It's certainly possible that cells are being used, but their use should not impact the delivered capacity.
(!) The cell tax is about 10% -- the SAR expands user data from 48 bytes to 53 bytes, plus an additional amount of overhead for internal fragmentation on the final cell (AAL-5), plus overhead for whatever encapsulation mode is being used. On a T1, you'd be lucky to get away with only 154Kbps wasted in cell overhead, let alone any of the L2 stuff.

--
James Carlson, Consulting S/W Engineer  <carlson@ironbridgenetworks.com>
IronBridge Networks / 55 Hayden Avenue   71.246W  Vox: +1 781 402 8032
Lexington MA 02421-7996 / USA            42.423N  Fax: +1 781 402 8092
"PPP Design and Debugging" --- http://people.ne.mediaone.net/carlson/ppp
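A back-of-the-envelope check of the cell-tax figures in the message above; the 48/53 ratio and the 8-byte AAL-5 trailer are standard ATM, while the 256-byte PDU size and the absence of LLC/SNAP encapsulation are illustrative assumptions:

    import math

    T1_BPS = 1536000

    # Raw SAR tax: every 53-byte cell carries only 48 bytes of payload.
    print((1 - 48 / 53) * T1_BPS / 1000)        # ~145 kbps gone before any L2

    # AAL-5 adds an 8-byte trailer and pads the PDU out to a whole cell.
    def aal5_wire_bytes(pdu_bytes):
        cells = math.ceil((pdu_bytes + 8) / 48)  # trailer, then cell padding
        return cells * 53

    pdu = 256
    wire = aal5_wire_bytes(pdu)                  # 6 cells = 318 bytes on the wire
    print((1 - pdu / wire) * T1_BPS / 1000)      # ~300 kbps of a T1 wasted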
On Thu, 9 Jul 1998, Tony S. Hariman wrote:
Does anybody have experience with a T1 circuit with PPP encapsulation getting only 1290 Kbps maximum throughput, looking at the "sh int" result from a Cisco router or MRTG?
Oh, around 1536 :-)
This is the explanation our upstream provider gave us:
You have a 1.536Mbps port. However, there is the overhead from PPP and the translation overhead which takes place in all circuits. Judging by your settings that limit ends up somewhere between 1.3 and 1.4. This overhead would be the non-data portion of cells or frames for example. For example, you might have 1.3 Mbps of data which gets framing or cell information appended onto it before sending taking up additional bandwidth. It is to be expected in all circuits.
Hehe, find a new ISP. There is some overhead with PPP, but it is nowhere close to that amount. Most likely your ISP does not have the upstream capacity to deliver all of your bandwidth.
Thank you for your help.
No problem.
Nathan Stratton    Telecom & ISP Consulting
www.robotics.net   nathan@robotics.net
Tony S. Hariman
http://www.tsh.or.id
Tel: +62(21)574-2488
tonyha@compuserve.com
In message <19980709034534.AAA1627@wolfpack>, Tony S. Hariman writes:
Does anybody have experience with a T1 circuit with PPP encapsulation getting only 1290 Kbps maximum throughput, looking at the "sh int" result from a Cisco router or MRTG?
A T1 is capable of achieving 1536 kbps maximum (24 x 64 kbps).

Typically, the output shown both via SNMP interface counters and from a Cisco "show int" includes all of the associated PPP framing.

Bear in mind that the output of a "show int" by default shows a 5-minute exponentially decayed average of the throughput. This will 'smooth' out instantaneous traffic bursts and troughs. The actual traffic behind the reported 'kbps' figure is most likely bursting much higher than this.
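To see how much that smoothing hides, here is a sketch in the spirit of the decayed average described above; the 5-second sample period and the exact decay formula are assumptions, not Cisco's actual implementation:

    import math

    TAU = 300.0     # 5-minute time constant ("load-interval 300" default)
    DT = 5.0        # assumed sampling period in seconds
    K = math.exp(-DT / TAU)

    def update(avg_bps, instant_bps):
        """Fold one instantaneous sample into the decayed average."""
        return avg_bps * K + instant_bps * (1 - K)

    # A 30-second burst to full T1 rate barely moves the reported average:
    avg = 1290000.0
    for _ in range(6):              # six 5-second samples at line rate
        avg = update(avg, 1536000.0)
    print(int(avg))                 # ~1313000 -- the burst is smoothed away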
This is the explanation our upstream provider gave us:
You have a 1.536Mbps port. However, there is the overhead from PPP and the translation overhead which takes place in all circuits. Judging by your settings that limit ends up somewhere between 1.3 and 1.4. This overhead would be the non-data portion of cells or frames for example. For example, you might have 1.3 Mbps of data which gets framing or cell information appended onto it before sending taking up additional bandwidth. It is to be expected in all circuits.
PPP doesn't have much of the overhead of IP-in-ATM-cells (fixed cell size, very-badly-chosen prime-number cell size, ...) that the discussion given to you by your upstream is talking about.

What kind of traffic are you sending over the link? Perhaps there isn't enough traffic, or the traffic isn't of the variety to actually fill the link capacity. If it's mostly TCP traffic, don't expect it to fill the whole pipe all the time.

cheers,
lincoln.
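A quick sanity check of the TCP point above: a single flow cannot move more than one window per round trip. The window sizes and RTT below are illustrative assumptions, not measurements of Tony's link:

    def tcp_ceiling_kbps(window_bytes, rtt_s):
        """Upper bound on one TCP flow's throughput: window / RTT."""
        return window_bytes * 8 / rtt_s / 1000

    print(tcp_ceiling_kbps(16 * 1024, 0.300))   # ~437 kbps per flow
    print(tcp_ceiling_kbps(64 * 1024, 0.300))   # ~1748 kbps -- enough for a T1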
On Thu, Jul 09, 1998 at 03:37:49PM +1000, Lincoln Dale wrote:
In message <19980709034534.AAA1627@wolfpack>, Tony S. Hariman writes:
Does anybody have experience with a T1 circuit with PPP encapsulation getting only 1290 Kbps maximum throughput, looking at the "sh int" result from a Cisco router or MRTG?
A T1 is capable of achieving 1536 kbps maximum (24 x 64 kbps).
While this doesn't seem to apply to Tony's case, I wouldn't make blanket statements like that. If the T1 is provisioned ESF, yes, you can get 1536 kbps, but there are places where you can still only get SF/D4 framing.

-dorian
:: Dorian Kim writes ::
While this doesn't seem to apply to Tony's case, I wouldn't make blanket statements like that. If the T1 is provisioned ESF, yes, you can get 1536 kbps, but there are places where you can still only get SF/D4 framing.
Why would SF/D4 framing reduce the available bandwidth? In some cases, if you can't get B8ZS line coding and have to use AMI, you might have to drop to 24x56=1344kbps (and use one bit on each DS0 to maintain 1's density), but even then, if your routers or DSUs can invert the data, you can still go 24x64=1536kbps. (HDLC guarantees zero density... so if you invert it, you get guaranteed 1's density.)

- Brett (brettf@netcom.com)
------------------------------------------------------------------------------
... Coming soon to a                    | Brett Frankenberger
.sig near you ... a Humorous Quote ...  | brettf@netcom.com
From: Dorian Kim <dorian@blackrose.org>
A T1 is capable of achieving 1536 kbps maximum (24 x 64 kbps).
While this doesn't seem to apply to Tony's case, I wouldn't make blanket statements like that. If the T1 is provisioned ESF, yes, you can get 1536 kbps, but there are places where you can still only get SF/D4 framing.

Precisely. The most likely explanation is that the T1 is actually D4 framed, which gives it a throughput of (24 * 56kbps) == 1344k. Or the T1 may be properly provisioned but the CSU/DSUs incorrectly configured for the old-fashioned framing (I think I saw this working at one point, can't remember for sure). The PPP is hardly eating anything at all in the grand scheme of things.

---Rob
From: "Robert E. Seastrom" <rs@bifrost.seastrom.com> Precisely. The most likely explanation is that the T1 is actually D4 framed, uh, my wrong... it's the AMI not the D4 that causes the big hit. of course, as a matter of course, D4/AMI and ESF/B8ZS go together and you never see combinations like D4/B8ZS... which, in answer to the fellow who asked, is why you can't just invert the data on the HDLC and run it down the line and get your 12% back... HDLC is layer 2, whilst framing is layer 1... ---Rob
:: Robert E. Seastrom writes ::
uh, my wrong... it's the AMI, not the D4, that causes the big hit. Of course, as a matter of course, D4/AMI and ESF/B8ZS go together and you never see combinations like D4/B8ZS...
My employer runs a few hundred T1s at ESF/AMI, because we like ESF better and can't run B8ZS because our old repeaters don't like it. (The repeaters don't even understand framing, so I can run D4, ESF, or anything else I want.) I also know of cases in carriers where D4/B8ZS is run because it's their standard to do everything internally at B8ZS (line coding doesn't traverse higher-order spans, so if the signal is going inside a DS3, for example, you can have B8ZS on one end and AMI on the other), but the customer ordered D4. (Usually D4 with AMI, but if the customer wants D4 with B8ZS, they can have that also. Telcos will also sell ESF/AMI.)
which, in answer to the fellow who asked, is why you can't just invert the data on the HDLC and run it down the line and get your 12% back... HDLC is layer 2, whilst framing is layer 1...
Know what one of my pet peeves is? People telling me that I can't do things that I have been doing in production for years. Buy yourself a couple of DSU/CSUs, strap them for AMI, 24x64, data inverted, connect up a router, and try to make 'em lose sync.

If you're running AMI, the data you present to the T1 has to maintain ones density somehow. It so happens that if you run HDLC, HDLC guarantees 0's density. So if you invert the HDLC, you get guaranteed 1's density (considerably higher density than required, even). So, if you present inverted HDLC to an AMI T1, you will have the required 1's density. The differing layers are irrelevant technobabble, but if you insist: inverted data with AMI at layer 1 presents an interface to layer 2 that requires a certain 0's density. If layer 2 is HDLC, that 0's density will be met.

It was pointed out in response to my original posting that running inverted HDLC over D4 at 24x64 can cause spurious yellow alarms, because anytime you get a stream of flags, there's a 75% chance that the result will be 0's in bit 2 for all DS0s for long enough to trigger a yellow. (24x56 avoids this, regardless of whether the HDLC is inverted or not, because by having only 7 bits available per channel, the flag pattern is effectively shifted one bit for each channel. It's still possible to send a string of data that will trigger a false yellow, but it's much, much less likely.) This isn't an issue over ESF/AMI, of course, since the yellow alarm code isn't sent as part of the normal payload. And in some cases of D4/AMI, you might not care about the yellow.

- Brett (brettf@netcom.com)
------------------------------------------------------------------------------
... Coming soon to a                    | Brett Frankenberger
.sig near you ... a Humorous Quote ...  | brettf@netcom.com
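A toy demonstration of the inverted-HDLC argument above: HDLC bit stuffing bounds runs of 1s at five (six inside a flag), so the inverted stream can never show more than six consecutive 0s, well inside the 15-zero limit an AMI span tolerates. The helper functions are a simplified sketch of the real framers:

    def hdlc_stuff(bits):
        """Insert a 0 after every run of five 1s (HDLC bit stuffing)."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b else 0
            if run == 5:
                out.append(0)
                run = 0
        return out

    def max_zero_run(bits):
        run = worst = 0
        for b in bits:
            run = run + 1 if b == 0 else 0
            worst = max(worst, run)
        return worst

    payload = [1] * 512                       # worst case: all-ones data
    flag = [0, 1, 1, 1, 1, 1, 1, 0]
    frame = flag + hdlc_stuff(payload) + flag
    inverted = [1 - b for b in frame]
    assert max_zero_run(inverted) <= 6        # ones density is guaranteed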
It also depends on whether the line encoding is AMI or B8ZS. If it is AMI, you'll start out with 1340k, not 1536k, if I'm not mistaken. Then you can factor in any type of Layer 2 or Layer 3 encapsulation overhead.

- paul

At 10:44 AM 7/9/98 +0700, Tony S. Hariman wrote:
You have a 1.536Mbps port. However, there is the overhead from PPP and the translation overhead which takes place in all circuits. Judging by your settings that limit ends up somewhere between 1.3 and 1.4. This overhead would be the non-data portion of cells or frames for example. For example, you might have 1.3 Mbps of data which gets framing or cell information appended onto it before sending taking up additional bandwidth. It is to be expected in all circuits.
ferguson@cisco.com (Paul Ferguson) writes:
It also depends on whether the line encoding is AMI or B8ZS. If it is AMI, you'll start out with 1340k, not 1536k, if I'm not mistaken.
AMI with 24 channels is 1344000bps (56000 x 24).

(A continuing crisis with many LECs is that data links are set up as AMI/D4 when there's usually no reason not to run B8ZS/ESF. When our drop was put in, the technician installed the B8ZS/ESF line, then proceeded to configure the CSU/DSU for AMI/D4. Sigh.)

--
James Carlson, Consulting S/W Engineer  <carlson@ironbridgenetworks.com>
IronBridge Networks / 55 Hayden Avenue   71.246W  Vox: +1 781 402 8032
Lexington MA 02421-7996 / USA            42.423N  Fax: +1 781 402 8092
"PPP Design and Debugging" --- http://people.ne.mediaone.net/carlson/ppp
participants (13)
- Brett Frankenberger
- Bruce Robertson
- Dorian Kim
- James Carlson
- Jose Dominguez
- Lincoln Dale
- Mark Milhollan
- Nathan Stratton
- Patrick W. Gilmore
- Paul Ferguson
- Robert E. Seastrom
- tonyh@noc.cbn.net.id
- Vincent Poy