On 5 Nov 2015 21:50, "Eric Dugas" <edugas@unknowndevice.ca> wrote:
Hello NANOG,
We've been dealing with an interesting throughput issue with one of our carriers. Specs and topology:
100Mbps EPL, fiber from a national carrier. We do MPLS to the CPE, providing a VRF circuit to our customer back to our data center through our MPLS network. The circuit has about 75 ms of latency since the path is around 5000 km.
Linux test machine in customer's VRF <-> SRX100 <-> Carrier CPE (Cisco 2960G) <-> Carrier's MPLS network <-> NNI - MX80 <-> Our MPLS network <-> Terminating edge - MX80 <-> Distribution switch - EX3300 <-> Linux test machine in customer's VRF
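For context on why a single TCP stream behaves so differently from UDP here: at 100 Mbps and 75 ms RTT the bandwidth-delay product is close to 1 MB, so one TCP flow needs roughly that much window (and end-host buffer) to fill the pipe. A minimal sketch of the arithmetic, using the circuit numbers above as assumptions:

    # Bandwidth-delay product for the circuit described above
    # (illustrative numbers: 100 Mbps line rate, 75 ms RTT).
    rate_bps = 100 * 10**6      # 100 Mbps EPL
    rtt_s = 0.075               # 75 ms round-trip time

    bdp_bytes = rate_bps / 8 * rtt_s
    print(f"BDP: {bdp_bytes / 1024:.0f} KiB")   # ~916 KiB

    # A window smaller than the BDP caps throughput at window / RTT,
    # regardless of the line rate.
    window = 64 * 1024          # e.g. a 64 KiB window
    print(f"Max throughput with 64 KiB window: "
          f"{window * 8 / rtt_s / 10**6:.1f} Mbps")  # ~7 Mbps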
We can fill the link with UDP traffic using iperf, but with TCP we reach 80-90% of line rate, then throughput drops to 50% and slowly climbs back up to 90%.
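That drop-and-slow-climb pattern is consistent with ordinary TCP congestion avoidance after a loss event (for example, drops at a policer): the congestion window is halved, then grows by roughly one MSS per RTT, and at 75 ms per round trip that recovery is slow. A rough sketch of the timescale, assuming Reno-style behaviour and a 1460-byte MSS (both assumptions, not measurements from this circuit):

    # Rough estimate of Reno-style recovery time after a loss event,
    # assuming a 1460-byte MSS and the BDP computed above.
    mss = 1460                    # bytes (assumed MSS)
    rtt_s = 0.075                 # 75 ms
    bdp_segments = 937_500 / mss  # window needed to fill the pipe, ~642 MSS

    # After a loss, cwnd is halved, then grows ~1 MSS per RTT in
    # congestion avoidance, so recovery takes ~W/2 round trips.
    rtts_to_recover = bdp_segments / 2
    print(f"~{rtts_to_recover * rtt_s:.0f} s to climb back to line rate")  # ~24 s

A ~24-second climb back to full rate would look exactly like "drops to 50% and slowly increases" on an iperf run.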
Has anyone dealt with this kind of problem in the past? We've tested by forcing ports to 100-FD at both ends, policed the circuit on our side, called the carrier, and escalated to L2/L3 support. They also tried policing the circuit but, as far as I know, didn't modify anything else. I've asked our support to have them look for underrun errors on their Cisco switch, and they can see some. They're pretty much in the same boat as us and aren't sure where to look.
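One thing worth checking with the carrier is the policer's burst size. A common rule of thumb (an assumption here, not anything confirmed about this carrier's config) is to allow a committed burst of at least rate x RTT, often rate x RTT x 1.5; a smaller burst will clip the bursts a high-BDP TCP stream naturally produces even when the average rate is under contract. A sketch of that sizing:

    # Rule-of-thumb committed burst (Bc) sizing for a policer on a
    # high-latency circuit; the 1.5x factor is a common heuristic,
    # not a vendor recommendation.
    rate_bps = 100 * 10**6
    rtt_s = 0.075

    bc_bytes = rate_bps / 8 * rtt_s * 1.5
    print(f"Suggested Bc: ~{bc_bytes / 1024:.0f} KiB")  # ~1373 KiB
    # Default policer bursts of a few tens of KiB would clip TCP
    # at 75 ms RTT long before the average rate reaches 100 Mbps.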
Thanks,
Eric
Hi Eric,

Sounds like a TCP problem off the top of my head. However, just throwing it out there: we use a mix of wholesale access-circuit providers and carriers for locations we haven't PoP'ed, and we are an LLU provider (a CLEC in US terms). For such issues I have been developing an app to test below TCP/UDP, for pseudowire testing etc.: https://github.com/jwbensley/Etherate

It may or may not shed some light when you have an underlying problem (although yours sounds TCP-related).

Cheers,
James