PSA to people running transit networks:

a) During congestion you are not buffering just the excess traffic; you will delay every packet in the class for the duration of the congestion.

b) Adding buffering does not increase RX rate during persistent congestion, it only increases delay.

c) Occasional persistent congestion is normal, because of how we've modeled the economics of transit.

d) A typical device a transit network operates can add >100 ms of latency on a single link, but you don't want more than 5 ms of latency on a backbone link.

Fix for IOS-XR:

  class BE
    bandwidth percent 50
    queue-limit 5 ms

Fix for Junos:

  BE {
      transmit-rate percent 50;
      buffer-size temporal 5k;
  }

(Junos temporal buffers are specified in microseconds, so 5k equals 5 ms.)

The actual byte value programmed is interface_rate * percent_share * time. If your class is by design out-of-contract, your actual rate is higher, which means the programmed buffer byte value results in a smaller queueing delay. The configured byte value will only result in the configured queueing delay when actual rate == g-rate (see the worked example in the PS below).

The buffers are not large to facilitate buffering a single queue for 100 ms; the buffers are large to support configurations with a large number of logical interfaces, each with a large number of queues. If you are configuring just a few queues, the assumption is that you are dimensioning your buffer sizes yourself.

Hopefully this motivates some networks to limit buffer sizes. Thanks!
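PS. A back-of-the-envelope sketch of the byte-value arithmetic above, as a minimal Python example. The 10GE rate, 50% share, and 5 ms figures are illustrative assumptions, not from any particular platform:

  # Buffer bytes programmed: interface_rate * percent_share * time.
  # Assumed numbers: 10GE interface, BE class at 50%, 5 ms temporal buffer.
  interface_rate_bps = 10e9   # assumed 10GE interface
  percent_share = 0.50        # 'bandwidth percent 50' / 'transmit-rate percent 50'
  temporal_s = 0.005          # 'queue-limit 5 ms' / 'buffer-size temporal 5k'

  g_rate_bps = interface_rate_bps * percent_share
  buffer_bytes = g_rate_bps / 8 * temporal_s
  print(buffer_bytes)                                # 3125000.0 bytes programmed

  # Queueing delay if the class drains at exactly its g-rate:
  print(buffer_bytes * 8 / g_rate_bps * 1e3)         # 5.0 ms, as configured

  # An out-of-contract class drains at up to full interface rate,
  # so the same byte value empties faster -> smaller delay:
  print(buffer_bytes * 8 / interface_rate_bps * 1e3) # 2.5 ms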
On Tue, Mar 12, 2019 at 9:32 AM Phil Lavin <phil.lavin@cloudcall.com> wrote:

> We’re seeing consistent +100ms latency increases to Verizon customers in Pennsylvania during peak business hours for the past couple of weeks.
> If someone is able to assist, could they please contact me off-list?
--
  ++ytti