Denys Fedoryshchenko wrote on 12/20/2017 11:38 AM:
On 2017-12-20 19:16, Blake Hudson wrote:
Denys Fedoryshchenko wrote on 12/20/2017 8:55 AM:
The national operator here asks customers to distribute bandwidth equally across all IPs. For example, I have a /22, and in it a CDN node from one of the big content providers; this CDN uses only 3 IPs for ingress bandwidth, so the bandwidth distribution is not equal between IPs and I am not able to use all of my bandwidth.
And to me it sounds like a faulty aggregation + shaping setup. For example, I heard once that if I configure policing on some models of Cisco switch on an aggregated interface with 4 member links, it will install a 25% policer on each member, and if the hashing is done by dst IP only I will face exactly this issue. But that was an old and cheap model, as I recall.
Has anybody in the world faced such requirements? Can such requirements be considered legit?
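[Editor's note: a minimal sketch of the failure mode described above, assuming, purely for illustration, a 4-member aggregate, an aggregate policer split 25% per member link, and dst-IP-only hashing. The figures and the CRC32 stand-in hash are made up and not a statement about any particular Cisco model.]

# Hypothetical reconstruction of the "25% policer per member link" behaviour
# described above; numbers and the hash are assumptions for illustration.
import zlib

AGGREGATE_POLICER = 4_000          # Mbit/s contracted on the port-channel (assumed)
N_MEMBERS = 4
per_member_policer = AGGREGATE_POLICER / N_MEMBERS   # 25% installed on each member link

def member_for(dst_ip: str) -> int:
    # dst-IP-only hashing: all traffic to one destination uses one member link
    return zlib.crc32(dst_ip.encode()) % N_MEMBERS

dst = "192.0.2.10"                 # e.g. a single busy CDN ingress address
offered = 3_000                    # Mbit/s of traffic heading to that one IP

delivered = min(offered, per_member_policer)
print(f"traffic to {dst} is pinned to member {member_for(dst)}; "
      f"delivered {delivered:.0f} of {offered} Mbit/s "
      f"({100 * delivered / AGGREGATE_POLICER:.0f}% of the aggregate policer)")

Under these assumptions, everything sent towards a single destination is pinned to one member link and capped at a quarter of the contracted rate, however idle the other members are.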
Not being able to use all of your bandwidth is a common issue if you are provided a bonded connection (aka Link Aggregation Group). For example, you are provided a 4Gbps service over 4x1Gbps ethernet links. Ethernet traffic is not typically balanced across links per frame, because this could lead to out of order delivery or jitter, especially in cases where the links have different physical characteristics. Instead, a hashing algorithm is typically used to distribute traffic based on flows. This results in each flow having consistent packet order and latency characteristics, but does force a flow over a single link, resulting in the flow being limited to the performance of that link. In this context, flows can be based on src/dst MAC address, IP address, or TCP/UDP port information, depending on the traffic type (some IP traffic is not TCP/UDP and won't have a port) and equipment type (layer 3 devices typically hash by layer 3 or 4 info).
Your operator may be able to choose an alternative hashing algorithm that could work better for you (hashing based on layer 4 information instead of layer 3 or 2, for example). This is highly dependent on your provider's equipment and configuration - it may be a global option on the equipment or may not be an option at all. Bottom line, if you expected 4Gbps performance for each host on your network, you're unlikely to get it on service delivered through 4x 1Gbps links. 10Gbps+ links between you and your ISP's peers would better serve those needs (any 1Gbps bonds in the path between you and your provider's edge are likely to exhibit the same characteristics).
--Blake
No bonding on my side; usually it is a dedicated 1G/10G/etc. link. Also, I simulated this traffic for "hashability", and any layer-4-aware hashing on Cisco/Juniper gave a perfectly balanced bandwidth distribution. In my tests I can see that they are clearly balancing by dst IP only.
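[Editor's note: a rough sketch of the kind of "hashability" simulation described above, with synthetic flows, documentation addresses, and CRC32 standing in for whatever hash the real hardware uses (all assumptions). With only a few distinct destination IPs carrying the bytes, a dst-IP-only key concentrates traffic onto at most that many buckets, while a layer-4-aware key spreads the individual flows.]

# Illustration only: CRC32 stands in for the vendor hash, addresses are from
# documentation prefixes, flow sizes are synthetic.
import zlib
from collections import Counter

N_BUCKETS = 4                       # member links (or shaper classes)

def bucket(key_fields) -> int:
    return zlib.crc32(repr(key_fields).encode()) % N_BUCKETS

# Synthetic traffic: 300 TCP flows but only 3 distinct destination IPs
# (roughly the CDN-ingress situation described earlier in the thread).
flows = [("203.0.113.%d" % (i % 200), "192.0.2.%d" % (i % 3),
          1024 + i, 443, 1_000_000)               # (src, dst, sport, dport, bytes)
         for i in range(300)]

def share_per_bucket(key):
    counts = Counter()
    for src, dst, sport, dport, size in flows:
        counts[bucket(key(src, dst, sport, dport))] += size
    total = sum(counts.values())
    return {b: round(100 * v / total, 1) for b, v in sorted(counts.items())}

print("dst-ip only :", share_per_bucket(lambda s, d, sp, dp: (d,)))
print("layer-4 key :", share_per_bucket(lambda s, d, sp, dp: (s, d, sp, dp)))
# The dst-IP-only key uses at most 3 of the 4 buckets, so one bucket stays idle;
# the layer-4-aware key spreads the 300 flows roughly evenly across all 4.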
Are you claiming that your bandwidth is being equally divided 1024 ways (you mentioned a /22) or just that each host (IP) is not receiving the full bandwidth? What is the bandwidth ordered and what is the bandwidth you're seeing per host(IP)?
<<skipped>>
Are you claiming that your bandwidth is being equally divided 1024 ways (you mentioned a /22) or just that each host (IP) is not receiving the full bandwidth? What is the bandwidth ordered and what is the bandwidth you're seeing per host(IP)?
Some facts from today.

Ordered capacity: 3.3Gbit
Received capacity: ~2.1Gbit if they apply the bandwidth limit

In this example they removed the limit, but you can get an approximate picture of how the bandwidth is distributed (top 30 IPs):

[x.x.x.14 ] 22433902b 36435p avg 615b 0.81%b 1.10%p 26596 Kbit/s
[x.x.x.13 ] 22715108b 34887p avg 651b 0.82%b 1.06%p 26929 Kbit/s
[x.x.x.10 ] 22741911b 31719p avg 716b 0.83%b 0.96%p 26961 Kbit/s
[x.x.x.11 ] 23874482b 34157p avg 698b 0.87%b 1.04%p 28304 Kbit/s
[x.x.x.15 ] 24393258b 29622p avg 823b 0.89%b 0.90%p 28919 Kbit/s
[x.x.x.12 ] 24715746b 33880p avg 729b 0.90%b 1.03%p 29301 Kbit/s
[x.x.x.9 ] 25720774b 36000p avg 714b 0.93%b 1.09%p 30492 Kbit/s
[x.x.x.8 ] 29599218b 40647p avg 728b 1.07%b 1.23%p 35090 Kbit/s
[y.y.y.122 ] 52015361b 52743p avg 986b 1.89%b 1.60%p 61666 Kbit/s
[y.y.y.116 ] 52161788b 55435p avg 940b 1.89%b 1.68%p 61839 Kbit/s
[y.y.y.114 ] 55409677b 56945p avg 973b 2.01%b 1.73%p 65690 Kbit/s
[y.y.y.120 ] 59971853b 59782p avg 1003b 2.18%b 1.81%p 71098 Kbit/s
[y.y.y.126 ] 60821991b 65184p avg 933b 2.21%b 1.98%p 72106 Kbit/s
[y.y.y.117 ] 61811624b 58374p avg 1058b 2.24%b 1.77%p 73279 Kbit/s
[y.y.y.113 ] 62492070b 63001p avg 991b 2.27%b 1.91%p 74086 Kbit/s
[y.y.y.119 ] 63128246b 63545p avg 993b 2.29%b 1.93%p 74840 Kbit/s
[y.y.y.121 ] 64392950b 66418p avg 969b 2.34%b 2.01%p 76340 Kbit/s
[y.y.y.115 ] 65723751b 64100p avg 1025b 2.39%b 1.94%p 77917 Kbit/s
[y.y.y.124 ] 66646572b 62637p avg 1064b 2.42%b 1.90%p 79011 Kbit/s
[y.y.y.123 ] 70332553b 68284p avg 1030b 2.55%b 2.07%p 83381 Kbit/s
[y.y.y.125 ] 70545386b 67441p avg 1046b 2.56%b 2.04%p 83634 Kbit/s
[y.y.y.118 ] 71393238b 69490p avg 1027b 2.59%b 2.11%p 84639 Kbit/s
[x.x.x.6 ] 123028709b 137530p avg 894b 4.47%b 4.17%p 145855 Kbit/s
[x.x.x.4 ] 124816100b 137221p avg 909b 4.53%b 4.16%p 147974 Kbit/s
[x.x.x.7 ] 126130939b 143443p avg 879b 4.58%b 4.35%p 149532 Kbit/s
[x.x.x.3 ] 128316371b 139360p avg 920b 4.66%b 4.22%p 152123 Kbit/s
[x.x.x.0 ] 132445418b 143143p avg 925b 4.81%b 4.34%p 157018 Kbit/s
[x.x.x.1 ] 133197094b 143713p avg 926b 4.84%b 4.35%p 157910 Kbit/s
[x.x.x.2 ] 135346483b 146510p avg 923b 4.91%b 4.44%p 160458 Kbit/s
[x.x.x.5 ] 135366769b 147766p avg 916b 4.92%b 4.48%p 160482 Kbit/s

Average packet size 834 (with ethernet header, max avg sz 1514)
Time 6748, total bytes 2753819139, total speed 3188235 Kbit/s

As you can see, the most a single IP takes is 4.48% of the bandwidth. Also, I cannot waste IPv4 on larger pools just because of some badly flawed equipment/configuration.
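[Editor's note: the per-IP columns above are internally consistent if "Time 6748" is in milliseconds (an assumption): Kbit/s works out to bytes * 8 / time_ms and %b to bytes / total bytes. A quick check against two rows, with values copied from the table:]

# Sanity check of the per-IP table above, assuming Time is in milliseconds.
total_bytes = 2753819139
time_ms = 6748

rows = {                     # bytes per destination IP, copied from the table
    "x.x.x.14": 22433902,
    "x.x.x.5":  135366769,
}

for ip, b in rows.items():
    kbit_s = b * 8 / time_ms             # bits per millisecond == Kbit/s
    share = 100 * b / total_bytes
    print(f"{ip}: {kbit_s:.0f} Kbit/s, {share:.2f}% of bytes")

# -> x.x.x.14: 26596 Kbit/s, 0.81% of bytes   (table: 26596 Kbit/s, 0.81%b)
# -> x.x.x.5: 160482 Kbit/s, 4.92% of bytes   (table: 160482 Kbit/s, 4.92%b)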
On 20 December 2017 at 20:34, Denys Fedoryshchenko <denys@visp.net.lb> wrote:
As you can see, the most a single IP takes is 4.48% of the bandwidth. Also, I cannot waste IPv4 on larger pools just because of some badly flawed equipment/configuration.
This indeed sounds unacceptable. I would suspect intentional per-prefix policing, intended for broadband subscriber interfaces.

--
++ytti
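[Editor's note: a rough sketch of what such policing could look like, assuming, hypothetically, one token bucket installed per destination address (one reading of "per-prefix"): each address is capped at its own configured rate no matter how much of the aggregate sits idle, which would cap what any single IP can receive regardless of demand.]

# Illustrative per-destination-IP token-bucket policer (assumed behaviour,
# not a statement about the operator's actual configuration).
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # refill rate in bits/s
        self.capacity = burst_bits    # bucket depth in bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False                  # packet dropped/remarked by the policer

# One bucket per destination IP: each address is capped independently, so a
# CDN using only 3 ingress IPs can never pull more than 3 x per_ip_rate even
# if the rest of the aggregate is idle.
per_ip_rate = 150_000_000             # ~150 Mbit/s per IP (hypothetical figure)
policers = {}                         # dst_ip -> TokenBucket

def police(dst_ip: str, packet_bits: int) -> bool:
    bucket = policers.setdefault(dst_ip, TokenBucket(per_ip_rate, per_ip_rate // 10))
    return bucket.allow(packet_bits)

# e.g. a 1500-byte packet towards one CDN ingress address:
print(police("192.0.2.10", 1500 * 8))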
participants (3)
- Blake Hudson
- Denys Fedoryshchenko
- Saku Ytti