On Sun, Aug 7, 2022 at 11:24 PM Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
> sronan@ronan-online.com wrote:
>> There are MANY real-world use cases which require high throughput at a 64-byte packet size.
> Certainly, there were imaginary-world use cases which required guaranteeing throughput as high as 64 kbps with a 48 B payload size, for which the 20 (40) B IP header was obviously painful, so a 5 B header was used. At that time, poor fair queuing was assumed, which requires a small packet size for short delay.
> But as fair queuing does not scale at all, they disappeared long ago.
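Just to put rough numbers on the old tradeoff being waved away there (my arithmetic, not from the quoted mail):

  IPv4 header tax on a 48 B payload:  20 / (20 + 48) ≈ 29%   (IPv6: 40 / 88 ≈ 45%)
  ATM 5 B cell header tax:             5 / (5 + 48)  ≈ 9.4%
  Serializing 48 B at 64 kbit/s:      48 * 8 / 64000  = 6 ms per cell

which is why tiny cells and per-flow queuing were assumed for low-delay voice.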
What do you mean by FQ, exactly? "5-tuple FQ" is scaling today on shaping middleboxes like Preseem and LibreQoS to over 10 Gbit/s. ISPs report that customer calls complaining about speed simply vanish. Admittedly the AQM is dropping or marking some 0.x% of packets, but tests of FQ with short buffers vs. AQM alone showed the former the clear winner, and fq+AQM took it in for the score. On Linux, fq_codel is also the near-universal default, and the Linux TCP stack does fq+pacing at nearly 100 Gbit/s today with "BIG" TCP.

"Disappeared"? No. Invisible? Possible. Transitioning from 10+ Gbit/s down to 1 Gbit/s or less it is really, really useful, IMHO, and desperately needed in way more places.

Lastly, VOQs, LAG, and switch fabrics essentially FQ ports; in the context of aggregating up to 400 Gbit/s from those, you are FQing also. FQing inline against the IP headers at 400 Gbit/s appeared impossible until this conversation, when the depth of the pipeline and hardware hashing were discussed, but I'll settle for more RFC 7567 behavior just in stepping down from that to 100 Gbit/s, and from 100 on down, adding in fq+AQM.
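For anyone following along, here is a toy sketch (mine, purely illustrative, not the kernel's fq_codel code) of what "5-tuple FQ" means, assuming a fixed table of flow queues and plain round-robin service; the real fq_codel adds deficit round robin with new/old flow lists and runs CoDel on each queue:

# Toy illustration of 5-tuple fair queuing (NOT the Linux fq_codel code).
# Hash the 5-tuple to one of N flow queues, then service the queues
# round-robin so a single heavy flow cannot monopolize the link.
from collections import deque
import zlib

NUM_QUEUES = 1024  # fq_codel defaults to 1024 flow queues

queues = [deque() for _ in range(NUM_QUEUES)]

def flow_index(src_ip, dst_ip, proto, sport, dport):
    # Map a 5-tuple to a queue index with a stable hash (illustrative;
    # real implementations use a salted hash in the fast path).
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key) % NUM_QUEUES

def enqueue(pkt, five_tuple):
    queues[flow_index(*five_tuple)].append(pkt)

def dequeue_one_round():
    # One naive round-robin pass over non-empty queues; real FQ uses
    # deficit round robin and applies an AQM (CoDel) per queue.
    for q in queues:
        if q:
            yield q.popleft()

Per-flow queues plus round-robin (or DRR) is what provides the isolation; the AQM on each queue is what keeps those queues short.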
>> Denying those use cases because they don't fit your worldview is short-sighted.
> That could have been a valid argument 20 years ago.
> Masataka Ohta
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC