My understanding has always been that 30 ms was chosen based on human perceptibility: it was roughly the point at which an average listener could begin to detect artifacts in the audio.

On Tue, Sep 19, 2023 at 8:13 PM Dave Taht <dave.taht@gmail.com> wrote:
Dear nanog-ers:
I go back many, many years with baseline numbers for managing VoIP networks, including things like Cisco LLQ, DiffServ, fair-queuing mechanisms prioritizing VLANs, and running VoIP networks entirely separately... I worked on codecs such as OSLEC, and on early SIP stacks, but that was over 20 years ago.
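As a toy sketch of the idea behind LLQ (an illustration only; the class, names, and rates here are hypothetical, not any vendor's implementation): EF-marked packets get strict priority over best effort, policed to a configured rate so voice cannot starve everything else.

    # Toy sketch of the LLQ idea: strict priority for DSCP EF (voice),
    # policed by a token bucket so the priority class cannot starve
    # best-effort traffic. Hypothetical illustration, not Cisco's code.
    from collections import deque
    import time

    class LLQScheduler:
        def __init__(self, priority_rate_bps: int):
            self.rate_bytes = priority_rate_bps / 8   # policer rate, bytes/s
            self.tokens = self.rate_bytes             # bucket, one second deep
            self.last_refill = time.monotonic()
            self.priority_q = deque()                 # EF (voice) packets
            self.default_q = deque()                  # everything else

        def enqueue(self, packet: bytes, dscp: int) -> None:
            # DSCP EF = 46: classify voice into the strict-priority queue.
            (self.priority_q if dscp == 46 else self.default_q).append(packet)

        def dequeue(self) -> bytes | None:
            # Refill the policer bucket, capped at one second's worth.
            now = time.monotonic()
            self.tokens = min(self.rate_bytes,
                              self.tokens + (now - self.last_refill) * self.rate_bytes)
            self.last_refill = now
            # Strict priority: serve voice first, if the policer allows it.
            if self.priority_q and self.tokens >= len(self.priority_q[0]):
                pkt = self.priority_q.popleft()
                self.tokens -= len(pkt)
                return pkt
            return self.default_q.popleft() if self.default_q else None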
The thing is, I have been unable to find much research (as yet) as to why this 30 ms number exists. Over here I am taking a poll on which number is most correct (10 ms, 30 ms, 100 ms, 200 ms),
https://www.linkedin.com/feed/update/urn:li:ugcPost:7110029608753713152/
but I am even more interested in finding citations to support the various viewpoints, including mine, and in learning how SLAs are met to deliver it.
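For reference, the delay-variation number any 10/30/100/200 ms threshold would usually be checked against is the RFC 3550 interarrival jitter estimate. A minimal sketch, assuming arrival times and RTP timestamps already share the same unit (ms); the 1/16 smoothing gain is the one RFC 3550 section 6.4.1 specifies:

    # RFC 3550 interarrival jitter: J(i) = J(i-1) + (|D(i-1,i)| - J(i-1))/16,
    # where D measures how much a packet's relative transit time changed.
    def update_jitter(jitter: float,
                      prev_arrival: float, prev_rtp_ts: float,
                      arrival: float, rtp_ts: float) -> float:
        d = (arrival - prev_arrival) - (rtp_ts - prev_rtp_ts)
        return jitter + (abs(d) - jitter) / 16.0

    # Example: packets stamped 20 ms apart arrive 26 ms apart, so transit
    # time grew by 6 ms and the estimate moves toward it: 6/16 = 0.375 ms.
    j = update_jitter(0.0, prev_arrival=100.0, prev_rtp_ts=0.0,
                      arrival=126.0, rtp_ts=20.0)
    print(f"jitter estimate: {j:.3f} ms")   # -> 0.375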
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht
CSO, LibreQos