On Dec 25, 2020, at 5:32 PM, John Levine <johnl@iecc.com> wrote:
I agree it is odd to make 100/100 the top speed. The fiber service I have from my local non-Bell telco offers 100/100, 500/500, and 1000/1000. FiOS where you can get it goes to 940/880.
The obvious guess is that their upstream bandwidth is underprovisioned, or maybe they figure 100/100 is all they need to compete in that particular market.
My TV (wired) pulls at higher bitrates when doing the initial buffering fetches. Not unusual to see it pulling more than 150Mb/s at the start of a (non-4K) show. I think the extent to which end-users are impacted by these slower speeds while buffering is underappreciated in the overall experience.

At $dayjob many servers are 10G or 100G, so the limiting factor is most likely the CPE or the ISP. I was hearing last night about someone with a device that didn't appear to be hitting line rate but was dropping 0.5% of packets when running at 3Gb/s, until they upgraded to one of the major networking vendors we all know here. In my small FTTH network the slowest link is at the customer home, and all the devices are hardware ASIC forwarded, as opposed to the offload you find in some low/mid-tier devices (e.g. Tik/UBNT).

Many streaming services wait about 8 seconds between chunks, so if you're pulling a video stream encoded at 6Mb/s you're really fetching 6*8 = 48Mb (let's say 50) in a burst and then sitting idle for roughly 7 seconds (a rough sketch of that arithmetic follows below). If you're on a 25Mb/s service, or even a 50Mb/s service, it won't behave the way you expect if there's any other activity on the line.

- Jared
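A minimal back-of-the-envelope sketch of that chunk-burst arithmetic, using the 6Mb/s encode rate and 8-second chunk interval from above; the link speeds are assumed purely for illustration:

    # Rough sketch of chunked-streaming burst behavior; numbers are illustrative only.
    encode_rate_mbps = 6    # nominal video bitrate from the example above
    chunk_seconds = 8       # typical gap between chunk fetches

    # Each chunk carries ~8 seconds of video: 6 Mb/s * 8 s = 48 Mb ("let's say 50").
    chunk_size_mb = encode_rate_mbps * chunk_seconds

    for link_mbps in (25, 50, 100, 1000):          # assumed access-link speeds
        fetch_time = chunk_size_mb / link_mbps     # seconds the link is saturated per chunk
        idle_time = chunk_seconds - fetch_time     # headroom left for other traffic
        print(f"{link_mbps:>4} Mb/s link: fetch {fetch_time:.1f}s, idle {idle_time:.1f}s per 8s chunk")

On a 25Mb/s link the fetch saturates the line for nearly 2 out of every 8 seconds, which is where the contention with any other household traffic shows up.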