If one installs smokeping on a Raspberry Pi, using a wired Ethernet interface to a home router on a DOCSIS 3.x residential last-mile segment, copies over a well-chosen Targets file of things to test, and sets it to a 60 s interval with all other settings at default, it's quite rare to find a network segment that doesn't show anywhere from 0.05% to 0.30% packet loss (or sometimes worse!) to its default gateway over a 24-hour period.

Also very informative are the spikes in latency and jitter during evening peak-usage hours. A lot of very basic test methodology can reveal the nature of an oversubscribed, contended access medium. Last-mile 5 GHz band PtMP WISPs can see exactly the same issues on an overloaded AP.
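As a crude stand-in for that setup (a sketch only: the gateway address is a placeholder, and unlike smokeping, which fires a batch of probes per round, this sends a single ping per interval), something like this pings the gateway once a minute and tracks cumulative loss:

    import subprocess
    import time

    GATEWAY = "192.168.0.1"  # hypothetical; substitute your router's address
    INTERVAL = 60            # seconds, matching the 60 s smokeping interval

    sent = 0
    lost = 0
    while True:
        # One echo request, 5 s reply timeout; non-zero exit means no reply.
        rc = subprocess.run(
            ["ping", "-c", "1", "-W", "5", GATEWAY],
            stdout=subprocess.DEVNULL,
        ).returncode
        sent += 1
        lost += 1 if rc != 0 else 0
        print(f"sent={sent} lost={lost} loss={100 * lost / sent:.2f}%")
        time.sleep(INTERVAL)

Left running for 24 hours, the final loss percentage is the number to hold against the 0.05-0.30% range above.

On Mon, May 31, 2021 at 12:09 PM Denys Fedoryshchenko <nuclearcat@nuclearcat.com> wrote: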
It can't be zero. In the 1000BASE-T spec, a BER of 1 error in 10^10 bits is considered acceptable on each link. So it should be defined the same way, as an acceptable error rate. And up to which point? How do you measure it? The same goes for bandwidth: the port rate can be 1 Gbit/s, the ISP speedtest too, but most websites deliver 100 Kbit/s.
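A back-of-the-envelope sketch of what that BER floor implies for frame loss (assumptions not from the thread: 1500-byte frames, independent bit errors):

    BER = 1e-10            # acceptable bit error rate for 1000BASE-T per IEEE 802.3
    FRAME_BITS = 1500 * 8  # assumed MTU-sized frame

    # Probability that at least one bit in a frame is errored
    p_loss = 1 - (1 - BER) ** FRAME_BITS
    print(f"frame loss at the BER floor: {p_loss:.2e}")  # ~1.2e-06, i.e. ~0.00012%

    # For scale: the 0.05% loss figure from the smokeping anecdote above
    print(f"observed/floor ratio: {0.0005 / p_loss:.0f}x")  # roughly 400x

In other words, the loss rates the smokeping exercise turns up are orders of magnitude above anything the link-layer error budget explains.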
On 2021-05-31 21:28, Fred Baker wrote:
I would add packet loss rate. Should be zero, and if it isn’t, it points to an underlying problem.
Sent from my iPad
On May 31, 2021, at 11:01 AM, Josh Luthman <josh@imaginenetworksllc.com> wrote:
I think latency and bps are going to be the best way to measure broadband that everyone can agree on. Is there a better way? Sure, but how can you quantify it?
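For the latency half, one plausible quantification (a sketch; the RTT samples are invented, and RFC 3550's jitter smoothing is normally applied to one-way transit deltas rather than RTTs) is median RTT plus smoothed jitter:

    # Hypothetical RTT samples in milliseconds (invented numbers)
    rtts_ms = [18.2, 19.1, 18.4, 45.0, 18.9, 19.3]

    jitter = 0.0
    for prev, cur in zip(rtts_ms, rtts_ms[1:]):
        # RFC 3550 smoothing: J += (|D| - J) / 16, D = delta between samples
        jitter += (abs(cur - prev) - jitter) / 16

    print(f"median RTT: {sorted(rtts_ms)[len(rtts_ms) // 2]:.1f} ms")
    print(f"jitter estimate: {jitter:.2f} ms")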
Josh Luthman
24/7 Help Desk: 937-552-2340
Direct: 937-552-2343
1100 Wayne St, Suite 1337
Troy, OH 45373
On Sun, May 30, 2021 at 7:16 AM Mike Hammett <nanog@ics-il.net> wrote:
I think that just underscores that the bps of a connection isn't the end-all, be-all of connection quality. Yes, I'm sure most of us here knew that. However, many of us here still get distracted by the bps.
If we can't get it right, how can we expect policy wonks to get it right?
-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com
Midwest-IX
http://www.midwest-ix.com
-------------------------
From: "Sean Donelan" <sean@donelan.com> To: "NANOG" <nanog@nanog.org> Sent: Saturday, May 29, 2021 6:25:12 PM Subject: Call for academic researchers (Re: New minimum speed for US broadband connections)
I thought that in the 1990s we had moved beyond using average bps measurements to assess IP congestion collapse. During the peering battles, some ISPs used to claim that average bps measurements showed no problems, but in reality there were massive packet drops, retransmits, and congestive collapse that didn't show up in simple average bps graphs.
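A toy illustration of that point (all numbers invented): one-second utilization samples on a 1 Gbit/s link that sits mostly idle but hits line rate for ten seconds each minute. Every burst is long enough to overflow a queue and drop packets, yet the five-minute average looks comfortable:

    LINE_RATE = 1e9  # bits/s, assumed 1 Gbit/s link

    # Each minute: 10 s at line rate (tail drops likely), 50 s at 20% load
    samples = ([LINE_RATE] * 10 + [0.2 * LINE_RATE] * 50) * 5  # 5 min of 1 s samples

    avg = sum(samples) / len(samples)
    print(f"5-minute average: {100 * avg / LINE_RATE:.0f}% utilization")  # ~33%
    print(f"seconds pinned at line rate: {sum(s >= LINE_RATE for s in samples)}")  # 50

An average bps graph of that link would show a healthy 33%, while users lived through fifty seconds of drops every five minutes.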
Have any academic researchers done work on the real-world minimum connection requirements for home schooling, videoconferencing applications (Teams and the like), job-interview video calls, and background application noise on the network?
During the last year, I've been providing volunteer pandemic home-schooling support for a few primary school teachers in a couple of different states. It's been tough for pupils on lifeline service (fixed or mobile), and some pupils were never reached. I found that lifeline students on mobile (i.e., 3G speeds) had trouble using even audio-only group calls, and the exam proctoring apps often didn't work at all, forcing those students to fail exams unnecessarily.
In my experience (anecdotal data; this needs some academic researchers), pupils with at least 5 Mbps (real-world measurement) upstream connections at home didn't seem to have those problems, even though the average bps graph was less than 1 Mbps.