An interesting project would be to quantify and statistically analyze the following:

Take a set of 1000 or more residential last-mile broadband customers on connections that are effectively more than they can use (symmetric 1 Gbps active Ethernet or similar).

On a 60 s interval, poll SNMP traffic counters from the interfaces facing the customers' demarcs, or directly from on-premises CPEs.
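The polling itself (pysnmp, snmp_exporter, etc. against IF-MIB's 64-bit ifHCInOctets/ifHCOutOctets) is routine; the part worth pinning down is turning two successive counter samples into a rate. A minimal sketch, where `delta_bps` is a hypothetical helper name, not anything from an existing library:

```python
def delta_bps(prev: int, curr: int, interval_s: int = 60, counter_bits: int = 64) -> float:
    """Convert two successive octet-counter samples (e.g. IF-MIB
    ifHCInOctets, OID 1.3.6.1.2.1.31.1.1.1.6) into average bits/s
    over the polling interval, handling counter wrap. Wrap is rare
    with 64-bit counters but routine if only 32-bit ifInOctets is
    available at gigabit speeds."""
    delta = curr - prev
    if delta < 0:  # counter wrapped (or device rebooted; a reboot also resets to 0)
        delta += 1 << counter_bits
    return delta * 8 / interval_s
```

For example, 7,500,000 octets in 60 s works out to 1 Mbps. Note a device reboot is indistinguishable from a wrap with this logic alone; production pollers usually also check sysUptime.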

Store that data in InfluxDB or another lossless time-series database for a multi-month period.
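For InfluxDB specifically, each 60 s sample maps naturally onto one line-protocol point (measurement, tag set, field set, nanosecond timestamp). A sketch of the formatting step, with the measurement name `cust_traffic` and tag `circuit` being made-up choices for illustration:

```python
def to_line_protocol(circuit: str, in_bps: float, out_bps: float, ts_ns: int) -> str:
    """Format one 60 s sample as InfluxDB line protocol:
    measurement,tag_set field_set timestamp. Fields written without a
    suffix are floats; the tag keeps per-circuit series separable."""
    return f"cust_traffic,circuit={circuit} in_bps={in_bps:.1f},out_bps={out_bps:.1f} {ts_ns}"
```

A batch of these lines is what gets POSTed to the database's write endpoint. "Lossless" here just means keeping every raw sample rather than letting a retention policy downsample to averages, which is exactly the failure mode Sean describes below.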

Anonymize the data so that the identity, circuit ID, and location of the customer cannot be recovered; perhaps nothing beyond "gigE customer somewhere in North America," representing a semi-random choice of US/Canada domestic-market residential broadband users.
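One simple way to get stable but unlinkable per-customer identifiers is a keyed hash over the circuit ID, with the key destroyed after the export. A sketch under that assumption (`anon_id` is a hypothetical helper name):

```python
import hmac
import hashlib

def anon_id(circuit_id: str, secret: bytes) -> str:
    """Replace a circuit ID with a keyed-hash pseudonym. The pseudonym is
    stable across the whole export (so one customer's months of samples
    stay joined), but once the secret key is destroyed, the mapping back
    to real circuit IDs cannot be reversed or recomputed."""
    return hmac.new(secret, circuit_id.encode(), hashlib.sha256).hexdigest()[:16]
```

The keyed construction matters: a plain unsalted hash of a small circuit-ID namespace could be brute-forced by anyone who knows the ID format. Timestamps may also need coarsening (e.g. normalizing to the customer's local time zone, then dropping the zone) if time-zone patterns would narrow down location.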

Provide that data set to persons who wish to analyze it to see how much/how bursty the traffic really is, night/day traffic patterns, remote-work traffic patterns during office hours in certain time zones, etc. Additionally, quantify what percentage of users move how much upstream data, or come anywhere near maxing out the link in brief bursts (people doing full-disk offsite backups of 8 TB HDDs to Backblaze, uploading 4K videos to YouTube, etc.).
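The per-customer burstiness question reduces to a few summary statistics over each circuit's 60 s samples: mean versus high percentile versus peak, and whether the peak ever approached line rate. A sketch of that reduction, with the 80%-of-line-rate "near max" threshold being an arbitrary illustrative choice:

```python
def burstiness(samples_bps: list[float], link_bps: float = 1_000_000_000) -> dict:
    """Summarize one customer's 60 s upstream samples. A large gap
    between mean and peak (or p95 and peak) is the burstiness signature:
    e.g. an offsite-backup customer idles for weeks, then pins the link."""
    s = sorted(samples_bps)
    n = len(s)
    return {
        "mean_bps": sum(s) / n,
        "p95_bps": s[min(n - 1, int(0.95 * n))],  # simple nearest-rank percentile
        "peak_bps": s[-1],
        "near_max": s[-1] >= 0.8 * link_bps,      # assumed threshold, not a standard
    }
```

Run over the whole population, the fraction of circuits with `near_max` set answers the "how many people ever come close to a gig upstream" question directly. Note that even 60 s averaging hides sub-minute bursts, which is worth stating as a limitation of the data set.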

I initially considered doing something similar with NetFlow data on a per-CPE basis, but that carries far more worrisome privacy and PII implications than raw per-interface bps data. Presumably NetFlow (or data from Kentik, etc.) for CDN and other per-AS downstream traffic headed to an aggregation router that serves exclusively a block of a few thousand downstream residential symmetric-gigabit customers would not be difficult to sufficiently anonymize.
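The per-AS variant is privacy-friendly precisely because the aggregation step discards everything per-customer before anything is stored. A sketch of that reduction over one export interval, assuming flow records have already been decoded into (source ASN, byte count) pairs (the record shape here is illustrative, not any collector's actual schema):

```python
from collections import defaultdict

def per_as_bps(flow_records: list[tuple[int, int]], interval_s: int = 60) -> dict[int, float]:
    """Collapse per-flow records (src_asn, byte_count) from one export
    interval into aggregate bps per source AS toward the aggregation
    router. No customer addresses or ports survive the reduction."""
    totals: dict[int, int] = defaultdict(int)
    for src_asn, nbytes in flow_records:
        totals[src_asn] += nbytes
    return {asn: nbytes * 8 / interval_s for asn, nbytes in totals.items()}
```

With a few thousand customers behind the router, each AS-level series already averages over enough users that re-identifying any one subscriber from it would be difficult, which is the intuition in the paragraph above.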




On Sat, May 29, 2021 at 4:25 PM Sean Donelan <sean@donelan.com> wrote:

I thought in the 1990s, we had moved beyond using average bps measurements
for IP congestion collapse.  During the peering battles, some ISPs used to
claim average bps measurements showed no problems.  But in reality there
were massive packet drops, re-transmits and congestive collapse which
didn't show up in simple average bps graphs.


Have any academic researchers done work on the real-world minimum
connection requirements for home schooling, video conferencing
applications such as Teams, job-interview video calls, and network
background application noise?


During the last year, I've been providing volunteer pandemic home
schooling support for a few primary school teachers in a couple of
different states.  It's been tough for pupils on lifeline service (fixed
or mobile), and some pupils were never reached. I found lifeline students
on mobile (i.e. 3G speeds) had trouble using even audio-only group calls,
and the exam proctoring apps often didn't work at all forcing those
students to fail exams unnecessarily.

In my experience (anecdotal data that needs some academic research),
pupils with at least 5 Mbps (real-world measured) upstream connections
at home didn't seem to have those problems, even though the average bps
graph showed less than 1 Mbps.