I have three OCAs: two connected at 100G each, and one on a dual-100G LAG, with the operational throughput capacity of the nodes being somewhat less than that (I forget the exact per-node throughput specs).

Anyway, about the 11/15/2024 Tyson/Paul Netflix fights: from 6-7 p.m. Central I saw an extreme ramp-up in my OCA utilization, reaching an all-time high of 15G + 27G + 50G = 92G. At 7:31 p.m. I saw what equated to a ~40G dive, total, across all three of my OCA caches, down to 10G + 17G + 27G = 54G. I never saw utilization ramp back up to the same level after that. Actually, the first one did get back to 16G, but the other two never ramped up that much again. I was waiting for the main event (Paul/Tyson) to generate an even higher load than we originally saw around 7 p.m., but it didn't happen.

The ramp-up from 6-7 p.m. was a clean scaling graph, as you would expect as more and more eyeballs were "tuning in". After the sharp drop at 7:31 p.m. the graphs never really cleaned up; they were just down and up:

- 7:31 p.m. - sharp sag/drop
- 7:51 p.m. - sharp sag/drop
- 8:18 p.m. - sharp sag/drop
- 9:04 p.m. - sharp sag/drop
- 9:53 p.m. - ramp up
- 10:08 p.m. - aggressive ramp-down

I wonder if the overall nationwide/worldwide issues affected even my local caches. I figured my local caches would have been "protected" from, or unaffected by, issues outside my network, but I'm not so sure about that now.

I can say that we didn't have a ton of customer complaints from our 60k residential broadband subs. I did hear about some, but I don't think it was many.

I wonder if there were some sort of adaptive bitrate changes in the streams, altering the overall raw bandwidth utilization I observed and causing the main event not to show as high a peak on the graph, or if Netflix was just having issues everywhere. I don't know. Hopefully the Netflix NFL Christmas Day games go much better.

Aaron
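For what it's worth, the dive in the numbers above works out like this (a minimal sketch; the per-OCA readings are the ones quoted in the email, and the variable names are purely illustrative):

```python
# Aggregate the per-OCA throughput readings (Gb/s) quoted above and
# compute the size of the 7:31 p.m. drop. Values are from the email.
peak_gbps = [15, 27, 50]   # per-OCA utilization at the ~7 p.m. peak
after_gbps = [10, 17, 27]  # per-OCA utilization after the 7:31 p.m. drop

peak_total = sum(peak_gbps)    # 92 Gb/s aggregate at peak
after_total = sum(after_gbps)  # 54 Gb/s aggregate after the drop
drop = peak_total - after_total
drop_pct = 100 * drop / peak_total

print(f"peak={peak_total}G after={after_total}G drop={drop}G ({drop_pct:.0f}%)")
# prints: peak=92G after=54G drop=38G (41%)
```

So the "~40G" dive is more precisely 38G, roughly 41% of the aggregate peak.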
On Nov 18, 2024, at 1:44 PM, Livingood, Jason via NANOG <nanog@nanog.org> wrote:
Something that would be interesting (particularly if someone has eyes in Comcast’s network) is to see how customers in areas where L4S trials are happening fared in comparison to others.
The sample area of the deployment is still too small to draw conclusions from (~20K homes). We’ll know in a few weeks more how things look in comparison. But in this example, I think the bottleneck was more likely on the server/CDN side of things, so CPE and last-mile AQM and/or dual-queue L4S would probably not have made a difference. But you never know without knowing the full root cause. I have no doubt the Netflix folks will sort it – they’ve got some very smart transport-layer and CDN folks.
JL