Hi Mark,

Just comments on your points below.

On 13.08.2020 12:31, Mark Tinka wrote:
On 13/Aug/20 12:23, Olav Kvittem via NANOG wrote:
Wouldn't it be better to measure basic performance metrics like packet drop rates and queue sizes?
These days live video is widely used, and these parameters are essential to its quality.
Queues build up in milliseconds, while people average over minutes to estimate quality.
If you measured queue delay with high-frequency one-way-delay measurements,
you would be able to advise better on the consequences of a highly loaded link.
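A minimal sketch of the idea in Python (illustrative only; it assumes sender and receiver clocks are synchronized and that one-way-delay samples are already collected): the minimum OWD observed approximates the fixed propagation and processing delay, so the excess above that minimum estimates the queueing delay.

    # Estimate per-packet queueing delay from one-way-delay (OWD) samples.
    # Assumes synchronized clocks; the minimum OWD approximates the fixed
    # path delay, so the excess above it is queueing delay.
    def queue_delays_ms(owd_samples_ms):
        baseline = min(owd_samples_ms)            # fixed-path delay estimate
        return [owd - baseline for owd in owd_samples_ms]

    samples = [10.2, 10.1, 10.3, 431.3, 10.2]     # made-up OWDs in ms
    print(f"{max(queue_delays_ms(samples)):.1f}") # -> 421.2 ms queueing spike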
We are running a research project on end-to-end quality, and the enclosed image is yesterday's report on
queue size (h_ddelay) in ms. It shows delay statistics between some peers.
I would have looked at the trends on the involved links to see whether an upgrade is necessary;
421 ms might be too much if it happens often.
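A rough sketch of that "does it happen often" judgement, with an illustrative threshold and toy data (neither is from the project):

    # Fraction of one-minute intervals whose maximum queue delay exceeds a
    # threshold that live video would notice (threshold is illustrative).
    def exceedance_ratio(per_minute_max_ms, threshold_ms=100.0):
        over = sum(1 for d in per_minute_max_ms if d > threshold_ms)
        return over / len(per_minute_max_ms)

    day = [12.0, 8.5, 421.0, 95.0, 130.0]         # toy per-minute maxima in ms
    print(f"{exceedance_ratio(day):.0%} of minutes above 100 ms")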
I'm confident everyone (even the cheapest CFO) knows the consequences of congesting a link and choosing not to upgrade it.
Optical issues, dirty patch cords, faulty line cards and wrong configurations will most likely lead to packet loss. Link congestion due to insufficient bandwidth will almost certainly lead to packet loss.
Sure, but I guess the loss rate depends on the nature of the traffic.
It's great to monitor packet loss, latency, pps, etc. But packet loss at 10% link utilization is not a foreign occurrence. No amount of bandwidth upgrades will fix that.
I guess that having more reports would support the judgements better.

A basic question is: what is the effect on the perceived quality for the customers? The relation between that and the 5-minute average load is not known to me.

Actually, one good indicator of the congestion loss rate is of course SNMP OutputDiscards. Curves for queueing delay, link load and discard rate are surprisingly different (a rough sketch of deriving a discard rate is below).

regards
Olav
Mark.
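A minimal sketch of the OutputDiscards point: derive a discard rate from two polls of the IF-MIB ifOutDiscards and ifOutUcastPkts counters (how the counters are polled is left out; the numbers are illustrative):

    # Turn two SNMP polls of IF-MIB ifOutDiscards / ifOutUcastPkts into a
    # discard ratio, handling 32-bit counter wrap.
    WRAP = 2**32

    def delta(new, old):
        return new - old if new >= old else new - old + WRAP

    def discard_ratio(disc0, disc1, pkts0, pkts1):
        """Fraction of egress packets dropped between the two polls."""
        d = delta(disc1, disc0)
        p = delta(pkts1, pkts0)
        return d / (d + p) if (d + p) else 0.0

    # toy samples taken 60 s apart
    print(f"{discard_ratio(1_000, 1_450, 9_000_000, 9_600_000):.4%}")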