On Thu, 28 May 2015 06:45:44 +0530, Glen Kent said:
If I see an RTT of 150ms and packet loss of 0.01% between points A and B, and the maximum throughput between them is, say, 250Mbps, can I then say that I will *always* get the same (or close ballpark) throughput no matter what time of day I run these tests?
Only if you control the network load across the entire path. As a simplified example, assume you did your test at 2AM when there's no other activity, there's a bottleneck 1Gbps link in the path, and you get 250Mbps. (Yes, that result indicates a probable misconfig, but bear with me.. :) You test again at 11AM, and now there are 7 other streams trying to pump 250Mbps across that link. All 8 should probably drop back to about 125Mbps each. For extra credit, factor in bufferbloat pushing your RTT through the roof under congestion, and similar misbehaviors....

Oh, and that 0.01% packet loss is going to play heck with TCP slow-start and opening the window - to quote RFC 3649:

   This document proposes HighSpeed TCP, a modification to TCP's
   congestion control mechanism for use with TCP connections with
   large congestion windows. In a steady-state environment, with a
   packet loss rate p, the current Standard TCP's average congestion
   window is roughly 1.2/sqrt(p) segments. This places a serious
   constraint on the congestion windows that can be achieved by TCP
   in realistic environments. For example, for a Standard TCP
   connection with 1500-byte packets and a 100 ms round-trip time,
   achieving a steady-state throughput of 10 Gbps would require an
   average congestion window of 83,333 segments, and a packet drop
   rate of at most one congestion event every 5,000,000,000 packets
   (or equivalently, at most one congestion event every 1 2/3 hours).
   The average packet drop rate of at most 2*10^(-10) needed for
   full link utilization in this environment corresponds to a bit
   error rate of at most 2*10^(-14), and this is an unrealistic
   requirement for current networks.
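To make the quoted numbers concrete, here's a minimal Python sketch of that 1.2/sqrt(p) relation, plugged into the standard throughput = cwnd * MSS / RTT identity. The function name and the 1460-byte MSS for the first case are my own illustrative choices, not anything from the RFC:

    #!/usr/bin/env python3
    # Back-of-the-envelope steady-state Standard TCP throughput, using
    # the RFC 3649 approximation quoted above: avg cwnd ~= 1.2/sqrt(p).
    from math import sqrt

    def standard_tcp_throughput_bps(loss_rate, rtt_s, mss_bytes):
        """Approximate steady-state throughput of one Standard TCP flow."""
        cwnd_segments = 1.2 / sqrt(loss_rate)   # RFC 3649 approximation
        return cwnd_segments * mss_bytes * 8 / rtt_s   # bits per second

    # The original poster's path: 0.01% loss, 150ms RTT, assumed 1460B MSS.
    print(standard_tcp_throughput_bps(1e-4, 0.150, 1460) / 1e6)
    # -> ~9.3 Mbps per flow

    # RFC 3649's example: 100ms RTT, 1500-byte packets, p = 2*10^(-10).
    print(standard_tcp_throughput_bps(2e-10, 0.100, 1500) / 1e9)
    # -> ~10 Gbps

Note what the first case says: by this approximation, a single Standard TCP flow on the poster's stated path (0.01% loss, 150ms RTT) tops out under 10Mbps. Whatever reached 250Mbps (parallel streams, a tuned or non-standard stack) has no guarantee of behaving the same at a different time of day.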