On 16 January 2017 at 14:36, Tore Anderson <tore@fud.no> wrote:
> But here you're talking about the RTT of each individual link, right, not the RTT of the entire path through the Internet for any given flow?
I'm talking about the end-to-end RTT, which determines the window size, which in turn determines the burst size. Your worst-case burst will be half of the needed window size (since the window can double within a single RTT), and you need to be able to ingest that burst at the sender's rate, regardless of the receiver's rate.
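A minimal sketch of that arithmetic in Python (my own illustration, not anything from this thread; the halving models the window doubling within one RTT):

    def worst_case_burst_bytes(receiver_bps: float, rtt_s: float) -> float:
        # Bandwidth-delay product: the window needed to keep the
        # receiver's pipe full, in bytes.
        window_bytes = receiver_bps * rtt_s / 8
        # If the window doubles within one RTT, the newly opened half
        # can arrive as a single line-rate burst.
        return window_bytes / 2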
> To put it another way: my «Internet facing» interfaces are typically 10GEs with a few (kilo)metres of dark fibre that x-connects into my IP-transit providers' routers sitting in nearby rooms or racks (worst case somewhere else in the same metro area). Is there any reason why I should need deep buffers on those interfaces?
Imagine a content network with a 40Gbps connection and a client with a 10Gbps connection, where the network between them is lossless and has an RTT of 200ms. To achieve a 10Gbps rate the receiver needs a 10Gbps * 200ms = 250MB window. In the worst case a 125MB window could grow into a 250MB window, and the sender could send that extra 125MB as a single burst at 40Gbps. This means the port the receiver is attached to needs to store the 125MB, as it is only serialising it out at 10Gbps. If it cannot store the burst, the window will shrink and the receiver cannot get 10Gbps. This is quite a pathological example, but you can try it with much less pathological numbers, remembering that the Trident II has 12MB of buffers.
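Plugging those numbers into a quick back-of-the-envelope script (again my own sketch, reusing the figures from the example above):

    rtt_s        = 0.200   # end-to-end RTT, seconds
    receiver_bps = 10e9    # client's port
    sender_bps   = 40e9    # content network's port (the burst arrives at this rate)

    window_bytes = receiver_bps * rtt_s / 8            # 250 MB to fill the pipe
    burst_bytes  = window_bytes / 2                    # 125 MB if the window doubles
    burst_ms     = burst_bytes * 8 / sender_bps * 1e3  # ~25 ms of arrival at 40Gbps

    print(f"window {window_bytes/1e6:.0f} MB, burst {burst_bytes/1e6:.0f} MB "
          f"arriving in {burst_ms:.0f} ms; Trident II buffer: 12 MB")

--
  ++ytti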