On Wed, 8 Jul 2020 at 14:56, Adam Thompson <athompson@merlin.mb.ca> wrote:
> If jitter were defined anywhere vis-à-vis LACP, it would be _de jure_, not _de facto_ as I said.
I suspect the de facto domain you have in mind has a modest population, because jitter would only matter where the protocol measures delay and artificially adds static delay to compensate. That is not the case for LACP (some load-balancing solutions do latency compensation), so jitter is immaterial.
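To illustrate the distinction (a made-up sketch, not any real bonding implementation's behaviour): a balancer that compensates has to measure per-member delay and pad the faster members with static delay, and only then does jitter in those measurements matter. LACP has no such step.

    # Hypothetical latency-compensating balancer (illustrative only):
    # measure per-member delay, then pad the faster members so all of them
    # match the slowest one. Jitter matters only because these measurements
    # feed the compensation.
    measured_delay_ms = {"member0": 1, "member1": 10}   # assumed samples
    slowest = max(measured_delay_ms.values())
    pad_ms = {m: slowest - d for m, d in measured_delay_ms.items()}
    print(pad_ms)   # {'member0': 9, 'member1': 0}
    # LACP never measures delay at all, so jitter never enters the picture.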
> Yes, if you have *guaranteed* that TCP sessions hash uniquely to a single link in your network, you might be able to successfully tunnel LACP (or EtherChannel, or any other L1 link-bonding technique). The last time I attempted to do this on my network, I discovered that guarantee wasn't nearly as ironclad as I expected. I don't remember the gory details, at this remove, sorry. Maybe it wasn't TCP? Maybe it wasn't the default hashing algorithm? Dunno.
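For concreteness, "hash uniquely to a single link" is the ordinary per-flow hashing that LAG and ECMP do: hash some header fields and use the result to pick one member, so every packet of a flow takes the same path. A rough sketch (the fields, hash function and member names below are made up; real implementations differ):

    import zlib

    def pick_member(src_ip, dst_ip, src_port, dst_port, proto, members):
        # Hash the 5-tuple; every packet of the same flow maps to the same member.
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        return members[zlib.crc32(key) % len(members)]

    members = ["member0", "member1"]   # two LAG members / ECMP next-hops
    print(pick_member("192.0.2.1", "198.51.100.7", 51512, 443, 6, members))

Whether that mapping really keeps every session on one link is exactly the guarantee you found less ironclad than expected.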
As for jitter: a software device connected directly has an order of magnitude more jitter than an operator pseudowire across the globe, so adding a tunnel or not is not at all indicative of the amount of jitter, which still is not a metric LACP cares about. The Internet works because hashing works; it's not perfect, but it's good enough that most links you traverse in the practical Internet rely on the hash doing its job, be it ECMP or LAG.

--
  ++ytti