On Wed, Aug 10, 2022 at 10:38 PM William Herrin <bill@herrin.us> wrote:
On Wed, Aug 10, 2022 at 3:29 PM Christopher Wolff <chris@vergeinternet.com> wrote:
Will tomorrow’s applications require a re-thinking of “The Internet” and protocols that are low latency compliant?
No, because speed-of-light constraints will continue to cause us to implement the latency-critical components close to the user. It's basic physics, man.
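For a rough sense of scale, here's a back-of-the-envelope sketch. The ~200,000 km/s figure is the usual rule of thumb for light in fiber (about two-thirds of c); the ~16,000 km path is an illustrative assumption, roughly New York to Sydney:

    # Back-of-the-envelope propagation delay. Both figures below are
    # illustrative assumptions, not measurements.
    FIBER_KM_PER_SEC = 200_000   # rough speed of light in fiber
    path_km = 16_000             # roughly New York-Sydney

    one_way_ms = path_km / FIBER_KM_PER_SEC * 1000
    print(f"one-way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")
    # -> one-way: 80 ms, round trip: 160 ms -- and that's before queuing,
    # serialization, or any retransmission. No protocol design gets under
    # that floor; only moving the endpoints closer together does.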
Also, because error IS the character of an operational network. All successful network protocols deal reasonably with unpredictable error. Error correction begets jitter, which is a form of latency. That's a basic tenet for any network-using device, no matter what protocol you design. Hence there's no such thing as a "low latency compliant" network or protocol. You can make a stochastic statement about the probability that information arrives within a timeframe, but you absolutely cannot guarantee it.

What CAN exist is protocols which don't do "head of line blocking" during error correction. That's where data successfully received isn't delivered until after the corrected data preceding it arrives. But we already have those. Most things that use UDP went UDP instead of TCP precisely to avoid TCP's head-of-line blocking.

Regards,
Bill Herrin

--
For hire. https://bill.herrin.us/resume/
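P.S. A toy sketch of the head-of-line blocking point above (Python; the arrival order is a made-up example in which segment 2 is lost and its retransmission arrives last):

    # TCP-style in-order delivery vs. UDP-style immediate delivery.
    arrival_order = [1, 3, 4, 5, 2]  # segment 2 lost, retransmitted, arrives last

    def tcp_like_delivery(arrivals):
        """Deliver only contiguous data; everything behind a gap waits."""
        buffered, next_expected, delivered = set(), 1, []
        for t, seq in enumerate(arrivals):
            buffered.add(seq)
            while next_expected in buffered:
                delivered.append((next_expected, t))  # (segment, time it reached the app)
                next_expected += 1
        return delivered

    def udp_like_delivery(arrivals):
        """Hand each datagram to the application the moment it arrives."""
        return [(seq, t) for t, seq in enumerate(arrivals)]

    print("TCP-like:", tcp_like_delivery(arrival_order))
    # [(1, 0), (2, 4), (3, 4), (4, 4), (5, 4)] -- segments 3-5 arrived early
    # but sat in the receive buffer until the retransmitted 2 filled the gap at t=4.
    print("UDP-like:", udp_like_delivery(arrival_order))
    # [(1, 0), (3, 1), (4, 2), (5, 3), (2, 4)] -- delivered on arrival, no blocking.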