On 03Jan21, Brandon Martin allegedly wrote:
> On 1/3/21 3:11 PM, Jay R. Ashworth wrote:
>> Well, TCP means that the servers have to expect to have 100k's of open connections; I remember that used to be a problem.
> Out of curiosity, has anyone investigated if it's possible to hold open a low-traffic, long-lived TCP session without actually storing state, using techniques similar to syncookies, and do so in a compatible manner?
Creating quiescent sockets has certainly been discussed in the context of RSS, where you might want to server-notify a large number of long-held client connections very infrequently. While a kernel could quiesce a TCP socket down to maybe 100 bytes or so (endpoint tuples, sequence numbers, window sizes and a few other odds and sods), a big residual cost is application state - in particular TLS state.

Even with a participating application, quiescing in-memory state to something less than, say, 1KB is probably hard, but it might be doable with a participating TLS library. If so, a million quiescent connections could conceivably be stashed in a coupla GB of memory. And of course, if you're prepared to wear a disk read to recover quiescent state, your in-memory cost could be less than 100 bytes, allowing many millions of quiescent connections per server.

Having said all that, as far as I understand it, none of the DNS-over-TCP systems imply centralization; that's just how a few applications have chosen to deploy. We deploy DoH to a private, self-managed server pool which consumes a measly 10-20 concurrent TCP sessions.

Mark.
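
P.S. To put a rough shape on that sub-100-byte figure, here's a minimal sketch of what a quiesced per-connection TCP record might need to carry, plus the back-of-envelope arithmetic for a million of them. The struct layout and field names are illustrative assumptions only, not any real kernel's data structure.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical quiesced-connection record: endpoint tuples, sequence
 * numbers, window/scale info and a few other odds and sods. A real
 * kernel would need a bit more bookkeeping (timers, route hints, etc.),
 * but it stays on the order of 100 bytes per connection.
 */
struct quiesced_tcp {
    uint8_t  laddr[16], raddr[16];   /* local/remote IP (IPv6-sized)   */
    uint16_t lport, rport;           /* endpoint ports                 */
    uint32_t snd_nxt, snd_una;       /* send sequence state            */
    uint32_t rcv_nxt;                /* receive sequence state         */
    uint32_t snd_wnd, rcv_wnd;       /* advertised windows             */
    uint8_t  snd_wscale, rcv_wscale; /* window scale options           */
    uint32_t ts_recent;              /* last peer timestamp            */
    uint16_t mss;                    /* negotiated MSS                 */
    uint8_t  flags;                  /* misc odds and sods             */
};

int main(void) {
    /* Roughly 70 bytes here, so a million quiescent connections is
     * well under 100 MB of kernel-side state; it's the ~1KB of TLS
     * and application state per connection that pushes a million
     * connections toward a coupla GB.                                */
    printf("per-connection TCP state: %zu bytes\n",
           sizeof(struct quiesced_tcp));
    printf("1M connections: ~%zu MB\n",
           sizeof(struct quiesced_tcp) * 1000000 / (1024 * 1024));
    return 0;
}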