One thing to consider is that geostationary satellite operators build their business entirely on the economics of oversubscription. If you were to purchase a full-duplex 1 Mbps x 1 Mbps connection via VSAT terminal in North America (whether C, Ku or Ka-band), you'd be looking at $2000/month or more for a fixed FDD pipe at 495 ms latency end to end. 32:1 oversubscription or more is normal. It costs close to $150 million to build and launch a 5000 kg satellite into geostationary orbit before you build any teleport infrastructure, and the entire satellite has far less aggregate data throughput capacity than two strands of singlemode fiber.

Modern Ka-band satellites as used by consumer-grade VSAT services in the United States use dozens of individual spot beams. The FDD capacity in each spot beam may be exhausted or heavily oversubscribed in one geographical region yet relatively unused in others. Compare, for example, real-world user-reported speeds at 10pm on Exede service in western WA state vs. somewhere in a very rural part of Wyoming. Spot-beam TDMA contention ratios are carefully managed by satellite operators - they're very much aware of the issue you describe and do their best to mitigate it. Extensive massaging of TDMA parameters in the spot beams is the only way it's economical to offer service for between $75 and $150/month, even with a 2 or 3 year contract.

There are a number of physics and OSI layer 1/2 issues to consider with satellite before discussing anything TCP-related.

On Tue, Apr 19, 2016 at 6:29 PM, Jean-Francois Mezei <jfmezei_nanog@vaxination.ca> wrote:
As part of the ongoing CRTC hearings, the incumbents claim that continued implementation of the current 5/1 (Mbps down/up) standard would make Canada a world leader in broadband in the future.
A satellite company that currently can't even deliver its advertised 5/1 now brags that its next satellite will deliver 25/1.
So I have a few questions:
Consider a single download TCP connection. I am aware that modern TCP stacks will rationalize ACKs (delayed ACKs) and send 1 ACK for every x packets received, thus reducing upload bandwidth requirements. Is this widespread enough that one can assume everyone has it?
Also, as you split the available bandwidth between multiple streams, won't the ACK upload requirements increase, because ACK rationalization happens far less often since each TCP connection has its own context for ACKs?
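My own rough guess at when that would matter, assuming a 1500-byte segment size and a delayed-ACK timer of around 40 ms (both numbers are assumptions on my part, not measurements):

    # Rough sketch: does splitting a 25 Mbps download across many TCP
    # connections defeat delayed-ACK coalescing?  All numbers are assumptions.
    MSS_BYTES = 1500           # assumed downstream segment size
    DOWN_MBPS = 25.0           # advertised downstream rate
    DELACK_TIMEOUT = 0.04      # seconds; assumed delayed-ACK timer

    for streams in (1, 5, 20, 100):
        per_stream_bps = DOWN_MBPS * 1e6 / streams
        segs_per_sec = per_stream_bps / (MSS_BYTES * 8)
        gap = 1.0 / segs_per_sec          # time between segments on one stream
        holds = gap < DELACK_TIMEOUT      # timer rarely fires -> 1 ACK per 2 segments
        print(f"{streams:3d} streams: {gap * 1000:5.1f} ms between segments, "
              f"coalescing mostly {'holds' if holds else 'breaks down'}")

If that arithmetic is right, coalescing only really breaks down once the download is spread over a great many slow connections.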
When one considers the added latency of satellite links, does 25/1 make sense? (I need a sanity check to distinguish between marketing spin presented to the regulator and real life.)
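The bandwidth-delay product is part of why I ask. Assuming roughly a 600 ms round trip through the satellite and ground segment (my assumption, not a measured figure):

    # Bandwidth-delay product: how much data must be in flight to fill
    # a 25 Mbps pipe at GEO latency.  RTT is an assumed round-trip figure.
    rtt_s = 0.6                # assumed end-to-end RTT including ground segment
    rate_bps = 25 * 1e6        # advertised 25 Mbps downstream
    bdp_bytes = rate_bps / 8 * rtt_s
    print(f"Window needed to keep the pipe full: {bdp_bytes / 1024:.0f} KB")   # ~1800 KB

So the receiver has to be able to advertise close to 2 MB of window, which only works with TCP window scaling.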
I noticed that in the USA, Exede advertises 12/3 plans, and they are also on a ViaSat satellite, presumably the same vehicle on which Xplornet tries to deliver its measly 5/1. Would all beams on a satellite be identical, or can they be configured differently, with an ISP-adjustable ratio of upload to download inside the same spectrum?
Also, when you establish a TCP connection, do most stacks have a default window size that gives the sender enough "patience" to wait long enough for the ACK?
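Put another way, without window scaling the old 64 KB maximum window caps throughput no matter what the plan says, again assuming a ~600 ms RTT:

    # Throughput ceiling imposed by the window when the sender must stop
    # and wait for ACKs: rate <= window / RTT.  The 0.6 s RTT is assumed.
    rtt_s = 0.6
    for window_bytes in (65535, 256 * 1024, 2 * 1024 * 1024):
        mbps = window_bytes * 8 / rtt_s / 1e6
        print(f"{window_bytes / 1024:5.0f} KB window -> at most {mbps:5.2f} Mbps")

So 25 Mbps at that latency needs window scaling enabled on both ends; modern stacks do turn it on by default, but that is exactly the kind of assumption I'd like confirmed.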
If the sender sends packet 457, doesn't get the ACK in time, and resends 457, doesn't that also result in a reduction of the window size (the very opposite of what would be needed on high-latency links)?
And when the first ACK finally arrives, won't the sender assume this ACK was for the resent 457? Or is satellite latency low enough that stacks all have enough default "patience" to wait for ACKs, making this a non-issue?
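My understanding - and I may well be wrong - is that the retransmission timeout adapts to the measured RTT (roughly per RFC 6298) rather than being fixed, so propagation delay alone shouldn't cause spurious retransmits. A quick sketch of that calculation:

    # Sketch of the RFC 6298 retransmission-timeout calculation, to check
    # that a ~600 ms path yields an RTO comfortably above one RTT.
    # The RTT samples below are made up for illustration.
    def rto_after(samples, k=4, min_rto=1.0):
        srtt = rttvar = None
        for r in samples:
            if srtt is None:                 # first measurement
                srtt, rttvar = r, r / 2
            else:
                rttvar = 0.75 * rttvar + 0.25 * abs(srtt - r)
                srtt = 0.875 * srtt + 0.125 * r
        return max(min_rto, srtt + k * rttvar)

    print(rto_after([0.60, 0.61, 0.59, 0.62, 0.60]))   # about 1.0 s, above one RTT

The real cost on a long fat pipe would seem to be that any loss or spurious timeout shrinks the congestion window, which then takes many 600 ms round trips to grow back.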
(Note: Xplornet refused to answer questions on whether they operate special proxies at their ground stations to manage TCP connections so they appear "close".)
What I am trying to get at here is whether 25/1 on satellite, in real life with a few apps exchanging data, would actually be able to make use of the 25 Mbps download speed, or whether the limited 1 Mbps upload would choke the downloads.
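To put a rough number on it, here is my estimate of how much of the 1 Mbps upstream the ACKs alone would eat, assuming 1500-byte downstream frames and bare 40-byte ACKs, ignoring link-layer overhead (all assumed figures):

    # Upstream consumed by ACKs for a saturated 25 Mbps download.
    down_bps = 25 * 1e6
    seg_bytes = 1500      # assumed downstream frame size
    ack_bytes = 40        # bare TCP/IPv4 ACK, no options or link-layer overhead
    pps = down_bps / (seg_bytes * 8)            # ~2083 segments/s
    for label, segs_per_ack in (("delayed ACKs, 1 per 2 segments", 2),
                                ("worst case, 1 per segment", 1)):
        ack_bps = pps / segs_per_ack * ack_bytes * 8
        print(f"{label}: ~{ack_bps / 1e3:.0f} kbps of the 1000 kbps upstream")

Even in the friendly case the ACKs take roughly a third of the upstream, so any concurrent upload would leave very little headroom for the downloads' ACKs.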