In message <014d01c1bf3b$44bfca00$ea9a8d18@evilinc>, "Tim Devries" writes:
Hello,
I think this question may have been asked before, but what is the minimum latency and delay I can expect from a satellite connection? What kind of delay have others seen in a working situation? What factors should be considered in end-to-end connectivity architecture when utilizing a satellite link?
Any help appreciated,
Geosynchronous orbit is about 36,000 km above the equator. Round-trip to the satellite is ~72,000 km; the speed of light is ~300,000 km/sec. That works out to 240 milliseconds, at a minimum, for one-way packet delivery.

--Steve Bellovin, http://www.research.att.com/~smb
Full text of "Firewalls" book now at http://www.wilyhacker.com
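The arithmetic above can be checked in a couple of lines (a sketch using the same rounded figures as the post):

```python
# One-way packet delivery over a GEO hop: up to the satellite and back down.
# Rounded figures from the post above.
altitude_km = 36_000      # geostationary altitude above the equator
c_km_per_s = 300_000      # speed of light, rounded
one_way_ms = 2 * altitude_km / c_km_per_s * 1000
print(one_way_ms)         # → 240.0
```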
In a message written on Tue, Feb 26, 2002 at 09:07:14PM -0500, Steven M. Bellovin wrote:
Geosynchronous orbit is about 36,000 km above the equator. Round-trip to the satellite is ~72,000 km; the speed of light is ~300,000 km/sec. That works out to 240 milliseconds, at a minimum, for one-way packet delivery.
Remember that a geosynchronous satellite must orbit over the equator. Let's say for the sake of argument it's over Mexico, you're in New York, and the downlink station is in San Diego. The 36,000 km is the distance straight "down" to Mexico; it's probably more like 50,000 to New York and 45,000 to San Diego. And if you're in New York and your mail server is in New York, but the downlink was to San Diego, you've got another 4,000 across the country. Now you're up closer to 100,000 km.

Add to this some inefficient encoding done on satellites, and most (consumer) systems using a broadcast medium that can buffer packets, and you see why people report 1-second RTTs with services like StarBand.

It's better than nothing, but it's a rough primary connection.

-- Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
Remember that a geosynchronous satellite must orbit over the equator. Let's say for the sake of argument it's over Mexico, you're in New York, and the downlink station is in San Diego. The 36,000 km is the distance straight "down" to Mexico; it's probably more like 50,000 to New York and 45,000 to San Diego.
The radius of the earth is about 6,400 km. Geostationary orbit is, as you note, 36,000 km above the equator. The path from the satellite to the North Pole is the hypotenuse of a right triangle with legs of 6,400 km and (6,400+36,000) km. That gives a distance from the North Pole to the satellite of about 43,000 km. It's reasonable to conclude that the distance from either New York or San Diego is less than that.

-- Brett
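Brett's right-triangle figure works out as follows (a quick sketch using the same rounded radii):

```python
import math

r_earth_km = 6400.0      # earth radius, rounded
altitude_km = 36000.0    # geostationary altitude above the equator
# Hypotenuse of the right triangle with legs r and (r + altitude):
slant_to_pole_km = math.hypot(r_earth_km, r_earth_km + altitude_km)
print(round(slant_to_pole_km))  # ≈ 42880, i.e. roughly the 43,000 km quoted
```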
----- Original Message -----
From: "Leo Bicknell" <bicknell@ufp.org>
To: "Steven M. Bellovin" <smb@research.att.com>
Cc: "Tim Devries" <zsolutions@cogeco.ca>; <nanog@merit.edu>
Sent: Tuesday, February 26, 2002 5:27 PM
Subject: Re: Satellite latency
Remember that a geosynchronous satellite must orbit over the equator. Let's say for the sake of argument it's over Mexico, you're in New York, and the downlink station is in San Diego. The 36,000 km is the distance straight "down" to Mexico; it's probably more like 50,000 to New York and 45,000 to San Diego. And if you're in New York and your mail server is in New York, but the downlink was to San Diego, you've got another 4,000 across the country. Now you're up closer to 100,000 km.<<
Not wanting to get picky about ~20,000 km, but the maximum -usable- slant path is ~41,000 km.

--Michael
On Tue, Feb 26, 2002 at 09:07:14PM -0500, Steven M. Bellovin wrote:
In message <014d01c1bf3b$44bfca00$ea9a8d18@evilinc>, "Tim Devries" writes:
I think this question may have been asked before, but what is the minimum latency and delay I can expect from a satellite connection? What kind of delay have others seen in a working situation? What factors should be considered in end to end connectivity architecture when utilizing a satellite link?
Geosynchronous orbit is about 36,000 km above the equator. Round-trip to the satellite is ~72,000 km; the speed of light is ~300,000 km/sec. That works out to 240 milliseconds, at a minimum, for one-way packet delivery.
in my experience, the "normal" latency is somewhere around 600-800 ms (ping times).

however, you should also be aware of issues related to TCP window sizes. due to the windowing mechanism between the sending system (say, a web server in a farm connected with multiple OC192 connections) and the receiving system (say, a PC connected to a broadband infrastructure like @home), an individual TCP session (like SMTP) will reach a throughput limit, which means that even with a DS3 satellite connection, an individual TCP session won't see more than something like 600 Kbps throughput (i forget the actual number) if there is a satellite connection somewhere in the middle.

one time a client was complaining that they couldn't do 2 Mbit ftp's on their E1 satellite connection. they were under the impression that the link itself was being throttled. in order to demonstrate that the link itself could do 2 Mbit, i set up 10 or 20 concurrent ssh scp's, and then showed that the interface was doing an aggregate of 2 Mbit (well, a bit shy of that).

if you adjust the window size on the sending and receiving systems, you can improve this, but this solution is impractical, as you would need to get everyone on the internet (or at least all of the webservers and websurfers you are servicing) to make adjustments to their local TCP stack.

there are 3rd-party solutions which can improve the throughput, but even with those, there are still speed-of-light issues which will cause individual throughput limitations.

-- [ Jim Mercer jim@reptiles.org +1 416 410-5633 ]
[ I want to live forever, or die trying. ]
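The ceiling Jim describes is the classic rule that a single TCP session can move at most one receive window per round trip. A back-of-the-envelope sketch, assuming a 65,535-byte window (the largest advertisable without RFC1323 window scaling) and a 700 ms RTT taken from the middle of the range quoted above:

```python
# throughput ceiling ≈ receive window / round-trip time
window_bytes = 65_535    # max TCP window without RFC1323 window scaling
rtt_s = 0.7              # assumed RTT, middle of the 600-800 ms range above
ceiling_kbps = window_bytes * 8 / rtt_s / 1000
print(round(ceiling_kbps))  # ≈ 749 kbps, the same order as the ~600 Kbps cited
```

Note the ceiling is independent of the link's raw capacity, which is why the concurrent-transfer demonstration fills the E1 while any single transfer cannot.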
On Wed, 27 Feb 2002 09:17:37 EST, Jim Mercer said:
if you adjust the window size on the sending and receiving systems, you can improve this, but this solution is impractical, as you would need to get everyone on the internet (or at least all of the webservers and websurfers you are servicing) to make adjustments to their local TCP stack.
there are 3rd party solutions which can improve the throughput, but even with those, there are still speed of light issues which will cause individual throughput limitations.
Is RFC1323 support one such 3rd-party solution, or does it not solve all of the problem here?

-- Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
On Wed, Feb 27, 2002 at 09:59:36AM -0500, Valdis.Kletnieks@vt.edu wrote:
On Wed, 27 Feb 2002 09:17:37 EST, Jim Mercer said:
if you adjust the window size on the sending and receiving systems, you can improve this, but this solution is impractical, as you would need to get everyone on the internet (or at least all of the webservers and websurfers you are servicing) to make adjustments to their local TCP stack.
there are 3rd party solutions which can improve the throughput, but even with those, there are still speed of light issues which will cause individual throughput limitations.
RFC1323 support is a 3rd party solution, or does it not solve all the problem here?
it's been a while since i looked at it, but i seem to recall there was a lack of implementation/adherence to that RFC in windows TCP stacks. i think for RFC1323 to be effective, it needs to be working on both the sending and receiving systems, not just the intermediary routers.

-- [ Jim Mercer jim@reptiles.org +1 416 410-5633 ]
[ I want to live forever, or die trying. ]
On Wednesday, February 27, 2002, at 10:18 , Jim Mercer wrote:
it's been a while since i looked at it, but i seem to recall there was a lack of implementation/adherence to that RFC in windows TCP stacks.
I don't think that has been the case for a while, now.
i think for RFC1323 to be effective, it needs to be working on the sending and receiving systems, not just the intermediary routers.
RFC1323 can only be supported on TCP endpoints, so there's nothing you can or should do on intermediary routers.

There are good descriptions of general satellite transmission characteristics for IP, together with a recipe book of mechanisms which can improve TCP performance, in RFC2488. RFC2760 may also be interesting.

Joe
Jim Mercer wrote:
if you adjust the window size on the sending and receiving systems, you can improve this, but this solution is impractical, as you would need to get everyone on the internet (or at least all of the webservers and websurfers you are servicing) to make adjustments to their local TCP stack.
The receiver is the one that informs the sender how large a window it can accept, so this can be practical for a subscriber installation. It wouldn't be a good idea to park a bunch of servers behind one of these links, but any receiving node that set its TCP receive window to 2x the byte/sec capacity of the link should see decent throughput.

Tony
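Tony's rule of thumb can be made concrete with Jim's E1 example from upthread (a sketch; the 700 ms RTT is an assumption drawn from the figures earlier in the thread):

```python
link_bps = 2_048_000           # E1 link, from the ftp example upthread
rtt_s = 0.7                    # assumed GEO round-trip time
bytes_per_s = link_bps / 8
bdp_bytes = bytes_per_s * rtt_s    # bandwidth-delay product: minimum window to fill the pipe
tony_window = 2 * bytes_per_s      # Tony's rule: 2x the link's byte/sec rate
print(round(bdp_bytes), round(tony_window))   # 179200 512000
```

Both figures exceed the 65,535-byte limit of the basic TCP window field, so the receiver would need RFC1323 window scaling to actually advertise them.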
participants (8)
- Brett Frankenberger
- Jim Mercer
- Joe Abley
- Leo Bicknell
- Michael Painter
- Steven M. Bellovin
- Tony Hain
- Valdis.Kletnieks@vt.edu