Adjusting TCP windows on production systems?
Is there anyone in a production environment who, as part of their system build process, adjusts the TCP receive window/MSS/etc. on production systems? I'm dealing with a few latency issues and the MSS settings improve them, but I'm hesitant to suggest it unless there's something I can point to. -Dave
We do on some systems that do bulk data transfer over links with latency (latency being 70 ms cross country).
Temkin, David wrote:
Is there anyone in a production environment who, as part of their system build process, adjusts the TCP receive window/MSS/etc. on production systems?
I'm dealing with a few latency issues and the MSS settings improve them, but I'm hesitant to suggest it unless there's something I can point to.
-Dave
Hi, Dave.

] Is there anyone in a production environment who, as part of their system
] build process, adjusts the TCP receive window/MSS/etc. on production
] systems?

Increasing it helps, particularly if both ends have the same setting. Don't forget to enable both RFC1323 and RFC2018 support if you go beyond the 64K mark. Take a look here for some (slightly dated *sigh*) suggestions along these lines.

<http://www.cymru.com/Documents/ip-stack-tuning.html>

As with anything, don't simply "crank it up to 11" because it can be done. Plan, tune, measure, repeat. :)

Thanks,
Rob.
--
Rob Thomas
http://www.cymru.com
ASSERT(coffee != empty);
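Rob's "plan, tune, measure" advice starts with sizing: the window you need to keep a path full is its bandwidth-delay product. A quick back-of-the-envelope sketch (the 100 Mbit/s figure is just an illustrative assumption; the 70 ms RTT is the cross-country latency mentioned earlier in the thread):

```shell
# Bandwidth-delay product: the receive window needed to keep a path full.
BANDWIDTH_BPS=100000000   # assumed 100 Mbit/s path
RTT_MS=70                 # cross-country RTT cited in the thread
BDP_BYTES=$(( BANDWIDTH_BPS / 8 * RTT_MS / 1000 ))
echo "window needed: ${BDP_BYTES} bytes"
```

At 875000 bytes this is well past the 64K mark, which is why window scaling (RFC1323) has to be on for a path like this.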
On Tue, 30 Sep 2003 15:44:03 -0400 "Temkin, David" <temkin@sig.com> wrote:
Is there anyone in a production environment who, as part of their system build process, adjusts the TCP receive window/MSS/etc. on production systems?
Look at http://www.internet2.edu/~shalunov/writing/tcp-perf.html
I'm dealing with a few latency issues and the MSS settings improve them, but I'm hesitant to suggest it unless there's something I can point to.
-Dave
Marshall
On Tue, 30 Sep 2003 15:44:03 -0400 "Temkin, David" <temkin@sig.com> wrote:
Is there anyone in a production environment who, as part of their system build process, adjusts the TCP receive window/MSS/etc. on production systems?
As a concrete data point:
Marshall Eubanks wrote:
the tuning below on Solaris 9 increased the TCP data transfer rates from SJC to NYC by a factor of 10: (i.e. from 2 to about 20Mbps with ftp; 1 to 11Mbps with scp.)
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_wscale_always 1
ndd -set /dev/tcp tcp_max_buf 10485760
On Tue, 30 Sep 2003 15:44:03 -0400 "Temkin, David" <temkin@sig.com> wrote:
Is there anyone in a production environment who, as part of their system build process, adjusts the TCP receive window/MSS/etc. on production systems?
As a concrete data point:
Just as a heads up - this sort of tuning should not be done on things like web servers that support lots of concurrent connections - you'll eat all your memory for sockets.
Marshall Eubanks wrote:
the tuning below on Solaris 9 increased the TCP data transfer rates from SJC to NYC by a factor of 10: (i.e. from 2 to about 20Mbps with ftp; 1 to 11Mbps with scp.)
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_wscale_always 1
ndd -set /dev/tcp tcp_max_buf 10485760
Steve Francis wrote:
Just as a head up - this sort of below should not be done on things like web servers that support lots of concurrent connections - you'll eat all your memory for sockets.
Unless you have something like FreeBSD's auto-tuning inflight window stuff which would allow large windows to be allocated only when the delay*bandwidth product justifies it.
Pete
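For reference, the knob Pete appears to be describing is FreeBSD's TCP inflight bandwidth-delay limiting. The sysctl names below are from the FreeBSD 4.x/5.x era and should be treated as assumptions to verify against your release:

```shell
# Enable FreeBSD's inflight bandwidth-delay product limiting, which sizes
# the window per connection instead of allocating large buffers globally.
# (Sysctl names from the 4.x/5.x era - verify on your release.)
sysctl net.inet.tcp.inflight_enable=1
```

This addresses Steve's memory concern: a busy server with many short connections never sees the large windows that a single long-haul bulk transfer gets.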
Marshall Eubanks wrote:
On Tue, 30 Sep 2003 15:44:03 -0400 "Temkin, David" <temkin@sig.com> wrote:
Is there anyone in a production environment who, as part of their system build process, adjusts the TCP receive window/MSS/etc. on production systems?
As a concrete data point: the tuning below on Solaris 9 increased the TCP data transfer rates from SJC to NYC by a factor of 10: (i.e. from 2 to about 20Mbps with ftp; 1 to 11Mbps with scp.)
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_wscale_always 1
ndd -set /dev/tcp tcp_max_buf 10485760
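For anyone running Linux rather than Solaris, rough equivalents of Marshall's ndd settings would be the following sysctls (names from the 2.4/2.6 kernels; the 400000 and 10485760 values simply mirror his Solaris numbers, not a recommendation for every workload):

```shell
# Rough Linux analogues of the Solaris ndd tuning above.
sysctl -w net.core.rmem_max=10485760                  # like tcp_max_buf (receive)
sysctl -w net.core.wmem_max=10485760                  # like tcp_max_buf (send)
sysctl -w net.ipv4.tcp_rmem="4096 400000 10485760"    # min / default / max receive buffer
sysctl -w net.ipv4.tcp_wmem="4096 400000 10485760"    # min / default / max send buffer
sysctl -w net.ipv4.tcp_window_scaling=1               # RFC1323, like tcp_wscale_always
sysctl -w net.ipv4.tcp_sack=1                         # RFC2018 SACK, per Rob's note
```

Steve's caveat applies here too: raising the default buffer affects every socket on the box, so on a busy server it is safer to raise only the max and let applications opt in with setsockopt().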
participants (5)
- Marshall Eubanks
- Petri Helenius
- Rob Thomas
- Steve Francis
- Temkin, David