The next broadband killer: advanced operating systems?
Windows Vista introduced a significant improvement to the TCP stack, Window Auto-Tuning, and next week Mac OS X Leopard will do the same. FreeBSD is committing TCP Socket Buffer Auto-Sizing in FreeBSD 7. I've also been told similar features are in the 2.6 kernel used by several popular Linux distributions.

Today a large number of consumer / web server combinations are limited to a 32k window size, which on a 60ms link across the country limits the speed of a single TCP connection to 533 kbytes/sec, or 4.2 Mbits/sec. Users with 6 and 8 Mbps broadband connections can't even fill their pipe on a software download.

With these improvements in both clients and servers, these systems may soon auto-tune to fill 100 Mbps (or larger) pipes. Related to our current discussion of BitTorrent clients being "unfair" by trying to use the entire pipe: will these auto-tuning improvements create the same situation?

-- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
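The arithmetic behind that 32k window claim is just the bandwidth-delay product: at most one window of data can be in flight per round trip. A quick sketch, using the 60 ms RTT and 32 KB window figures from the post (the slightly lower 4.2 Mbit figure above comes from using binary rather than decimal megabits):

```python
# Throughput ceiling imposed by a fixed TCP window size:
# at most one full window can be unacknowledged per round trip.
window_bytes = 32 * 1024   # common default window size
rtt = 0.060                # 60 ms coast-to-coast round trip

max_bytes_per_sec = window_bytes / rtt
max_mbits_per_sec = max_bytes_per_sec * 8 / 1_000_000

print(f"{max_bytes_per_sec / 1024:.0f} KB/s, {max_mbits_per_sec:.1f} Mbit/s")
# prints: 533 KB/s, 4.4 Mbit/s
```

Run the numbers the other way and a 100 Mbit/s path at 60 ms needs a window of roughly 750 KB, which is exactly what the auto-tuning stacks are designed to grow into.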
Interesting. I imagine this could have a large impact on the typical enterprise, where they might do large-scale upgrades in a short period of time. Does anyone know if there are any plans by Microsoft to push this out as a Windows XP update as well?

Sam

Leo Bicknell wrote:
Windows Vista introduced a significant improvement to the TCP stack, Window Auto-Tuning, and next week Mac OS X Leopard will do the same. FreeBSD is committing TCP Socket Buffer Auto-Sizing in FreeBSD 7. I've also been told similar features are in the 2.6 kernel used by several popular Linux distributions.
Today a large number of consumer / web server combinations are limited to a 32k window size, which on a 60ms link across the country limits the speed of a single TCP connection to 533 kbytes/sec, or 4.2 Mbits/sec. Users with 6 and 8 Mbps broadband connections can't even fill their pipe on a software download.
With these improvements in both clients and servers, these systems may soon auto-tune to fill 100 Mbps (or larger) pipes. Related to our current discussion of BitTorrent clients being "unfair" by trying to use the entire pipe: will these auto-tuning improvements create the same situation?
On Mon, 22 Oct 2007, Sam Stickland wrote:
Does anyone know if there are any plans by Microsoft to push this out as a Windows XP update as well?
You can achieve the same thing by running a utility such as TCP Optimizer. http://www.speedguide.net/downloads.php Turn on window scaling and increase the TCP window size to 1 meg or so, and you should be good to go.

The "only" thing this changes for ISPs is that all of a sudden, increasing the latency by 30-50ms by buffering in a router with a full link won't help much; end-user machines will be able to cope with that and still use the bandwidth. So if you want to make the gamers happy, you might want to look at that WRED drop profile one more time with this in mind, if you're in the habit of congesting your core regularly.

-- Mikael Abrahamsson email: swmike@swm.pp.se
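The per-application analogue of that system-wide registry tweak is requesting a larger buffer on an individual socket. A hedged sketch; note that on Linux the kernel typically doubles the requested value for bookkeeping overhead and clamps the request at net.core.rmem_max, so the value read back rarely matches the request:

```python
import socket

# Ask for a 1 MB receive buffer on one socket, roughly the
# per-socket equivalent of what TCP Optimizer sets system-wide.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024 * 1024)

# See what the kernel actually granted.
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"granted receive buffer: {granted} bytes")
s.close()
```

This is also why blindly cranking the setting helps: a larger buffer lets the window keep growing through the extra 30-50ms of queueing delay a congested router adds.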
In a message written on Mon, Oct 22, 2007 at 06:42:48PM +0200, Mikael Abrahamsson wrote:
You can achieve the same thing by running a utility such as TCP Optimizer.
http://www.speedguide.net/downloads.php
Turn on window scaling and increase the TCP window size to 1 meg or so, and you should be good to go.
A bit of a warning: this is not exactly the same thing. With the method listed above, the system may buffer up to 1 meg for each active TCP connection. Have 50 people connect to your web server via dialup and the kernel may eat up 50 meg of memory trying to serve them. That's why the OS defaults have been so low for so long.

The auto-tuning method I referenced dynamically changes the size of the window based on the free memory and the speed of the client, allowing an individual client to get as big a window as it needs while ensuring fairness. On a single-user system with a single TCP connection they both do the same thing. On a very busy web server the first may make it fall over; the second should not. YMMV.

-- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
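A toy model of why auto-tuning sidesteps the memory problem: the buffer a connection actually needs is its bandwidth-delay product, which for a dialup client is tiny. The helper and numbers here are illustrative, not any OS's real algorithm:

```python
def buffer_needed(link_bps, rtt_s, cap=1024 * 1024):
    """Bandwidth-delay product, clamped to a per-socket cap."""
    return min(int(link_bps / 8 * rtt_s), cap)

# A dialup client vs. a fast long-fat-network path.
dialup = buffer_needed(56_000, 0.150)       # 56 kbit/s, 150 ms RTT
lfn = buffer_needed(100_000_000, 0.060)     # 100 Mbit/s, 60 ms RTT

print(f"dialup connection needs ~{dialup} bytes")      # ~1050 bytes
print(f"fast cross-country path wants ~{lfn} bytes")   # ~750000 bytes
```

With static 1 MB buffers, 50 dialup clients pin 50 MB of kernel memory while actually using about 50 KB of it; a BDP-sized allocation serves both kinds of client without the waste.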
Mikael Abrahamsson wrote:
On Mon, 22 Oct 2007, Sam Stickland wrote:
Does anyone know if there are any plans by Microsoft to push this out as a Windows XP update as well?
You can achieve the same thing by running a utility such as TCP Optimizer.
http://www.speedguide.net/downloads.php
Turn on window scaling and increase the TCP window size to 1 meg or so, and you should be good to go.
The "only" thing this changes for ISPs is that all of a sudden, increasing the latency by 30-50ms by buffering in a router with a full link won't help much; end-user machines will be able to cope with that and still use the bandwidth. So if you want to make the gamers happy, you might want to look at that WRED drop profile one more time with this in mind, if you're in the habit of congesting your core regularly.
I've already hand-adjusted the default TCP window size on my machine, and it made quite a noticeable difference to my transfer rates from my own tuned servers. From this little bit of evidence I can brazenly extrapolate to suggest that maximum bandwidth consumption is currently limited to some noticeable degree by the lack of widely deployed TCP window size tuning. Links that are currently uncongested might suddenly see a sizable amount of extra traffic.

I'm concerned that if Microsoft were to post this as a patch to Windows XP/2003 then we would see the effects of this "all at once", instead of the gradual process of Vista deployment. Anyone agree?

Sam
On Tue, 23 Oct 2007, Sam Stickland wrote:
servers. From this little bit of evidence I can brazenly extrapolate to suggest that maximum bandwidth consumption is currently limited to some noticeable degree by the lack of widely deployed TCP window size tuning. Links that are currently uncongested might suddenly see a sizable amount of extra traffic.
So, do we think that traffic will have a higher peak due to this (more traffic at peak time compared to low time), or that people will actually transfer more data because they get higher throughput? I don't see it as natural that people will transfer more data in total just because they get higher throughput.

-- Mikael Abrahamsson email: swmike@swm.pp.se
On Tue, Oct 23, 2007, Sam Stickland wrote:
I'm concerned that if Microsoft were to post this as a patch to Windows XP/2003 then we would see the effects of this "all at once", instead of the gradual process of Vista deployment. Anyone agree?
You need both ends to have large buffers so TCP window sizes can grow. So a few possibilities:

* You're running content servers, but you're on training wheels and just not aware of this. Windows default sizes are small, so you never notice, as you never grow enough TCP windows to fill your set buffer size. These guys would notice if Windows XP were patched to use larger/adaptive buffering.

* You're cluey and already have window sizes tuned; these guys won't notice any client changes, as they'll never negotiate a larger window.

* You're cluey and have assumed X% of your customers (say, the ones >100ms away from you) have fixed their window sizes, and hedge your bets on that. This group is analogous to network engineering based on current, not future, use - possibly saving money in the short term, but that money is probably going to fund the executive bonuses rather than be put away safely for days like what you're suggesting.

* .. caveat to the above: until Linux goes and does what Linux does best and changes system defaults, enabling adaptive socket buffers by default during a minor version increment. Anyone remember ECN? :P Then even some cluey server admins will cry in pain a little.

* I don't think the proposals are changing TCP congestion avoidance/etc, are they? It's easily solvable - just drop the window sizes.

In fact, I think the window size increase/adaptive window size stuff would be much more useful for P2P over LFNs than for average website -> client traffic. General HTTP page traffic at the moment doesn't hit the window size before the reply has completed. Sites serving larger content than HTML+images (say, YouTube, music sites, etc) would've already given this some thought and fixed their servers to not run out of RAM so easily. Those are on a CDN anyway..

It's not just that; there are other things to worry about than just numsockets * (send size + receive size) memory possibly being consumed, but it's a good starting point.

2c,

Adrian
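One concrete reason both ends matter: the TCP header's window field is only 16 bits, so advertising more than 64 KB requires the RFC 1323 window scale option, and the shift factor is fixed when the connection is set up. A sketch of the minimum shift a given window needs (illustrative helper, not kernel code):

```python
def wscale_needed(window_bytes):
    """Smallest RFC 1323 window-scale shift able to advertise the
    given window; the bare 16-bit field holds at most 65535, and
    the shift itself is capped at 14 by the RFC."""
    shift = 0
    while (65535 << shift) < window_bytes and shift < 14:
        shift += 1
    return shift

print(wscale_needed(32 * 1024))    # 0: fits the bare 16-bit field
print(wscale_needed(1024 * 1024))  # 5: 65535 << 4 falls just short of 1 MiB
```

Since the option only appears in the SYN exchange, a server that never offers it pins every client to 64 KB windows no matter how cluey the client is, which is the "both ends" point above.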
Adrian Chadd wrote:
On Tue, Oct 23, 2007, Sam Stickland wrote:
I'm concerned that if Microsoft were to post this as a patch to Windows XP/2003 then we would see the effects of this "all at once", instead of the gradual process of Vista deployment. Anyone agree?
You need both ends to have large buffers so TCP window sizes can grow.
So a few possibilities:
* you're running content servers but you're on training wheels and you're just not aware of this. Windows default sizes are small, so you never notice as you never grow enough TCP windows to fill your set buffer size. These guys would notice if Windows XP was patched to use larger/adaptive buffering.
Yes. I was imagining a scenario where released patches mean that currently untuned servers and clients suddenly start adaptively tuning their TCP window sizes. According to the Web100 website (www.web100.org), their automatic TCP buffer tuning has already been merged into mainline Linux kernels. If Microsoft released an XP patch that enabled all the Windows-based clients out there to take advantage of this, then there could be a lot of surprised faces.
* .. caveat to the above: until Linux goes and does what Linux does best and change system defaults; enabling adaptive socket buffers by default during a minor version increment. Anyone remember ECN? :P Then even some cluey server admins will cry in pain a little.
Is the adaptive buffer tuning in Linux not enabled by default?
* I don't think the proposals are changing TCP congestion avoidance/etc, are they?
Not as far as I know.
It's easily solvable - just drop the window sizes. In fact, I think the window size increase/adaptive window size stuff would be much more useful for P2P over LFNs than for average website -> client traffic. General HTTP page traffic at the moment doesn't hit the window size before the reply has completed. Sites serving larger content than HTML+images (say, YouTube, music sites, etc) would've already given this some thought and fixed their servers to not run out of RAM so easily. Those are on a CDN anyway..
True. It would still be interesting to know if Microsoft are planning on patching all XP boxes to support this anytime soon though ;)

Sam
On 10/21/07, Leo Bicknell <bicknell@ufp.org> wrote:
Windows Vista introduced a significant improvement to the TCP stack, Window Auto-Tuning, and next week Mac OS X Leopard will do the same. FreeBSD is committing TCP Socket Buffer Auto-Sizing in FreeBSD 7. I've also been told similar features are in the 2.6 kernel used by several popular Linux distributions.
Today a large number of consumer / web server combinations are limited to a 32k window size, which on a 60ms link across the country limits the speed of a single TCP connection to 533 kbytes/sec, or 4.2 Mbits/sec. Users with 6 and 8 Mbps broadband connections can't even fill their pipe on a software download.
With these improvements in both clients and servers, these systems may soon auto-tune to fill 100 Mbps (or larger) pipes. Related to our current discussion of BitTorrent clients being "unfair" by trying to use the entire pipe: will these auto-tuning improvements create the same situation?
I can see "advanced operating systems" consuming much more bandwidth in the near future than is currently the case, especially with the web 2.0 hype. In the not-so-distant future I imagine an operating system whose interface is powered purely by AJAX, JavaScript and some Flash, with the kernel being a mix of a Mozilla engine and the necessary core elements to manage the hardware.

This "down to earth" construction of the operating system interface would allow it to potentially be offloaded onto a central server, allowing for really quick, seamless deployment of updates and security policies, as well as reducing the necessary size of client machine hard drives. Not only that, but it would allow the operating system to easily accept elements from web pages as replacements for core features, or as additions to existing features (such as replacing the tray clock with a more advanced clock written in JavaScript on a web page, placed by a simple drag and drop of the code snippet). Such integration would also open the possibility of applications built purely from a mixture of web elements drawn from various pages.

Naturally such an operating environment would have much more intense bandwidth consumption requirements, but at the same time I can see this becoming reality in the near future....
On Mon, 22 Oct 2007 19:39:48 PDT, Hex Star said:
I can see "advanced operating systems" consuming much more bandwidth in the near future than is currently the case, especially with the web 2.0 hype.
You obviously have a different concept of "near future" than the rest of us, and you've apparently never been on the pushing end of a software deployment where the pulling end doesn't feel like pulling. I suggest you look at the uptake rate on Vista and various Linux distros and think about how hard it will be to get people to run something *really* different.
the operating system interface will allow it to potentially be offloaded onto a central server allowing for really quick seamless deployment of updates and security policies as well as reducing the necessary size of client machine hard drives. Not only this but it'd
I hate to say it, but Microsoft's Patch Tuesday probably *is* already pretty close to "as good as we can make it for real systems". Trying to do *really* seamless updates is a horrorshow, as any refugee from software development for telco switches will testify. (And yes, I spent enough time as a mainframe sysadmin to wish for the days where you'd update once, and all 1,297 online users got the updates at the same time...)

Also, the last time I checked, operating systems were growing more slowly than hard drive capacities. So trying to reduce the size is really a fool's errand, unless you're trying to hit a specific size point (for example, once it gets too big to fit on a 700M CD and you decide to go to DVD, there really is *no* reason to scrimp until you're trying to get it in under 4.7G).

You want to make my day? Come up with a way that Joe Sixpack can *back up* that 500 gigabyte hard drive that's in a $600 computer (in other words, if that backup scheme costs Joe much more than $50, *it won't happen*).
On Tue, Oct 23, 2007 at 01:10:45AM -0400, Valdis.Kletnieks@vt.edu wrote:
You want to make my day? Come up with a way that Joe Sixpack can *back up* that 500 gigabyte hard drive that's in a $600 computer (in other words, if that backup scheme costs Joe much more than $50, *it wont happen*).
Having ratio issues? Offer a 'free' backup client so your customers can upload their bits somewhere else. If the disk is cheap enough and you come up with a backend system to distribute it appropriately, this could easily become something valuable to offer to consumers. And if you're a network that pushes traffic in one direction, it's a chance to 'even' it out. Good luck due to the lower upload speeds, though (how long to rsync that new 2G CF card of family photos @ 128k?).

- Jared

-- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
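Jared's parenthetical is easy to quantify. A quick sketch, assuming a 2 GB card (decimal gigabytes) and a 128 kbit/s uplink running flat out:

```python
card_bytes = 2 * 10**9   # a "2G" CF card of family photos
uplink_bps = 128_000     # 128 kbit/s upstream, fully utilized

seconds = card_bytes * 8 / uplink_bps
print(f"{seconds / 3600:.1f} hours")   # ~34.7 hours for one card
```

Well over a day per card before any rsync or protocol overhead, which is the point: consumer upstream, not disk cost, is what makes remote backup painful.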
participants (7)
- Adrian Chadd
- Hex Star
- Jared Mauch
- Leo Bicknell
- Mikael Abrahamsson
- Sam Stickland
- Valdis.Kletnieks@vt.edu