Paul A Vixie <paul@vix.com> writes:
the fratricide thing someone else mentioned earlier today.
That would be Frank Kastenholz, who I am pleased to discover slumming here.
this hurts our benchmark numbers but helps the backbones (where i came from) and the origin servers (where some of my friends are). quite the dilemma.
Frankly, the backbones could care less these days. Heavily decorated micropackets are becoming less and less toxic; at least one implementation is known to smile and ask for more at OC12 rates, and another has hardware that can probably do this too. Magic flow-based switching schemes that open VCs and so forth might be happier, but I don't know of any actually deployed in a "backbone" per se.

Tli was just pointing out n messages ago that no matter how well you do in terms of aggregating data traffic into bigger chunks, you will still see an enormous number of small packets around (ACKs). You have to be prepared to switch those at line rate; engineering for some statistical mix of big and small packets is asking for a disaster when someone suddenly goes simplex.

There is, however, the spectre of there being so many SYNs flying around that they alone might cause congestion collapse. I dunno if I should be frightened of that or not, but I am not one of your origin server friends. --:)

Finally, could you explain the "benchmark" comment a bit?

Sean.
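To put the line-rate point in rough numbers (a back-of-the-envelope sketch, assuming the nominal OC-12 rate of 622.08 Mbit/s and ignoring SONET framing overhead, so the real forwarding budget is a bit lower):

```python
# Rough estimate of how many minimum-size TCP packets an OC-12 link
# can carry per second, assuming the nominal 622.08 Mbit/s line rate
# and a 40-byte header-only packet (a bare ACK or SYN: 20 B IP + 20 B TCP).
LINE_RATE_BPS = 622_080_000   # OC-12 nominal rate, bits per second
PACKET_BITS = 40 * 8          # 40-byte packet

pps = LINE_RATE_BPS / PACKET_BITS
print(f"~{pps / 1e6:.2f} million 40-byte packets per second")
# -> ~1.94 million packets per second that a switch must be able to
#    forward if the traffic mix suddenly goes all-small-packet.
```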
There is, however, the spectre of there being so many SYNs flying around that they alone might cause congestion collapse. I dunno if I should be frightened of that or not, but I am not one of your origin server friends. --:)
i'm not worried about the syns so much as i am worried about the lack of interstream resource planning. in all of the popular desktop stacks, a new tcp stream does its own slow start (not paying any attention to the aggregate bandwidth*delay when several streams are open to the same origin server). this means every new tcp session has to sense the available bandwidth*delay, causing the other tcp sessions toward the same origin to have to back off and try to find the new equilibrium. and then, wonder of wonders, it's time to close all of those connections because the user has clicked "stop" after getting bored waiting for those GIFs to populate, and has clicked on something else, so let's start this whole stupid process over again with some other origin server.

persistent http helps this a little. aggregation through proxies -- even if no caching is done -- will help it a little more. t/tcp would help some. desktop tcp stack fixes to remember end-to-end bandwidth*delay between connections, and to treat end-to-end bandwidth*delay as an aggregate to be shared between simultaneous connections from/to the same place (or to just stop doing that stupid parallelism in favour of one http/1.1 persistent connection) would also help.
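A minimal sketch of the kind of inter-stream sharing being described here (hypothetical names and numbers; a real stack would keep this state in the kernel's TCP control blocks, not in application code): connections to the same origin draw shares of one remembered bandwidth*delay estimate instead of each probing the path from scratch.

```python
# Hypothetical sketch: one shared "pipe" estimate per origin host,
# divided among the simultaneous connections to that host, instead of
# every connection running its own independent slow start.

class OriginState:
    def __init__(self, initial_estimate=4380):   # bytes; assumed starting value
        self.pipe_estimate = initial_estimate     # remembered bandwidth*delay
        self.connections = 0

class SharedCongestionState:
    def __init__(self):
        self.origins = {}                         # origin host -> OriginState

    def open_connection(self, origin):
        state = self.origins.setdefault(origin, OriginState())
        state.connections += 1
        # A new connection starts from its share of the remembered estimate
        # rather than forcing every sibling to re-find the equilibrium.
        return state.pipe_estimate // state.connections

    def close_connection(self, origin, final_cwnd):
        state = self.origins[origin]
        concurrent = state.connections
        state.connections -= 1
        # Remember roughly what the path could carry in aggregate,
        # so the next burst of clicks doesn't start from zero.
        state.pipe_estimate = max(state.pipe_estimate, final_cwnd * concurrent)
```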
Finally, could you explain the "benchmark" comment a bit?
this was in specific reference to my product's connection quota for each origin server, and the fact that if we intercept too many simultaneous connections to a given origin server, we just delay the ones for which no open connection is available.
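The quota idea, roughly (a hedged sketch with hypothetical names and an assumed quota value, not the product's actual logic): excess connections toward a busy origin are simply delayed until a slot opens, rather than refused.

```python
import asyncio

# Hypothetical sketch of a proxy-side per-origin connection quota:
# at most MAX_CONNS_PER_ORIGIN intercepted connections are forwarded
# to a given origin server at once; the rest wait their turn.
MAX_CONNS_PER_ORIGIN = 8   # assumed quota, not the product's real number

_quotas = {}   # origin host -> asyncio.Semaphore

def _quota_for(origin):
    return _quotas.setdefault(origin, asyncio.Semaphore(MAX_CONNS_PER_ORIGIN))

async def fetch_through_proxy(origin, request_fn):
    # Delay (rather than reject) requests for which no open
    # connection slot toward the origin is currently available.
    async with _quota_for(origin):
        return await request_fn()
```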
At 09:50 PM 2/7/98 -0800, Paul A Vixie wrote:
There is, however, the spectre of there being so many SYNs flying around that they alone might cause congestion collapse. I dunno if I should be frightened of that or not, but I am not one of your origin server friends. --:)
i'm not worried about the syn's so much as i am worried about the lack of interstream resource planning. in all of the popular desktop stacks, a new tcp stream does its own slow start...
This isn't a desktop stack issue; it's a server stack issue. Even though the desktop opens the connections, the server is the one sending all the data, and so the server is the one doing all the slow start, congestion avoidance/control, etc, etc.
At 09:11 PM 2/7/98 -0800, Sean M. Doran wrote:
Tli was just pointing out n messages ago that no matter how well you do in terms of aggregating data traffic into bigger chunks, you still will see an enormous number of small packets around (ACKs). You have to be prepared to switch those at line rate; engineering for some statistical mix of big and small packets is asking for a disaster when someone suddenly goes simplex.
some of the histograms i've seen show close to 50% of the packets being 40 bytes long. the 'desired' tcp behavior is to have no more than 2 data packets for every ack (since congestion control uses ack-reception to pace the transmission of data and try to quickly detect losses).
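As a rough check on that mix (a back-of-the-envelope calculation, assuming 1500-byte data packets and one delayed ACK per two data segments):

```python
# Rough packet-count mix for a bulk transfer with delayed ACKs:
# one 40-byte ACK for every two full-size data segments.
data_pkts, ack_pkts = 2, 1
ack_fraction = ack_pkts / (data_pkts + ack_pkts)
print(f"ACKs are ~{ack_fraction:.0%} of packets in a steady bulk transfer")
# -> ~33%.  Short web connections add SYN, SYN-ACK, FIN and lone ACKs,
#    all 40-byte packets, which pushes the small-packet share toward
#    the ~50% seen in the histograms.
```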
There is, however, the spectre of there being so many SYNs flying around that they alone might cause congestion collapse. I dunno if I should be frightened of that or not
you should be. not because of the packet-load it causes (as tony pointed out, you have to be able to move 40-byte packets at 'fiber speed') but because it's a symptom of lots of short-lived tcp connections. these connections never get out of slowstart. when there is only a small number of them, it's not important. when there is a large number of them, you have large, non-congestion-controlled, data flows. it's called being nibbled to death by mice.
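A small worked example of the "mice" point (assuming a 10 KB object, a 1460-byte MSS, an initial window of one segment, and classic slow start doubling each round trip):

```python
# How much of a typical short web transfer ever sees congestion control?
MSS = 1460
object_bytes = 10 * 1024
segments_needed = -(-object_bytes // MSS)   # ceiling division -> 8 segments

cwnd, sent, rtts = 1, 0, 0
while sent < segments_needed:
    sent += min(cwnd, segments_needed - sent)   # send a window's worth
    cwnd *= 2                                   # slow start: double each RTT
    rtts += 1

print(f"{segments_needed} segments delivered in {rtts} round trips")
# -> 8 segments in 4 RTTs (1 + 2 + 4 + 1): the whole transfer fits inside
#    slow start, so the connection closes before congestion control has
#    ever constrained it -- exactly the non-congestion-controlled "mice".
```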