There are other techniques, such as split-TCP and Snoop, that can be applied in place of a POP-based Web cache. On the other hand, RAS/NAS servers that are under-buffered are not part of the solution space.

trenchingly yours, peter
-----Original Message-----
From: Phil Howard [SMTP:phil@charon.milepost.com]
Sent: Friday, February 06, 1998 7:32 PM
To: ltd@interlink.com.au
Cc: nanog@merit.edu
Subject: Re: MTU of the Internet?
perhaps this is one of the not-so-obvious benefits of running a web proxy cache such as squid. the greater internet can have larger packets floating around, and the local proxy at the ISP can deal with horrible tcp stacks, retransmissions, and client machines with small receive buffer sizes.
And imagine having 2 interfaces on this machine, one with MTU=1500 and one with MTU=576.
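To make the difference between the two MTUs concrete, here is a back-of-the-envelope sketch (in Python, not from the original thread). The 40-byte header overhead assumes IPv4 and TCP with no options; real connections may negotiate a different MSS.

```python
# Rough arithmetic on why MTU matters: the TCP payload per packet (MSS)
# is the MTU minus 20 bytes of IP header and 20 bytes of TCP header,
# assuming IPv4 with no IP or TCP options.
def mss(mtu):
    return mtu - 40

def packets_needed(transfer_bytes, mtu):
    # ceiling division: partial final segment still costs a packet
    return -(-transfer_bytes // mss(mtu))

# A 64 KB object needs far fewer packets on the MTU=1500 interface
# than on the MTU=576 one:
print(packets_needed(64 * 1024, 1500))  # 45
print(packets_needed(64 * 1024, 576))   # 123
```

Every packet carries the same 40 bytes of header overhead, so the small-MTU interface also wastes proportionally more bandwidth on headers, on top of the per-packet processing cost.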
--
Phil Howard | phil at milepost dot com
> perhaps this is one of the not-so-obvious benefits of running a web proxy
> cache such as squid. the greater internet can have larger packets floating
> around, and the local proxy of the ISP can deal with horrible tcp stacks,
> retransmissions and client machine with small receive buffer sizes.
what we did in our transparent web cache was to always try to use persistence when talking to origin servers, fix everything we could fix in our TCP stack, and use a quota so that we would only talk to the same origin server N times in parallel.

this means that when clients disconnect from what they think of as the origin server after 15 seconds of inactivity, and then (happens a lot!) reconnect and grab something else, their requests are interleaved onto one of our persistent connections to the origin server. it also means that if too many clients try (doesn't happen often) to use our quota of connections to an origin server, some of them have to wait for a slot on one of our persistent connections.

we will ultimately time out or LRU our origin connections, but while we have them open, TCP's window size and RTT estimates are more accurate than when a bazillion new connections keep coming up and doing slow start and the fratricide thing someone else mentioned earlier today.

this hurts our benchmark numbers but helps the backbones (where i came from) and the origin servers (where some of my friends are). quite the dilemma.
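The per-origin quota with persistent-connection reuse described above can be sketched roughly as follows. This is an illustrative Python model, not the actual cache implementation: the class name `OriginPool`, the quota value, and the tuple stand-ins for real TCP connections are all assumptions.

```python
import threading
from collections import defaultdict, deque

class OriginPool:
    """Sketch of a per-origin connection quota: at most `quota`
    connections to any one origin server are checked out at once;
    extra client requests wait for a slot on a persistent connection."""

    def __init__(self, quota=4):
        self.quota = quota
        self.lock = threading.Lock()
        self.slot_free = threading.Condition(self.lock)
        self.in_use = defaultdict(int)    # origin -> connections checked out
        self.idle = defaultdict(deque)    # origin -> idle persistent connections

    def acquire(self, origin):
        """Check out a connection to `origin`, reusing an idle persistent
        one when available; block while the quota is exhausted."""
        with self.slot_free:
            while not self.idle[origin] and self.in_use[origin] >= self.quota:
                self.slot_free.wait()
            if self.idle[origin]:
                conn = self.idle[origin].popleft()   # interleave onto existing conn
            else:
                conn = ("conn-to", origin)           # stand-in for a real TCP connect
            self.in_use[origin] += 1
            return conn

    def release(self, origin, conn):
        """Return the connection to the idle list so later requests are
        interleaved onto it instead of opening a fresh connection."""
        with self.slot_free:
            self.in_use[origin] -= 1
            self.idle[origin].append(conn)
            self.slot_free.notify()

pool = OriginPool(quota=2)
c1 = pool.acquire("origin.example")
c2 = pool.acquire("origin.example")   # second and last slot of the quota
pool.release("origin.example", c1)
c3 = pool.acquire("origin.example")   # reuses c1's persistent connection
assert c3 is c1
```

Because the connection objects survive across client requests, the origin sees long-lived flows whose congestion window and RTT estimates stay warm, which is the point of the scheme; a real implementation would also age connections out by idle timeout or LRU as the post describes.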
participants (2)
- Paul A Vixie
- Peter Ford