On Fri, 26 Oct 2001 10:06:13 PDT, Dave Siegel said:
> We aren't talking about 5 ping packets as part of path MTU discovery.
> We aren't even talking about 5 ping packets sent as part of a ping
> triangulation in response to an HTTP request.
>
> We're talking about intentional measurement of a network, on a scale
> large enough to concern a network administrator.
The *original* posting was regarding 441 packets in two hours. That's
WELL within the "as part of PMTU" or "triangulation" level. Remember
that simple PMTU discovery is "measurement of a network" too.

My point is simply that some people have a concept of "scale large
enough to concern" that needs to be re-evaluated in today's Internet.
People are just going to have to get used to the idea that in the
future, there will likely be a 5% or 10% overhead of measurement
packets for any content transfer. Remember that IPv6 specifies PMTU
discovery as required, so you'll be seeing more of that in the future.

Also, people are *vastly* overstating the "but Akamai/Digital
Island/etc. will fail to scale" argument. Let's THINK about it for a
moment:

1) If nobody from your site is contacting the provider in question,
you don't see traffic - it's not worth it to them to probe if there's
no traffic. As the original note from Digital Island said:
> Our network was pinging your system because it appeared to be a name
> server with a sufficient number of resolution requests for our
> customer web sites to be placed on the list of network nodes to be
> constantly observed for Internet congestion.
OK? See? You don't *get* probed until the traffic is ALREADY at a
threshold high enough that it's worth it. And that makes sense - 15
measurement packets for a 10-packet flow is just plain stupid. You
don't want to measure until you have enough traffic to make it worth
it.

2) If there *is* traffic sufficient to make it worth it, any
intelligent measurement system will re-use the values for your net for
every ADDITIONAL user - making it scale BETTER.

So... let's say *everybody* deployed something like this. Let's say I
start watching the traffic out of our 2 /16s. One of a VERY few things
will happen:

a) The other end has such a system, but our 2 /16s don't talk to it
often enough to make it worth it. This has infinite scalability.

b) For *each* site we contact that *does* have enough traffic to make
it worth it, we incur measurement. This *still* scales better than the
actual traffic, because once we pass the threshold, our pipe has to
get fatter to handle the data, but the measurement traffic is fixed.
So (for instance) if the threshold is "enough traffic that measurement
is 4% overhead", then if our traffic doubles, the overhead drops to
2%. This scales.

The bottom line is that if you operate with just 2 rules:

1) You don't do measurement until you have a critical mass of traffic
from one prefix, so the overhead at that traffic level is a known
"reasonable" amount (say, 2% - it can't be TOO high in any case,
simply because you don't want measurement traffic swamping any gains
you make).

2) You use the measurement for *all* traffic from the prefix.

then you can easily show that for *any* amount of traffic, you will
never go over the overhead listed in 1 (the sketch below makes the
arithmetic concrete).

So what was the "it doesn't scale" issue again?
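To make that concrete, here's a minimal sketch of the two rules in
Python. Every name in it (ProbeScheduler, PROBE_COST, MAX_OVERHEAD) is
hypothetical - this illustrates the bounded-overhead argument, not
Digital Island's or anybody else's actual implementation:

    # Minimal sketch of rules 1 and 2 above. All names are hypothetical.
    PROBE_COST = 441      # fixed probe packets per prefix per window
                          # (the figure from the original posting)
    MAX_OVERHEAD = 0.02   # rule 1: probes must stay <= 2% of traffic

    class ProbeScheduler:
        def __init__(self):
            self.traffic = {}   # prefix -> packets seen this window
            self.results = {}   # prefix -> cached measurement (rule 2)

        def account(self, prefix, packets):
            # Count traffic from a prefix; all users of the prefix
            # share one measurement, so cost doesn't grow with users.
            self.traffic[prefix] = self.traffic.get(prefix, 0) + packets

        def should_probe(self, prefix):
            # Rule 1: probe only past the critical-mass threshold,
            # i.e. only once PROBE_COST / traffic <= MAX_OVERHEAD.
            return self.traffic.get(prefix, 0) * MAX_OVERHEAD >= PROBE_COST

        def lookup(self, prefix):
            # Rule 2: reuse the cached result for *all* traffic
            # from the prefix.
            return self.results.get(prefix)

    # Since PROBE_COST is fixed while traffic only grows past the
    # threshold, doubling the traffic halves the percentage:
    for pkts in (22050, 44100, 88200):   # hypothetical window volumes
        print(pkts, "->", 100.0 * PROBE_COST / pkts, "% overhead")
    # prints 2.0%, then 1.0%, then 0.5%

The invariant is exactly rule 1: measurement starts only once the
fixed probe cost is already a small fraction of the traffic, and from
there more traffic can only shrink the fraction.

--
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech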