The following should be attributed to Gene Shklar (please set your display for 80 columns):
Thanks for the suggestions. We're considering doing a follow-up and perhaps making this a regular feature every 2 or 4 months thereafter. We've received a few suggestions for methodology changes/enhancements, and also several emails so far denouncing our methodology but not explaining why (which is typical of people in many areas -- politics, the environment, economics, whatever -- who disagree emotionally but not intellectually with the conclusions of a study).
The current methodology generally shows how a web site connected to a particular backbone appears to the general internet population of users. The results are intended to be a guide (but not the only one) for helping web sites select or evaluate a collocation, hosting, or access provider.
Fine, I will point out a few specific problems with your methodology:

1. Although a variety of backbones is used, the study does not say which ones. Also, even though the study does point out the asymmetrical routing of a web transaction (hot-potato), it doesn't point out that the traffic being measured is a brief web request (which is dumped to the web server's backbone ASAP) answered by a long response (10KB in this case, dumped to the querier's backbone).

2. The test measures the responsiveness of a company's web servers, which is not necessarily reflective of the response their customers get. This test specifically measures traffic going "outbound", but suggests that this information is useful in choosing a carrier for "inbound" traffic. This could be misleading; a web farm will have a lot more outbound traffic than inbound, and a dial-up-only provider will have more inbound traffic than outbound.

3. The results show an average taken over 24 hours, ignoring the important "time of day" factor. CompuServe, for example, may be dog slow for three hours per day and greatly over-provisioned for the rest of the day. In other words, if samples took 15 seconds over a three-hour period but only 2 seconds for the rest of the day, that averages to about 3.6 seconds, so the provider would score better than one with a consistent 6-7 second response (see the sketch at the end of this message).

4. The transfer rate of 10KB x 5 is not the same as the transfer rate of a 50KB file. If one backbone is significantly "burstier" than another, this could dramatically affect throughput. For instance, a 10KB file might easily get through a bursty or bouncy backbone in just a few seconds, while larger files require greater consistency.

5. Some companies have more popular web pages than others. Few major providers hang servers directly off their backbone (whatever that might mean in this context), but rather have a lobe or two attaching their farm. Just because a provider's web farm is saturated or busy or slow does not indicate that the rest of their backbone is.

Looking back, some of these errors reflect a misinterpretation of mine: I read the results as intended for consumers looking for a connection, as opposed to a place to host their web server. Most of these points still apply, however. And people will use the study results this way; if I can make that mistake, J. Random LUser could, too.

Also, the links from the page to the graphs were disappointing. What units are measured in the "Best Value" chart? What does the price of a T1 have to do with web hosting?

All of the previous notwithstanding, I would be interested in a better version of this study. Even more interesting would be to track how providers do over the course of several studies -- who responds well to backbone congestion?
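To make point 3 concrete, here's a minimal sketch (Python, using the made-up numbers from the example above -- not real measurements) of how a flat 24-hour average rewards a backbone that is badly congested for a few hours over one that is merely mediocre all day:

# Hypothetical illustration of point 3: a flat 24-hour average can hide a
# daily congestion peak.  Numbers are invented, not real measurements.

def daily_average(hourly_samples):
    """Average response time over a day, given one sample per hour."""
    return sum(hourly_samples) / len(hourly_samples)

# Provider A: dog slow (15 s) for three hours, over-provisioned (2 s) for
# the remaining 21 hours.
provider_a = [15.0] * 3 + [2.0] * 21

# Provider B: a consistent 6.5 s all day long.
provider_b = [6.5] * 24

print("Provider A 24-hour average: %.1f s" % daily_average(provider_a))  # ~3.6
print("Provider B 24-hour average: %.1f s" % daily_average(provider_b))  # 6.5
# Provider A "wins" the index even though it is unusable for three hours
# every day; a busy-hour figure or a percentile would expose the difference.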
Gene Shklar
GeneShklar@keynote.com
Keynote Systems, Inc.
voice (415) 524-3011
Lee

Lee Howard          Internet Systems Engineer
(703) 208-5231      UUNET High-speed Install
lhoward@uu.net      Do I speak for UUNET?  [NOT IN ANY WAY]
"A great Internet application experience is all a matter of customer perspective."
----------
From: Peter Cole [SMTP:Peter.Cole@telescan.com]
Sent: Friday, June 27, 1997 10:57 AM
To: nanog@merit.edu
Cc: marketing@keynote.com
Subject: RE: Keynote/Boardwatch Internet Backbone Index

A better test!!!
I would like to see the test run again with the following change.
From each provider, test the response time of the other 28 sites and not the provider's own web server. Then average the response times for those other 28 web servers and report that average as measured from that provider. The providers with good connectivity to the rest of the net should have lower average response times.
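A rough sketch of what I mean (Python; the provider names and timings are invented, and three providers stand in for the 29):

# Hypothetical sketch of the proposed scoring.  From each provider's
# network, time a fetch of every *other* provider's page, then report the
# average of those times as that provider's score.  All data here is
# invented for illustration.

measurements = {
    # origin provider : { target provider : response time in seconds }
    "ProviderA": {"ProviderB": 4.2, "ProviderC": 3.1},
    "ProviderB": {"ProviderA": 2.8, "ProviderC": 3.5},
    "ProviderC": {"ProviderA": 3.0, "ProviderB": 4.0},
}

def average_to_others(origin, data):
    """Average response time measured from `origin` to every other
    provider's web server; origin's own server is never fetched."""
    times = [t for target, t in data[origin].items() if target != origin]
    return sum(times) / len(times)

for provider in sorted(measurements):
    print("%s sees the rest of the net in %.2f s on average"
          % (provider, average_to_others(provider, measurements)))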
P.S. One might also be interested in the average response time of the top one hundred web sites.
Peter Cole of Telescan, Inc.
(281) 588-9155
Better computing through lack of sleep.
----------
From: Golan Ben-Oni [SMTP:bnite@tremere.ios.com]
Sent: Thursday, June 26, 1997 3:53 PM
To: nanog@merit.edu
Subject: Keynote/Boardwatch Internet Backbone Index
For shits and grins:
http://www.keynote.com/measures/backbones/backbones.html
-Golan