Re: Keynote/Boardwatch Internet Backbone Index A better test!!!
I'll try. Let's talk about them a bit one at a time.
Fine, I will point out a few specific problems with your methodology:
1. Although a variety of backbones is used, the study does not say which ones. Also, even though the study does point out the asymmetrical routing of a web transaction (hot-potato), it doesn't point out that the traffic being measured is a brief web request (which is dumped to the web server's backbone ASAP) answered by a long response (10KB in this case, dumped to the querier's backbone).
We're trying not to care here. I think the "point of view" entailed is more along the lines of: IF I had MY web server on a particular backbone, either by running a dedicated access link to my office from that backbone, or by taking advantage of co-location, or by actually having that backbone host and manage my web site on their server, what would my web site look like to the Internet universe - the people that are out there downloading pages? I think there is an ongoing attempt by netheads to reduce this to flows between trunk routers. We're looking at a backbone a bit more broadly. It is the cumulative total of your network, your connectivity to other networks, your peering, your people, your coffee pots. We don't want to be drawn down into it much beyond that. If I have a web site, using any of various means, connected or otherwise hosted on YOUR backbone, what will the performance look like to MY audience? Will there be differences/advantages to being on one backbone or another, with regard to the perceived performance or rapidity of the pages appearing? The answer to the latter appears to be yes.
2. The test used measures the responsiveness of a company's web servers, which is not necessarily reflective of the response their customers get. This test specifically measures traffic going "outbound", but suggests that this information is useful in determining a carrier for "inbound" traffic. This could be misleading; a web farm will have a lot more outbound traffic than inbound, and a dial-up only provider will have more inbound traffic than out.
Yes, it is being read as an indicator from the user's perspective in the inverse direction. I suspect there IS a relationship, but we're a little unclear on how closely coupled that is. Certainly we're aware that traffic from a customer to a web site takes a completely different path than the almost always larger flow from the web site to the customer - this is largely the reason we dismiss PING and TRACEROUTE as almost totally useless for our purposes. Do the relative performances we are seeing for backbones translate into relative performances for, say, dialup customers connected to that network? I think we'll find yes here too, and I don't really agree with the implication that there is no relationship here. But you are quite correct, that is NOT what we intended to measure.
3. The results show an average taken over 24 hours, ignoring the important "time of day" factor. CompuServe, for example, may be dog slow for three hours per day, and greatly over-provisioned for the rest of the day. In other words, if over a three-hour period samples took 15 seconds, but only took 2 seconds for the rest of the day, it would average about 3.6 seconds ((3 x 15 + 21 x 2) / 24) and score better than a provider with a consistent 6-7 second rate.
This is actually a very good point and has caused no end of discussion internally. In fact, I think it is one of the more interesting things we have accomplished here. The press release of course posted average download times. We also graph standard deviations, and from my perspective this is an infinitely more interesting item, though harder to explain. The Internet is a very different place at 2:00 AM than at 2:00 PM, and in fact from moment to moment. There is also a larger standard deviation waveform that speaks to consistency across various metropolitan areas. The standard deviation measurements show us how consistently a network can deliver the same performance from one moment to the next, from one time of day to the next, and to a lesser degree, from one city to the next. We are going so far as to somewhat disingenuously label this PERFORMANCE UNDER LOAD in an attempt to communicate this without a course in statistics.
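To make that concrete, here is a minimal sketch of the two statistics in question. This is not Keynote's actual code, and the sample times below are invented purely for illustration:

    # Hypothetical download times (seconds) for one backbone over a day,
    # mimicking the CompuServe example above: slow for a few hours,
    # fast the rest of the time.
    import statistics

    samples = [2.1, 1.9, 2.2, 15.3, 14.8, 15.1, 2.0, 1.8]

    print(statistics.mean(samples))    # average download time (~6.9 s)
    print(statistics.stdev(samples))   # sample standard deviation (~6.8 s)

    # A backbone with a low average but a high deviation is fast on
    # average yet inconsistent - that inconsistency is what we are trying
    # to communicate with the PERFORMANCE UNDER LOAD label.

Two networks can post the same average while one of them whipsaws its users from one sample to the next; the deviation is what separates them.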
4. The transfer rate of 10KB x 5 is not the same as the transfer rate of a 50KB file. If one backbone is significantly "burstier" than another, this could dramatically affect throughput. For instance, a 10KB file might easily go through a bursty or bouncy backbone in just a few seconds, while larger files require greater consistency.
True enough. But we applied the same methodology to all backbones equally. My sense of what makes up web pages is that they are rarely a solid 50 KB of anything, but a series of files: a text file, a smallish logo file, a couple of graphics files, etc. We think readers can relate to the 50 KB total, and scaled it so. But we think most pages are made up of a series of files between 5 and 20 KB. I would think "bursty" networks would be a plus in such an environment. What I think I'm hearing you say is that latency counts. I agree. I think it should, but again, we want to look at a communicable "whole page" concept for the test download. Characterizing just what a "typical" web page is, is of course a rather loose business. We're pretty open to suggestions of specifically what the download should look like, but it does need to be a reasonable least common denominator.
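A back-of-the-envelope model shows why the file count matters as much as the byte count. The latency and throughput figures here are invented for illustration, not measured:

    # Rough model: each file fetch pays a fixed round-trip latency plus
    # transfer time at some sustained throughput. Both constants are
    # hypothetical.
    LATENCY_S = 0.5            # assumed per-request overhead (seconds)
    THROUGHPUT_BPS = 100_000   # assumed sustained bytes per second

    def fetch_time(num_files, bytes_per_file):
        return num_files * (LATENCY_S + bytes_per_file / THROUGHPUT_BPS)

    print(fetch_time(5, 10_000))   # five 10 KB files: 3.0 s
    print(fetch_time(1, 50_000))   # one 50 KB file:   1.0 s

Under this model the same 50 KB takes three times as long when split into five files, because every request pays the fixed latency. That is the sense in which a 10KB x 5 test and a single 50KB test measure different things.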
5. Some companies have more popular web pages than others. Few major providers hang servers directly off their backbone (whatever that might mean in this context), but rather have a lobe or two attaching their farm. Just because a provider's web farm is saturated or busy or slow, does not indicate that the rest of their backbone is.
The web server is operated by and under control of the network, as all other aspects are. The consensus here seems to be that we are primarily measuring web server performance and not the network itself. I'm trying to let you all gel around this as the main objection. In five days I think I can show you that it is an interesting theory, just not so.
Looking back, some of these errors reflect a misinterpretation of mine that the results were intended for consumers looking for a connection, as opposed to a place to host their web server. Most of these points still apply, however. And people will use the study results this way; if I can make that mistake, J. Random LUser could, too.
This assumes there is no correlation. I'm not sure I'm willing to concede that at this point. But in general, I can say that few of the national backbones emphasize their dialup offering to the home user. Most are pushing web hosting and dedicated access connections to businesses. If J. Random LUser wants to shop for a NATIONAL BACKBONE to get his connection to, it does rather invert the concept. You are absolutely correct that it would be better, if that were the objective, to pick a dialup pop on each backbone, measure to ALL the backbone web sites from a single point, and then compare those results. Easy enough, but not really what we're after here. In fact, there is no dialup component to this at all, and I'm guessing dialup users would be more interested in accessibility and busy signals anyway. We do think it is quite interesting that the average delivery speed over the backbone is little, if at all, beyond the bandwidth capabilities of the new 56K modems. It might appear that an extraordinary amount of resource is going toward upping the bandwidth to the home, when the perceptual "speediness" of the world wide web may not improve at all until the backbone performance increases.
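As a rough sanity check on the 56K comparison - and assuming, purely for illustration, a whole-page time of about 7 seconds for the 50 KB test page, which is an invented figure and not a number from the study:

    # If 50 KB takes about 7 seconds end to end, the effective
    # throughput is already in 56K-modem territory. The 7 s is an
    # assumed figure for illustration only.
    page_bytes = 50 * 1024
    seconds = 7.0
    print(page_bytes * 8 / seconds / 1000)   # ~58.5 kbit/s

At a rate like that, a faster last mile buys the home user very little; the backbone, not the modem, is the bottleneck.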
Also, the links from the page to the graphs were disappointing. What units are used in the "Best Value" chart? What does the price of a T1 have to do with web hosting?
We have rather lumped dedicated access in there wholesale. Web hosting pricing has no standards; it is a chaotic mix, with some pricing by hits, some by storage, some by the amount of data passing, some by flat rate. The "basic element" of pricing seems to be the monthly recurring cost of a 1.544 Mbps T-1. The actual formulation is of course in the article. Basically we make the loose assertion that performance is the most important criterion when shopping for a connection. But it is rarely the case that "price is no object." We somewhat gratuitously came up with the 2/3 ratio: we sought a formula where 2/3 of the buying decision was made on performance, and 1/3 on price. What we are referring to as TOTAL PERFORMANCE is a simple multiplication of the average download times by the standard deviation. The average of THIS total performance measure across the 29 backbones was 492. The average monthly price of a T-1 was $2045. So we took each network's total performance figure and multiplied it by 8 in an attempt to approximate a value of 4000. We then add the monthly price of a T-1 to that to derive a VALUE figure. Savvis Communications had the lowest (best) VALUE number. They scored well on performance, but their $1700 monthly price was considerably lower than CompuServe, AT&T WorldNet, UUNET, or even GridNet. We named them the best value on that basis.
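Spelled out in code, the formula looks like this. The scaling factor of 8 and the dollar figures come from the article; the example network is otherwise hypothetical:

    # VALUE = (average download time * standard deviation) * 8 + T-1 price.
    # Lower is better on both terms.
    def value_score(total_perf, t1_monthly_price):
        # total_perf is the average download time multiplied by its
        # standard deviation ("TOTAL PERFORMANCE" above)
        return total_perf * 8 + t1_monthly_price

    # A hypothetical backbone sitting exactly at the 29-network average
    # total performance of 492, at the average $2045/month T-1 price:
    print(value_score(492, 2045))   # 5981
    # The same performance at Savvis's $1700/month price:
    print(value_score(492, 1700))   # 5636

Since 492 x 8 = 3936, the performance term averages roughly 4000 against an average 2045 price term, which is how the 2/3 performance, 1/3 price weighting plays out.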
All of the previous notwithstanding, I would be interested in a better version of this study. Even more interesting would be to track how providers do over the course of several studies--who responds well to backbone congestion?
We are caught between two missions here. In the first place, we are very receptive to suggested improvements to the study. I'm looking at proving/disproving how much of the "web server" component we are really measuring. And we've taken quite seriously the concept of providing each backbone provider a totally identical "test page," for example. I don't think the latter will seriously move any numbers, but as Forrest Gump says, "just one less thing...". At the same time, I think it will be intensely interesting to see how the numbers move from one issue of the directory to the next. We will have backbones seeking to score better, but at the same time, they will be under pressure from new Internauts and more web pages added to the Internet daily. Traffic growth appears to be continuing unabated. And the more consistently we measure from issue to issue, the more true those relative numbers will be comparing one measurement period to the next.

Jack Rickard

===================================================================
Jack Rickard                            Boardwatch Magazine
Editor/Publisher                        8500 West Bowles Ave., Ste. 210
jack.rickard@boardwatch.com             Littleton, CO 80123
www.boardwatch.com                      Voice: (303)973-6038
===================================================================
On Mon, 30 Jun 1997, Jack Rickard wrote:
We're trying not to care here. I think the "point of view" entailed is more along the lines of: IF I had MY web server on a particular backbone, either by running a dedicated access link to my office from that backbone, or by taking advantage of co-location, or by actually having that backbone host and manage my web site on their server, what would my web site look like to the Internet universe - the people that are out there downloading pages? I think there is an ongoing attempt by netheads to reduce this to flows between trunk routers. We're looking at a backbone a bit more broadly. It is the cumulative total of your network, your connectivity to other networks, your peering, your people, your coffee pots. We don't want to be drawn down into it much beyond that. If I have a web site, using any of various means, connected or otherwise hosted on YOUR backbone, what will the performance look like to MY audience? Will there be differences/advantages to being on one backbone or another, with regard to the perceived performance or rapidity of the pages appearing? The answer to the latter appears to be yes.
of course the answer is yes, but what *you* measured in *no way* relates to the performance of *customer* web service at these ISPs. you measured performance of the ISP web server itself, which every ISP so far has specifically stated is on a pretty *slow* part of their network. *this* is what has repeatedly been pointed out as a major flaw in your methodology.
4. The transfer rate of 10KB x 5 is not the same as the transfer rate of a 50KB file. If one backbone is significantly "burstier" than another, this could dramatically affect throughput. For instance, a 10KB file might easily go through a bursty or bouncy backbone in just a few seconds, while larger files require greater consistency.
True enough. But we applied the same methodology to all backbones equally. My sense of what makes up web pages is that they are rarely a solid 50 KB of anything, but a series of files: a text file, a smallish logo file, a couple of graphics files, etc. We think readers can relate to the 50 KB total, and scaled it so. But we think most pages are made up of a series of files between 5 and 20 KB. I would think "bursty" networks would be a plus in such an environment. What I think I'm hearing you say is that latency counts. I agree. I think it should, but again, we want to look at a communicable "whole page" concept for the test download.
Characterizing just what a "typical" web page is, is of course a rather loose business. We're pretty open to suggestions of specifically what the download should look like, but it does need to be a reasonable least common denominator.
here you reveal yet another glaring flaw in your methodology. you downloaded *different* web pages which could have had *different* numbers of elements in different combinations. a page with 3 images and a little html would generate 4 http requests, while a more complex page, of the same size in bytes, could be 12 images and a little html, thereby generating 13 http requests. each request increases load *outbound* (that part you chose to ignore) as well as server load... the end result being increased latency (the part you chose to measure). this *alone* completely invalidates your results because they cannot be correlated with each other. only by downloading the *exact* same page from these different web servers can you even begin to produce meaningful results.
5. Some companies have more popular web pages than others. Few major providers hang servers directly off their backbone (whatever that might mean in this context), but rather have a lobe or two attaching their farm. Just because a provider's web farm is saturated or busy or slow, does not indicate that the rest of their backbone is.
The web server is operated by and under control of the network, as all other aspects are. The consensus here seems to be that we are primarily measuring web server performance and not the network itself. I'm trying to let you all gel around this as the main objection. In five days I think I can show you that it is an interesting theory, just not so.
again, this is not simply an issue of network performance. you state earlier that you are trying to give information to customers who might be looking to *colocate* or outsource web service to an ISP. in that scenario, you have to measure performance of servers *at the web farms* not the server hanging off a T1 on some network backwater. positioning on the network is *essential* to what you are trying to measure.
We do think it is quite interesting that the average delivery speed over the backbone is little, if at all, beyond the bandwidth capabilities of the new 56K modems. It might appear that an extraordinary amount of resource is going toward upping the bandwidth to the home, when the perceptual "speediness" of the world wide web may not improve at all until the backbone performance increases.
you mean the average speed *to the server known to be badly positioned in the network*. i am quite certain performance is dramatically faster to those amazing things called web farms. "what are web farms?" i hear you ask. well, web farms are these things filled with web servers at really good locations in the network where *paying customers* can locate their server or site.

b3n
b3n wrote:
you mean the average speed *to the server known to be badly positioned in the network*. i am quite certain performance is dramatically faster to
Just out of curiosity, why do many (not all) of the large backbone providers establish their face to the web (their corporate webserver) on slow, badly positioned machines? I would have thought they would have chosen differently. We do, but we are not as sophisticated. I can see how Jack might have been confused; perhaps he is not so sophisticated either.

Ken Leland
Monmouth Internet
Just out of curiosity, why do many (not all) of the large backbone providers establish their face to the web (their corporate webserver) on slow, badly positioned machines?
Because it is done by marketing, not engineering?

randy
Probably because you'll find that co-located web servers make the company more money than their own server would if it sat in the same spot or on the same machine. Web sites for providers make them money, sure, but definitely not as much as if they hosted mtv.com or other popular sites. So, they put the high-traffic sites where they need to be, and leave their own servers on another part of the network, where they are probably among the least utilized machines. This isn't true in all cases, and blanket statements are dangerous. But in the majority of cases I've seen, a site like UUNet's or Sprint's isn't nearly as busy as www.playboy.com, www.comedycentral.com, etc.

Joe Shaw - jshaw@insync.net
NetAdmin - Insync Internet Services
"Learn more, and you will never starve." - Paraphrase of Lee

On Mon, 30 Jun 1997, Ken Leland wrote:
Just out of curiosity, why do many (not all) of the large backbone providers establish their face to the web (their corporate webserver) on slow, badly positioned machines? I would have thought they would have chosen differently. We do, but we are not as sophisticated. I can see how Jack might have been confused; perhaps he is not so sophisticated either.
Ken Leland
Monmouth Internet