On Mon, 30 Jun 1997, Jack Rickard wrote:
We're trying not to care here. I think the "point of view" entailed is more along the lines of: IF I had MY web server on a particular backbone, either by running a dedicated access link to my office from that backbone, or by taking advantage of co-location, or by actually having that backbone host and manage my web site on their server, what would my web site look like to the Internet universe - the people who are out there downloading pages? I think there is an ongoing attempt by netheads to reduce this to flows between trunk routers. We're looking at a backbone a bit more broadly. It is the cumulative total of your network, your connectivity to other networks, your peering, your people, your coffee pots. We don't want to be drawn down into it much beyond that. If I have a web site, using any of various means, connected to or otherwise hosted on YOUR backbone, what will the performance look like to MY audience? Will there be differences/advantages to being on one backbone or another, with regard to the perceived performance or rapidity of the pages appearing? The answer to the latter appears to be yes.
of course the answer is yes, but what *you* measured in *no way* relates to the performance of *customer* web service at these ISPs. you measured performance of the ISP web server itself, which every ISP so far has specifically stated is on a pretty *slow* part of their network. *this* is what has repeatedly been pointed out as a major flaw in your methodology.
4. The transfer rate of 10 KB x 5 is not the same as the transfer rate of a 50 KB file. If one backbone is significantly "burstier" than another, this could dramatically affect throughput. For instance, a 10 KB file might easily go through a bursty or bouncy backbone in just a few seconds, while larger files require greater consistency.
True enough. But we applied the same methodology to all backbones equally. My sense of what makes up web pages is that they are rarely a solid 50 KB of anything, but rather a series of files: an HTML text file, a smallish logo file, a couple of graphics files, etc. We think readers can relate to the 50 KB total, and scaled it so. But we think most pages are made up of a series of files between 5 and 20 KB. I would think "bursty" networks would be a plus in such an environment. What I think I'm hearing you say is that latency counts. I agree. I think it should, but again, we want to look at a communicable "whole page" concept for the test download.
Characterizing just what a "typical" web page is, is of course a rather loose business. We're pretty open to suggestions of specifically what the download would be like, but it does need to be a reasonable least common denominator.
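To put rough numbers on the 10 KB x 5 versus 50 KB point, here is a back-of-the-envelope sketch in Python. The latency and throughput figures are assumptions chosen purely for illustration, not anything measured in the test; the point is only that each extra file adds a request round trip on top of the raw transfer time.

    # Rough model of serial page download time.
    # rtt and throughput are assumed numbers for illustration, not measurements.
    rtt = 0.25            # seconds of round-trip latency per HTTP request (assumed)
    throughput = 40000.0  # sustained transfer rate in bytes per second (assumed)

    def page_time(file_sizes_kb):
        """Seconds to fetch a page made of these files (KB), one request each, serially."""
        total_bytes = sum(file_sizes_kb) * 1024
        return len(file_sizes_kb) * rtt + total_bytes / throughput

    print(page_time([50]))                  # one 50 KB file:   ~1.5 seconds
    print(page_time([10, 10, 10, 10, 10]))  # five 10 KB files: ~2.5 seconds

Under those assumed numbers the same 50 KB takes roughly two-thirds longer as five files than as one, which is why per-request latency, and not just raw bandwidth, shapes the "whole page" time.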
here you reveal yet another glaring flaw in your methodology. you downloaded *different* web pages, which could have had *different* numbers of elements in different combinations. a page with 3 images and a little html would generate 4 http requests, while a more complex page, of the same size in bytes, could be 12 images and a little html, thereby generating 13 http requests. each request increases load *outbound* (the part you chose to ignore) as well as server load... the end result being increased latency (the part you chose to measure). this *alone* completely invalidates your results, because they cannot be correlated with each other. only by downloading the *exact* same page from these different web servers can you even begin to produce meaningful results.
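the same kind of rough model (again with assumed, illustrative numbers rather than anything measured) shows why two pages of identical byte size but different element counts give download times that cannot be compared:

    # Same rough model as the sketch above: two 50 KB pages,
    # differing only in request count. Numbers are assumptions for illustration.
    rtt = 0.25            # seconds per HTTP request round trip (assumed)
    throughput = 40000.0  # bytes per second (assumed)
    page_bytes = 50 * 1024

    for requests in (4, 13):
        total = requests * rtt + page_bytes / throughput
        print(requests, "requests ->", round(total, 2), "seconds")

under those assumptions the 13-request page takes nearly twice as long as the 4-request page of exactly the same size, so differences between the pages being downloaded can easily swamp any difference between the backbones.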
5. Some companies have more popular web pages than others. Few major providers hang servers directly off their backbone (whatever that might mean in this context), but rather have a lobe or two attaching their farm. Just because a provider's web farm is saturated or busy or slow does not indicate that the rest of their backbone is.
The web server is operated by and under the control of the network, as all other aspects are. The consensus here seems to be that we are primarily measuring web server performance and not the network itself. I'm trying to let you all gel around this as the main objection. In five days I think I can show you that it is an interesting theory, just not so.
again, this is not simply an issue of network performance. you state earlier that you are trying to give information to customers who might be looking to *colocate* or outsource web service to an ISP. in that scenario, you have to measure performance of servers *at the web farms*, not the server hanging off a T1 on some network backwater. positioning on the network is *essential* to what you are trying to measure.
We do think it is quite interesting that the average delivery speed over the backbone is little, if at all, beyond the bandwidth capabilities of the new 56K modems. It might appear that an extraordinary amount of resources is going toward upping the bandwidth to the home, when the perceived "speediness" of the world wide web may not improve at all until backbone performance increases.
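For rough context on the 56K comparison, here is the arithmetic for a 50 KB page at a few hypothetical whole-page download times (illustrative values, not the measured figures):

    # Effective delivery rate for a 50 KB page at a few hypothetical
    # whole-page download times (illustrative values, not measured data).
    page_kbits = 50 * 8
    for seconds in (5, 10, 15):
        print(seconds, "s ->", round(page_kbits / seconds, 1), "kbit/s effective")

That prints 80.0, 40.0, and 26.7 kbit/s respectively. A 56K modem delivers something on the order of 40 to 50 kbit/s in practice, so once the whole-page time stretches much past ten seconds the backbone-side rate is already down in the range of the last-mile modem.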
you mean the average speed *to the server known to be badly positioned in the network*. i am quite certain performance is dramatically faster to those amazing things called web farms. "what are web farms?" i hear you ask. well, web farms are these things filled with web servers at really good locations in the network, where *paying customers* can locate their server or site.

b3n