Re: Keynote/Boardwatch Internet Backbone Index A better test!!!
From: Craig A. Huegen <c-huegen@quadrunner.com>
To: Jack Rickard <jack.rickard@boardwatch.com>
Cc: Peter Cole <Peter.Cole@telescan.com>; nanog@merit.edu; marketing@keynote.com
Subject: Re: Keynote/Boardwatch Internet Backbone Index A better test!!!
Date: Friday, June 27, 1997 2:08 PM
On Fri, 27 Jun 1997, Jack Rickard wrote:
==>Not an entirely whacky concept actually. The way hot potato routing works,
==>this could actually be a "purer" test I suspect of the network internally
==>and a purer test of connectivity of any network to all others cum Internet.
The article says that you're measuring backbone provider performance, yet you're including:
* carrier's web server location
* carrier's web server performance
==>Keynote does a "Top 40" study of 40 popular web sites and I believe
==>make the results available on their web site. It is interesting to observe ==>performance variations of the network as a whole over time. Other
==>that, we don't have much interest in it. It is indicative of no specific
==>network, but of the Internet in general.
/cah
Jack Rickard writes:
webcentric. From an end users perspective, what does a web site on a specific network look like and how does that compare to a web site on another network?
Ok, so that's what you have (possibly -- there are many variables) measured. So if I want to have a web site hosted on a shared web server, then this measurement might possibly be useful to me. But if I am looking for someone to sell my company a T1 connection, and I want to do my own web hosting, then this information would be nearly useless.

There are many factors which would affect the performance of a provider's web site but which would have no effect whatsoever on the performance of a T1 connection from the provider. For example:

- the hardware the web server runs on
- the software and its configuration on the web server
- the load on the web server
- the characteristics of the LAN to which the web server is connected

The report says "Keynote decided to measure a backbone provider's own public web server on the assumption that the provider would locate its own server in the best-performing hosting location for that provider." Do all the providers listed even provide web-hosting as a service?

All this makes the title "Internet Backbone Index" very misleading. If you had called it "Internet Web Service Index" that would be a much better description of what you have measured.

Catherine Foulston
cathyf@rice.edu
Rice University Network Management

p.s. I don't work for any of the listed providers.
On Fri, 27 Jun 1997, Jack Rickard wrote:

==>This assumes that you consider web server location and web server
==>performance to NOT be a part of overall network performance. Our view

But insofar as the article goes, the stated intent was to measure BACKBONE PERFORMANCE, not backbone web server performance. There is a HUGE difference between the two.

==> I would hope so. Can we break it down into what is purely web server
==>hardware performance, what is web server software performance, what is NIC
==>card on the web server, what is the impact of the first router the web
==>server is connected to, what is the impact of hub design and the interface
==>between IP routing and ATM switching, what part is the impact of
==>interconnections with other networks, what part is peering, what part is
==>just goofy router games? Uh... NO, we can't.

You *can*, however, come up with a better methodology to attach to your stated intent behind the study; or, if you care to leave your methodology, clear up the misconceptions that your readers will take in.

==>results should factor to zero relatively. They didn't. They didn't to a
==>shocking degree. And at this point I am under the broad assumption that
==>server performance doesn't account for all of it, perhaps little of it.
==>But I could be widely wrong on the entire initial assumption.

I would challenge the assumption that it accounts for little. The machine the web server is running on, combined with the OS, load average, and even down to the web server software, probably makes up a very good portion of any delays you may have seen.

How many times do you go to a web site and see "Host contacted... awaiting response"? When you see that, you have made the network connection and have given your query. Any time you see that at the bottom, it's usually indicative of web server delay. (There is a possibility of packet loss in the initial sent query, but I'd venture to state that it's a very small percentage of queries made to web servers.)

==>In any event, the networks have total control and responsibility for their
==>own web servers, much as they do for their own network if you define that
==>as something separate from their networks. We measured web page downloads
==>from an end user perspective, and those are the results in aggregate. If
==>it leads to a flurry of web server upgrades, and that moves the numbers,
==>we'll know more than we did. If it leads to a flurry of web server
==>upgrades, and it FAILS to move the numbers, that will tell us something as
==>well.

But again, if I were in the business of providing nationwide network service for my customers, and provided my web site as a marketing tool (like most companies out there), I would architect my network so that the customer comes first. The web site could be used for information about the business, but isn't A-1 critical to operation of the network. I'd side with the priori.net folks here in their architecture: the web server really shouldn't be put into a pop.

==>Our broad theory is that nothing is going to improve as long as anything
==>you do doesn't count and is not detectable by anyone anywhere. If a
==>particular network can move their results in any fashion, that is an
==>improvement in the end user experience, however achieved.

But the results you publish don't match the study's intent.

/cah
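As a rough sketch of that distinction (illustrative only, not the study's methodology; the hostnames below are placeholders and any HTTP host would do), a short Python 3 probe can time the TCP connect separately from the wait for the first byte of the reply, the interval behind the "Host contacted... awaiting response" message:

# Sketch: separate the time to reach the server over the network from the
# time the server takes to start answering.  Hostnames are placeholders.
import socket
import time

def probe(host, port=80, path="/"):
    """Return (connect_seconds, first_byte_seconds) for one HTTP/1.0 GET."""
    t0 = time.time()
    s = socket.create_connection((host, port), timeout=30)
    connect_time = time.time() - t0      # DNS lookup + TCP handshake: the network side

    s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii"))
    t1 = time.time()
    s.recv(1)                            # block until the first response byte arrives
    first_byte_time = time.time() - t1   # roughly one round trip plus server think time
    s.close()
    return connect_time, first_byte_time

if __name__ == "__main__":
    for host in ("www.example.net", "www.example.com"):   # placeholder targets
        connect, first_byte = probe(host)
        print(f"{host}: connect {connect:.3f}s, first byte {first_byte:.3f}s")

A long first-byte wait alongside a quick connect points at the server end rather than at the path in between.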
I want to measure the response of each ISP's marketing department. So I will send a certified letter to their office of record in Delaware, and report how long it takes before I get the receipt back. Maybe I will do this for 30 letters to get better data.

So, we now have an interesting report of the accessibility of various ISPs' web servers from Jack's house. While it is trivial to point out how useless and devoid of meaning this is, what is a wee bit harder is to describe what would be a useful measure to Jack's audience and how that might be obtained.

Ben Black's pointing out http://www.inversenet.com/about/backgrounder.html#2 may be *very* apt, and somewhat ingenious, considering Jack's audience. Are there other good ideas or examples?

randy
In any event, the networks have total control and responsibility for their own web servers, much as they do for their own network if you define that as something separate from their networks. We measured web page downloads from an end user perspective, and those are the results in aggregate. If it leads to a flurry of web server upgrades, and that moves the numbers, we'll know more than we did. If it leads to a flurry of web server upgrades, and it FAILS to move the numbers, that will tell us something as well.
So, these providers should be wasting their time upgrading the web server that their own site sits on JUST to come out better in a survey?? Just by hinting at a flurry of web server upgrades you're pointing to the fact that that issue plays at least a small part in the results. Even a small variance like that can throw the results off to a large degree when the results are aggregated (like the outlying 14.x versus the rest at 4.x results that were mentioned earlier by someone - memory lapses as to the name at this point).

I would certainly hope that these providers have better things to do than upgrade their own web server just to prove a point through a study which seems to be biased away from its intentions. Like someone else mentioned, the study says it is measuring backbone performance, when in fact all that is being measured is how fast web pages load. Come on...

--
-Myk

Myk O'Leary (System Administrator)           --> moleary@ironlight.com
Ironlight Digital (Marketing/Design/Network) --> http://www.ironlight.com
222 Sutter Street 6th floor * San Francisco, CA 94108 * 415.646.7000
------ FOR NETWORK PROBLEMS, WRITE TO tech-support@ironlight.com ------
Jack,

Your study proves that if a user connects to the network, he can expect better response time surfing www.uu.net than he could expect from surfing www.mci.com. While this is interesting, this has little or nothing to do with the network performance of either uu.net or mci.net.

If your goal is to report on the performance that a dedicated user might expect from uu.net or mci.net, become a customer of either network and test the response time to the top 100 web sites in the world. Better yet, use your publication to print measurement software that your readers might use to test each network and report the results to you.

If I had published the article that you just published, I don't know if I could backpedal fast enough. Your method is flawed and your results are misleading.

Jeff Young
young@mci.net
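A sketch of what such reader-run measurement software might look like (the URL list below is a placeholder, not an agreed "top 100" list, and the figures would of course also reflect the far-end servers):

# Sketch: fetch a fixed list of URLs from your own connection, time each
# complete download, and print a summary that could be reported back.
import time
import urllib.request

SITES = [
    "http://www.example.com/",   # placeholder targets
    "http://www.example.org/",
    "http://www.example.net/",
]

def time_download(url, timeout=60):
    """Seconds to download the full page body, or None if the fetch fails."""
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            response.read()
    except OSError:
        return None
    return time.time() - start

if __name__ == "__main__":
    times = []
    for url in SITES:
        elapsed = time_download(url)
        if elapsed is None:
            print(f"{url}: FAILED")
        else:
            print(f"{url}: {elapsed:.2f}s")
            times.append(elapsed)
    if times:
        print(f"average over {len(times)} pages: {sum(times) / len(times):.2f}s")

Run from a dedicated connection on each provider being compared, the aggregate figures would reflect that provider's path to the rest of the Internet rather than the performance of its own web server.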
From: "Jack Rickard" <jack.rickard@boardwatch.com>
To: "Craig A. Huegen" <c-huegen@quadrunner.com>, <nanog@merit.edu>, <marketing@keynote.com>
Subject: Re: Keynote/Boardwatch Internet Backbone Index A better test!!!
Date: Fri, 27 Jun 1997 12:38:02 -0600
This assumes that you consider web server location and web server performance to NOT be a part of overall network performance. Our view steps back a bit from that. The majority of traffic would appear to be webcentric. From an end user's perspective, what does a web site on a specific network look like and how does that compare to a web site on another network? There are ENDLESS variables contributing to that, including intercity links, hub architecture, host hardware, host software, peering, connectivity points with other networks, transit agreements, type of routers, ATM switching (or not). All contribute.

We think most people notice Internet performance (or lack thereof) while viewing world wide web pages. If we measure such page transits, the results are indicative of the accumulation of ALL of those factors. The web sites chosen were on the network under study, operated by the network, and under their control. They have total control over the hardware and software used and how it is connected to their network, just as they have control over all other aspects of their networks.

Does web server performance affect the results? I would hope so. Can we break it down into what is purely web server hardware performance, what is web server software performance, what is the NIC card on the web server, what is the impact of the first router the web server is connected to, what is the impact of hub design and the interface between IP routing and ATM switching, what part is the impact of interconnections with other networks, what part is peering, what part is just goofy router games? Uh... NO, we can't.
I would posit that it is only the network engineers at the heart of this that would or should care. I don't know at this point what portion of the equation can be levied on web servers and I don't think anyone else can either. I have held for several years that the performance breakdown is in the "last inch" of the Internet between the drive controller and the disk surface. But in working with Keynote, I generated the broad theory that if that view held true, then by massively averaging measurements across time and geography, we should flatline the Internet. In other words, all results should factor to zero relatively. They didn't. They didn't to a shocking degree. And at this point I am under the broad assumption that server performance doesn't account for all of it, perhaps little of it. But I could be widely wrong on the entire initial assumption.
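That reasoning can be illustrated with a toy simulation (the numbers are invented purely for illustration, not data from the study): if per-probe server noise were the whole story, provider averages converge as probes accumulate, while a fixed per-provider component does not wash out.

# Toy simulation of the averaging argument above.
import random

random.seed(1)
N = 5000  # probes per hypothetical provider

def average(samples):
    samples = list(samples)
    return sum(samples) / len(samples)

# Case 1: identical network baseline everywhere, only per-probe server noise.
# Averaging thousands of probes makes the providers indistinguishable.
noise_only = {
    name: average(4.0 + random.expovariate(1.0) for _ in range(N))
    for name in ("net-A", "net-B", "net-C")
}

# Case 2: the same noise plus a fixed per-provider component (network or a
# consistently slow server); no amount of averaging removes it.
offsets = {"net-A": 0.0, "net-B": 1.5, "net-C": 6.0}
with_offset = {
    name: average(4.0 + offsets[name] + random.expovariate(1.0) for _ in range(N))
    for name in offsets
}

print("noise only :", {k: round(v, 2) for k, v in noise_only.items()})
print("with offset:", {k: round(v, 2) for k, v in with_offset.items()})

Persistent separation in heavily averaged results therefore implies some provider-level component, whether in the network itself or in a consistently slow server.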
In any event, the networks have total control and responsibility for their own web servers, much as they do for their own network if you define that as something separate from their networks. We measured web page downloads from an end user perspective, and those are the results in aggregate. If it leads to a flurry of web server upgrades, and that moves the numbers, we'll know more than we did. If it leads to a flurry of web server upgrades, and it FAILS to move the numbers, that will tell us something as well.
Our broad theory is that nothing is going to improve as long as anything you do doesn't count and is not detectable by anyone anywhere. If a particular network can move their results in any fashion, that is an improvement in the end user experience, however achieved.
Warmest Regards;
Jack Rickard
=================================================================== Jack Rickard Boardwatch Magazine Editor/Publisher 8500 West Bowles Ave., Ste. 210 jack.rickard@boardwatch.com Littleton, CO 80123 www.boardwatch.com Voice: (303)973-6038 ===================================================================
----------
From: Craig A. Huegen <c-huegen@quadrunner.com>
To: Jack Rickard <jack.rickard@boardwatch.com>
Cc: Peter Cole <Peter.Cole@telescan.com>; nanog@merit.edu; marketing@keynote.com
Subject: Re: Keynote/Boardwatch Internet Backbone Index A better test!!!
Date: Friday, June 27, 1997 2:08 PM
On Fri, 27 Jun 1997, Jack Rickard wrote:
==>Not an entirely whacky concept actually. The way hot potato routing works,
==>this could actually be a "purer" test I suspect of the network internally
==>and a purer test of connectivity of any network to all others cum Internet.
The article says that you're measuring backbone provider performance, yet you're including:
* carrier's web server location
* carrier's web server performance
==>Keynote does a "Top 40" study of 40 popular web sites and I believe
==>make the results available on their web site. It is interesting to observe ==>performance variations of the network as a whole over time. Other
---------- they than
==>that, we don't have much interest in it. It is indicative of no specific ==>network, but of the Internet in general.
/cah
I think the main point that many here are trying to make is that hitting an ISP/NSP's web server at www.<name>.net is not the most representative site on their network to test. The 'best locations' on the network for web sites are reserved for their customers.

Richard D. White, Shift Mgr. - Business Connectivity Technical Support
whiter@digex.net - 301.847.6278 (direct) - 301 847 6215 (fax)
24-hour Support Line - 301 847 5200
DIGEX, INC
participants (7)

- Catherine Anne Foulston
- Craig A. Huegen
- Jack Rickard
- Jeff Young
- moleary@ironlight.com
- randy@psg.com
- Richard White