
On Fri, 27 Jun 1997, Jack Rickard wrote:
I don't think I'm missing it. I think I'm disagreeing with it in as nice and nonconfrontational a way as I can given the crappy personality I have
apparently your definition of nonconfrontational includes calling people morons. i think i will expand my definition of "editor" to include clueless network engineer wannabes.
to work from. Splitting hairs from here to infinity on what "network" means and what the world wide web is departs rather widely from my mission here, so I'm giving it short shrift. If you don't know how ping and traceroute vary from data flows, I can't help much there either.
since you obviously don't know a thing about how things like peering, NAPs, IP routing, and all the other components of network engineering work, i find this humorous.
If you want to draw a line of demarcation between a network and its performance, and a web server and its performance, you're free to do so. I just probably won't buy into it.
and we probably wouldn't either. but since that isn't what anyone is doing, how is this relevant?
On the actual concept that changing all the web servers will move the numbers: It might. It might not. I would probably bet at this point that there will be a lot of that going on among the non-moron crowd. I'm kind of hoping for it anyway. And then we'll see if the numbers move. My sense is that they will move some, and not as much as most seem to think. But it's true it could go the other way and be dramatic. I'm open to whatever results derive.
so you are hoping backbone providers move their own home page web servers in order to skew a severely limited and obviously bogus benchmark? if it is as easy as that to change the results, don't you think perhaps there is something radically wrong with your methodology? wouldn't that seem to indicate this so-called benchmark isn't really testing what it purports to? if keynote just said "we are testing how long it takes to download a random page from provider home pages" then there wouldn't be a furor. instead, the claim is made that this somehow indicates the overall health and performance of provider backbones. that is utter nonsense.
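To make the objection concrete: a "download the provider's home page" number folds several phases together, only one of which is backbone transit. A minimal sketch of where the time goes (hypothetical URL, plain HTTP/1.0, written in modern Python purely for illustration; this is not Keynote's actual agent code):

# Rough sketch (hypothetical, not Keynote's software) of what a
# "download the provider's home page" benchmark actually times.
# Only part of the total is backbone transit; the rest is DNS, the
# server's TCP handling, and how fast the server builds the page.

import socket
import time
from urllib.parse import urlsplit

def fetch_timings(url: str) -> dict:
    """Time the phases of a simple HTTP/1.0 GET; a single 'download
    time' number lumps all of these phases together."""
    parts = urlsplit(url)
    host, path = parts.hostname, parts.path or "/"

    t0 = time.monotonic()
    addr = socket.gethostbyname(host)          # DNS lookup
    t1 = time.monotonic()

    sock = socket.create_connection((addr, 80), timeout=10)  # TCP handshake
    t2 = time.monotonic()

    sock.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    sock.recv(1)                               # time to first byte: mostly
    t3 = time.monotonic()                      # server think time

    while sock.recv(4096):                     # rest of the page: transfer,
        pass                                   # server disk/CPU, and path
    t4 = time.monotonic()
    sock.close()

    return {"dns": t1 - t0, "connect": t2 - t1,
            "first_byte": t3 - t2, "transfer": t4 - t3}

# Example: print(fetch_timings("http://www.example.net/"))
# Moving the server next to the measurement points, or onto faster
# hardware, shrinks first_byte and transfer without the backbone
# changing at all.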
Jack Rickard
From: Justin W. Newton <justin@priori.net>
To: Jack Rickard <jack.rickard@boardwatch.com>; Stan Barber <sob@academ.com>; vaden@texoma.net; SEAN@SDG.DRA.COM; nanog@merit.edu
Subject: Re: Internet Backbone Index
Date: Friday, June 27, 1997 2:50 PM
Jack, I believe that you are missing the point that measuring web server response time is /not/ the equivalent of measuring backbone performance.
At 12:45 PM 6/27/97 -0600, Jack Rickard wrote:
They could be. The attempt is to factor that out. ALL measuring agents applied to ALL the backbones. And all contributed more or less equally to the end numbers. If a particular agent ran on a Commodore 64 with a kluged copy of KA9Q, and another agent ran on a Sun running Solaris, both results would go into the result pile for all 29 measured networks. The net effect would be that the flaw would be in our "footprint" from which the measurements were taken. This footprint can only be a rough approximation of end user distribution anyway. It would affect absolute values relative to zero, but the relative indexes between networks should not be affected. Since we're looking at the relative relationship primarily, it wouldn't appear important.
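The "relative indexes are unaffected" claim rests on an assumption worth spelling out: the per-agent bias has to scale every backbone's measurements by roughly the same factor. A minimal sketch with made-up backbones, agents, and timings (not Keynote's methodology):

# Hypothetical "true" average download times per backbone, in seconds.
true_seconds = {"BackboneA": 2.0, "BackboneB": 3.0, "BackboneC": 4.5}

# Hypothetical per-agent slowdown factors (e.g. a slow TCP stack = 1.8x).
agent_bias = {"agent1": 1.0, "agent2": 1.8, "agent3": 1.3}

def observed_average(backbone: str) -> float:
    """Average measured download time for one backbone across all agents."""
    samples = [true_seconds[backbone] * bias for bias in agent_bias.values()]
    return sum(samples) / len(samples)

averages = {b: observed_average(b) for b in true_seconds}
fastest = min(averages.values())

# Relative index: each backbone's average divided by the fastest average.
for backbone, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{backbone}: avg={avg:.2f}s  index={avg / fastest:.2f}")

# The printed indexes (1.00, 1.50, 2.25) match the ratios of the true
# times, so a uniform multiplicative bias cancels out of the ranking.
# The argument breaks down if an agent's bias differs per backbone,
# e.g. path-dependent packet loss interacting with a particular stack.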
From: Stan Barber <sob@academ.com>
To: Justin W. Newton <justin@priori.net>; Larry Vaden <vaden@texoma.net>; Sean Donelan <SEAN@SDG.DRA.COM>; nanog@merit.edu
Subject: Re: Internet Backbone Index
Date: Friday, June 27, 1997 1:54 PM
Justin writes:
ObAboutTopic: This is possibly the most flawed study on the planet. Remind me to get a fast web server. (And to think, we were going to put our web server in our office, behind a T-1, instead of in real estate near where the real bandwidth is that could be used for customers.).
There are many studies more flawed. Consider some of the studies that the Tobacco Institute has released over the years about the effects of smoking.
Concerning Internet performance, there have always been a variety of ways of measuring it. It all depends on what you are really trying to measure. The Keynote study is attempting to measure something to which the average Internet user (not engineers) can relate. However, there is also clearly the possibility of artifacts in the data because of the testing machine's TCP stack or other issues (Vern Paxson has covered these issues at NANOG and IETF meetings over the last few years). Checking their web site, their software appears to run on top of the TCP stacks of many systems, so the known artifacts of some of these platforms could be an issue.
--
Stan   | Academ Consulting Services        | internet: sob@academ.com
Olan   | For more info on academ, see this | uucp: {mcsun|amdahl}!academ!sob
Barber | URL- http://www.academ.com/academ | Opinions expressed are only mine.
*********************************************************
Justin W. Newton                  voice: +1-415-482-2840
Senior Network Architect          fax:   +1-415-482-2844
PRIORI NETWORKS, INC.             http://www.priori.net
Director At Large, ISP/C          http://www.ispc.org
"The People You Know. The People You Trust."
*********************************************************