
Ok, so I said I wasn't going to comment on the methodology; I lied. I wouldn't say the Keynote study is the worst ever. There are some really rotten studies in the fields of psychology and sociology. Since the Boardwatch/Keynote study didn't 'test' DRA Net, I guess I'm one of the few independent, disinterested parties to comment on the study's methods.

A problem with the Keynote study is that it seems very dependent on the location, type, and connections of the testing platforms. Keynote mentions that connections from Dallas and Phoenix were slow to 'every' backbone site, which would indicate some systematic problem with those testing sites. Perhaps the results are even more dependent on the testing systems than on the systems under test.

There are also problems with outlier data points. For example, elsewhere on the Keynote site, the MCI web site had very fast access from 28 test sites (< 4 secs) and very slow access from one test site in Philadelphia (> 14 secs). Mixing and matching data points: if you left out that one outlier, MCI would have been faster than Savvis. So I don't know how meaningful the rankings are if a single test site can have such a pronounced effect.

Unlike a scientific study, there doesn't seem to be enough information to independently reproduce the results, so I'm just going from the bits and pieces I can glean from the Keynote pages, press release, and article.
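To see how much weight that single Philadelphia sample can carry, here is a minimal back-of-the-envelope sketch in Python. Only the shape of the data (28 fast sites and one slow site for MCI) comes from the Keynote pages; the per-site timings, and the flat 4.2 sec figure for Savvis, are made-up placeholders.

    # Back-of-the-envelope outlier sketch.  All timings below are
    # made-up placeholders; only the "28 fast sites, 1 slow site"
    # shape comes from the Keynote pages.
    mci    = [4.0] * 28 + [14.0]   # 28 sites < 4 secs, one Philadelphia site > 14 secs
    savvis = [4.2] * 29            # hypothetical flat 4.2 secs everywhere

    def mean(xs):
        return sum(xs) / len(xs)

    print("MCI, all 29 sites:    %.2f secs" % mean(mci))       # ~4.34, slower than Savvis
    print("MCI, outlier dropped: %.2f secs" % mean(mci[:-1]))  # 4.00, faster than Savvis
    print("Savvis:               %.2f secs" % mean(savvis))    # 4.20

The exact placeholder numbers don't matter; the point is that one sample out of 29 can move the mean by more than the gap between two providers, which is enough to reorder a ranking.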
Concerning Internet performance, there have always been a variety of ways of measuring it. It all depends on what you are really trying to measure. The Keynote study is attempting to measure something to which the average Internet user (not engineers) can relate. However, there is also clearly the possibility of artifacts in the data caused by the testing machine's TCP stack or other issues (Vern Paxson has covered these issues at NANOG and IETF meetings over the last few years). Checking their web site, their software appears to run on top of the TCP stacks of many systems, so the known artifacts of some of these platforms could be an issue.

--
Sean Donelan, Data Research Associates, Inc, St. Louis, MO
Affiliation given for identification not representation
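As a rough illustration of where stack-level artifacts can enter a user-level measurement, here is a minimal fetch timer sketched in Python. It is not Keynote's software, and the host below is just a placeholder; the point is that every phase it reports (DNS lookup, TCP handshake, transfer) runs on top of the testing machine's resolver and TCP stack, so that platform's slow-start, delayed-ACK, and buffer-size behavior ends up in the numbers alongside the network being tested.

    # Minimal user-level fetch timer (a sketch, not Keynote's agent).
    # The target host is a placeholder.
    import socket, time

    host, path = "www.example.com", "/"

    t0 = time.time()
    addr = socket.gethostbyname(host)              # DNS lookup via the local resolver
    t1 = time.time()
    s = socket.create_connection((addr, 80), 30)   # TCP three-way handshake
    t2 = time.time()
    s.sendall(("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)).encode())
    while s.recv(4096):                            # drain the response until close
        pass
    s.close()
    t3 = time.time()

    print("dns %.3fs  connect %.3fs  transfer %.3fs  total %.3fs"
          % (t1 - t0, t2 - t1, t3 - t2, t3 - t0))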

I would really like to know how Boardwatch can continually say Savvis is a national backbone provider when they peer with nobody (that is part of their business plan) and only buy transit from the big 5. Then they neglect to list DRA as a backbone provider, when DRA appears at many major exchanges and peers with damn near everyone under the sun from what I can tell. DRA has a high speed cross country network and many international locations as well. Savvis claims to have a cross country backbone, but that currently only runs from Chicago to Dallas through St Louis. How on earth is that a national backbone?

==============================================================
Tim Flavin
Internet Access for St Louis & Chicago
Internet 1st, Inc
Toll Free Sales & Support 800-875-3173
http://www.i1.net
For more information email info@i1.net
==============================================================

Sean,

Do you have a pointer to the raw data? I couldn't find it on the site.

-scott
participants (3)
- Scott Huddle
- Sean Donelan
- Tim Flavin