(see www.keynote.com)

there seems to be some, uh, `interest' in keynote info last time it came up so just fyi (i.e., does not constitute endorsement or dismissal of content)

[randy, why don't we just meet at ietf so you can maim me in person for posting this rather than wasting nanog b/w on it]

k
-----------------------------------------------------------------

Backbones Wheel and Deal to Keep Net Moving
by Gene Koprowski

Some Internet wags claim that the initials WWW actually stand for "World Wide Wait." And according to a recent study, they might be more right than ever. Research being released by San Mateo, California-based Keynote Systems Inc. indicates that the technical performance of the Web has "degraded" by 4.5 percent since spring.

Congestion at access points like MAE East and MAE West is so severe that, in order to maintain acceptable performance, Internet backbone providers continually have to monitor traffic on the Net and change their routing assignments through new peering agreements, says Gene Shklar, vice president of Keynote, a diagnostic services consultancy that works with Web hosting companies.

Despite the "increased bandwidth" advertised by many Internet backbone providers, performance is not keeping up with perception or demand. As of September, the average speed at which content traveled across the Internet was just 5,000 characters per second, or only 40 Kbps. So cable modems, satellite modems, and other fast-transit technologies are not performing as advertised most of the time, and regional routing infrastructure disparities have a lot to do with that. Cities like Atlanta, Miami, and Dallas suffer from slower access than San Francisco or Boston, Shklar says.

The Keynote Systems report was based on tests that used dedicated workstations over 45 days - fetching pages once every 15 minutes - to determine backbone network speeds. "The study is based on millions of measurements of what matters most to Web sites, which is how long it takes them to deliver content to their end users," says Shklar. "If you have to commute to work each day, it doesn't matter how well-engineered the road you are driving on is. What matters is how long it takes you to get to your destination."

Most of the performance problems are out in the network, not on the individual Web sites. The problems tend to occur just as they do in the highway system: at the on-ramps and off-ramps. If the US had one homogeneous Internet network, the system would work flawlessly. But, Shklar notes, there are 47 different Internet backbone providers in America alone and 4,300 ISPs. "Any kind of average Web transaction from server to user ends up crossing at least three transit points," he says. "Each of the backbone providers is very good at running their own network. But they don't care at all about making optimal routing decisions based on the performance that the users will get."

The report indicated that Internet backbone provider Savvis Corp., based in St. Louis, Missouri, had the fastest average page download time of all major providers, at 4.905 seconds per page. The runner-up was Cable & Wireless with a 5.008-second average, followed by CompuServe (5.664) and UUNET (5.912). AT&T, by comparison, ranked 15th with an average download of 8.559 seconds, and Netcom came in 31st, with a time of 15.181 seconds per page, Keynote indicated. Overall, 34 backbone purveyors were surveyed.
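[For readers wondering what a measurement like the one described above boils down to mechanically, here is a minimal sketch - not Keynote's actual tool or configuration - of timing repeated page downloads and averaging per target. The URLs, sample count, and interval are illustrative placeholders only.]

    # Minimal sketch of interval-based page-download timing (illustrative only).
    import time
    import statistics
    import urllib.request

    TARGETS = {
        "provider-a": "http://www.example.com/",   # placeholder URLs, not Keynote's targets
        "provider-b": "http://www.example.org/",
    }
    SAMPLES = 4            # Keynote sampled every 15 minutes for 45 days; trimmed here
    INTERVAL_SECONDS = 5   # shortened interval so the sketch finishes quickly

    def timed_fetch(url, timeout=30):
        """Return seconds taken to download the full page body, or None on failure."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()
        except OSError:
            return None
        return time.monotonic() - start

    def main():
        results = {name: [] for name in TARGETS}
        for _ in range(SAMPLES):
            for name, url in TARGETS.items():
                elapsed = timed_fetch(url)
                if elapsed is not None:
                    results[name].append(elapsed)
            time.sleep(INTERVAL_SECONDS)
        for name, times in results.items():
            if times:
                print(f"{name}: {statistics.mean(times):.3f} s average over {len(times)} fetches")
            else:
                print(f"{name}: no successful fetches")

    if __name__ == "__main__":
        main()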
Savvis president and CEO Sam Sanderson said that the report - which will be published in Boardwatch magazine on Friday - indicated that Internet backbone providers will have to continue purchasing connections to large networks, rather than peering at public network access points, in order to maintain their level of service. "We're going to continue that strategy," Sanderson says.

Brian Robertson, chief technical officer at PlanetAll, a Web site that helps friends locate missing pals and colleagues over the Internet, said the report was "interesting" and agreed that the private-network approach will have to continue if electronic commerce is to flourish. "There are a lot of times when a site looks like it is down on the Web, but if you ping it from another location, it is OK," says Robertson. "The idea is to keep the traffic off the Internet and avoid the big switches, like MAE East and MAE West. If you have a bunch of redundant connections, get the packet of data off the Internet onto a separate network, run it through a SONET ring to the next closest data center. You generally try to minimize the number of hops of data across the network. That kind of service will have to continue for Web commerce to survive."
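[Robertson's "ping it from another location" observation is essentially a vantage-point argument. A minimal sketch of that idea follows - assuming a placeholder hostname and a plain TCP connect check - showing the sort of probe that, when repeated from hosts on different backbones, separates "the site is down" from "my path to the site is down".]

    # Minimal reachability probe to run from several vantage points (illustrative only).
    import socket
    import time

    def check(host, port=80, timeout=5.0):
        """Try a TCP connection; return (reachable, seconds_elapsed)."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True, time.monotonic() - start
        except OSError:
            return False, time.monotonic() - start

    if __name__ == "__main__":
        # Run this same script from machines on different networks; if some
        # succeed and some fail, the problem is in the paths between
        # networks, not the web site itself.
        ok, elapsed = check("www.example.com")   # placeholder hostname
        print(f"reachable={ok} elapsed={elapsed:.2f}s from this vantage point")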
Although the recent spam debate has provided me enough fodder for the humor mill, when I went to check out this faux backbone performance index, I was humored by clicking on the link to #10 (dataXchange). It seemed to take me somewhere unbecoming of a backbone provider :) Cheers to keynote for another exemplary listing.

brad reynolds
ber@cwru.edu
"Faith: not wanting to know what is true." -- Friedrich Nietzsche
I attended a [XIWT] meeting recently at which the CEO of Keynote made some very interesting, direct and candid statements about what he feels is the applicability (NOT!) of his measurements to the published ISP performance rankings. Given that I am in overlapping space with Keynote and may have skewed hearing, I wonder if Tracy Monk (CAIDA) or others who were in the meeting can try paraphrasing what he said.

Regards,
John Leong
have gotten too many comments/questions about my take on the keynote thing for which i posted a reference yesterday (emphasizing that i was neither endorsing nor dismissing it)

----------------------------------------
(THIS MESSAGE HAS NOTHING YOU SHOULD PUT NEAR YOUR ROUTER, and is Statistically Unlikely to change your behavior in the least, so i would put it off till major downtime beyond your control if you bother reading it at all)
----------------------------------------

i admit i was avoiding extending judgement (you all do it so well you don't need me) but apparently some of you think i should (extend) -- and since a lot of you turn out to be paying my salary, who am i to disobey (and since randy's already procmail'ed me out, perhaps i'm not putting any more appendages at risk. heck, i would assume half the list filters out anything with 'keynote' in the subject by now, so maybe no one's reading this anyway. yum yum)

here goes (warning: go get coffee. it feels like the millionth time this has been said, sorry, blame the guys that asked for it):

(you can also grep for 'Bottom Line' below (but do it twice, i have 2, natch) and skip the rest. or if you already know the question, my answer's: No. there. you're done. you can owe me the 5 minutes i just saved you)

the Keynote stuff as originally done would have been an interesting set of data and methodologies to Start with and help toward bridging a crevice-fast-becoming-a-canyon (i.e., the gap between Internet Operations and Research). i reckon there's a strong contingent in the community that perceives the keynote stuff as just dumping landfill in that canyon. won't argue. frankly i think it doesn't matter in the long term. you all (ISPs) have survived worse; you'll survive keynote.

those making a fuss over it seem to me to be consistently missing the point, which manages to get Blissfully Ignored: the Keynote study got more attention (yes, controversial, so?) than any other study (or ISP, or vendor, or protocol) that i can remember (yes, partly cuz boardwatch took the data and went inference-aerobic on us, but keynote didn't exactly try to stop them; PR is PR when you're selling media, yum yum).

oh my, whatever could it mean? is it possible that enough Internet users (and even some of you ISPs; you're so cute) are so desperate for a standard, professional, neutral metric/methodology for measuring performance/workload/qos (whatever that is) that you'll give anything a shot you can find? (gosh, aren't you guys as bored of hearing this stuff as i am of saying it?)

but wait, it gets better: while i'm off gallivanting around the Internet on your tax dollars (don't worry, i'm not getting many of them) trying to bring some folks together and start behaving like we actually have some common inter/cross-ISP geek-compatible verifiability goals (trying to start CAIDA, extend NLANR, get OC3MONs out there, draw pictures to show how bad the macroscopic stuff continues to get, etc.) -- these keynote/boardwatch folks actually went off and did Real Pinging to really sexy-sounding FQDNs, and hired lots of folks to make flashy web pages/charts/moving.gifs, write fluffy white papers and sell them for lots of money to ISPs, who apparently have enough to buy (and fund someone to complain about) the paper but not to think about the underlying issues and do something constructive about them (or even contribute to those trying to).

there was [is] a total Market Void there, guys; that it was filled is pretty econ 101-ish, ne? why is everyone continuing to act so flustered by it?
shouldn't we rather have been surprised that (1) it didn't happen sooner, and/or (2) no one else has come in and tried to do something `better' or charge less? what did we all think, that this market would be Different from Others? that we could just send a letter to some editor and say 'oh, no, you can't measure it that way, that's not statistically valid; you have to wait for us to figure out htf to do this'? don't think so, guys. (and if you're expecting it to come from us research geeks, then you're not paying us fast enuf, nor giving us enough data/access/warm.and.fuzzy.feelings/help)

hey, come to think of it, wasn't the whole point of XIWT so you could do this yourself? as John said, XIWT invited this Keynote fellow to speak (i think the word Tracie used to me was 'mind-bending'). as they say in their web pages, they apparently measure 1 or 2 isps in a city, then draw inferences about `the city's connectivity' to the Net. (``er, does the word Topology mean anything to you?'' ``uh, the market we're targeting wouldn't understand that stuff; it would only confuse them.'') cool. i wonder if that's how we sell nuclear weapons.

yes, keynote did articulate some durable insights (`try to minimize the number of hops of data across the network' -- mmmm). oh, and be sure not to miss their 'discoveries about the internet' at http://www.keynote.com/measures/top10.html, neato-cool, with goodies like:

   Keynote's measurements demonstrate that most of their performance
   problems occur out in the Internet's infrastructure somewhere between
   the web site and its users

(whoa, that's really going out on a limb)

   .... CompuServe, CWIX, SAVVIS, and other less-well-known backbone
   providers currently offer some of the fastest Internet backbone
   services. We believe that's because their backbones are relatively
   underutilized compared to those of larger providers such as Sprint
   and InternetMCI. Large, capacious, well-engineered backbones (just
   like most of our freeways) can perform more slowly than others if
   they have to service more customers at once.

(quick, everyone change over to the underutilized backbone! that will fix everything!)

   Internet brownouts...produce measurable performance degradation for
   all users. However, the Internet still continues to operate, albeit
   more sluggishly, while the accident damage is being repaired and
   traffic is re-routed around the accident scene.

(even with all those contradicting Battered Gagging Provably suboptimal roadsigns -- 's amazin', isn't it)

   9. Internet performance improves substantially during the day on
   holiday periods such as Christmas and Thanksgiving. We believe that's
   because users spend time on other activities instead of surfing the
   Net from home or office.

(do you feel like you're reading your horoscope yet?)

   10. Network engineers at backbone providers tend to focus on
   optimizing traffic flow within their own networks. They tend to
   de-emphasize or ignore connectivity and end-to-end response time to
   users on other networks. The Internet, however, is an interconnection
   of many backbones and private networks, with the result that users
   rarely access web sites that are directly connected to the same
   backbone that they are.

(it's them silly network engineers again. what'r they thinkin'?)

on the (waaaay) other hand, i got really tired of listening to people bash boardwatch for how inaccurate their ISP catalog thing was (``kc, c'mmmmoooooon, bad data is worse than none at all.'' pshaw.
put up or shut up; the world's not waiting for you). without that database CAIDA couldn't have done http://www.caida.org/Tools/Mapnet/ which (at least java-enabled) folks really seem to get a kick out of. some of them providers, even. (no, no, of course not randy. let's not get crazy here.)

and to do this, yes, i used jack's sub-accurate data. (he even gave me permission) yes, some of it's wrong. yes, some backbones might not even be there (especially new or stealth ones). and backbone topology info's changing under our noses, so even if it's right today it's probably wrong tomorrow. bummer. sounds like we can never get accurate data; we just shouldn't do anything at all. BZZZZZT. survey saaays: 0. guess what?! there's a button you can use to send in your correct data to CAIDA if you want to be able to see the Real Thing (yes, oob-authenticated). [kc points to water; pauses dramatically.]

so yeah, their work has accuracy problems. maybe even more than mine (good thing they get paid more; i think having problems is such Haarrrrd work --) -- so what. i'll dare/pay someone else to do better.

oh, there's another thing you may wanna lose sleep over -- this city-framed measurement methodology thing (which measures at best an isp's site-specific web hosting capabilities, Not backbone performance (sorry jack)) might actually be more defensible (yeah yeah, i know, nowhere to go but up) if/as we move toward this gigapop model thing (if you have to ask you don't wanna know). but then they'd [someone'd] have to do a little more than measure a single machine at an ISP's web hosting facility if they wanted a chance at real insight.

so Bottom Line, ftr: No, i'm Not Endorsing the keynote or boardwatch backbone performance comparison studies, and especially not the marketing spin/web pages. At All. ok? (or the netmedic stuff. or anything else i see out there. but i'm not quite unbiased.) on the other hand, there's a whole lotta people out there apparently willing to pay those guys a lot more than they're willing to pay me (well, depends on their tax bracket i guess), so who am i to gripe. lots of people buy windows95 too (and lots gripe. with Absolutely Zero effect on that fellow's profit margin. shocking.)

Bottomer Line: (ok, we're low enuf to scrape barrel now) there's no free lunch. if you want to be able to tell your customers how you're doing, you're gonna have to be able to measure it, which means you're gonna have to define what to measure (or pay someone you trust to do so). otherwise, fasten your seatbelts; i guarantee lots more low-flying misleading benchmarks from the i[nd|ll]ustrious Keynotes, Boardwatches and Netmedics of the world, designing charming products and services to meet the needs of suits and corporate types with no clue about (nor interest in) how the Internet works.

(i hate broken records i hate broken records i hate broken records more than smd hates analogies smd hates analogies smd hates analogies): unless you chaps mobilize, insights re how the Internet operates are going to continue to be defined by faulty methodologies. anything else requires serious OperationsResearchAgenda-driven Cooperation among ISPs, and this Very Idea still seems to be slightly less appetizing to them than spinach is to 8 year olds.

(if it makes you feel any better, i'm not planning on going anywhere, but then after this message who else would hire me)

k