
http://biz.yahoo.com/djus/021113/0217000178_2.html -- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben -- -- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --

Since it seems to be public, no harm in sharing it. http://biz.yahoo.com/djus/021113/1031000599_1.html I am sure a lot of customers will feel better. Stronger balance sheet means Switch and Data will be a definite survivor and so will PAIX. Looks like we're coming back to a peering location consolidation again. Equinix and S&D (PAIX) will be the new peering exchanges. Question is, outside of 6 exchanges domestically, what scenario would force a move to doubling that to 12? Long haul circuits rising again, or perhaps some new killer app. Right now it seems domestically 6 may be all we need. -- David Diaz dave@smoton.net [Email] pagedave@smoton.net [Pager] Smotons (Smart Photons) trump dumb photons

Equinix and S&D (PAIX) will be the new peering exchanges.
I hate to think how many exchange points that leaves out. Telehouse and Terremark come to mind. Even if there are some dominant players, domestic neutral exchange points are still a diverse, vibrant market.
Question is, outside of 6 exchanges domestically, what scenario would force a move to doubling that to 12? Long haul circuits rising again, or perhaps some new killer app. Right now it seems domestically 6 may be all we need.
I'm putting the number closer to 40 (the "NFL cities") right now, and 150 by the end of the decade, and ultimately any "metro" with population greater than 50K in a 100 sq Km area will need a neutral exchange point (even if it's 1500 sqft in the bottom of a bank building.) -- Paul Vixie

I'm putting the number closer to 40 (the "NFL cities") right now, and 150 by the end of the decade, and ultimately any "metro" with population greater than 50K in a 100 sq Km area will need a neutral exchange point (even if it's 1500 sqft in the bottom of a bank building.)
What application will require this dense peering? Pete

On Thu, Nov 14, 2002 at 10:00:48AM +0200, Petri Helenius scribbled: | | > I'm putting the number closer to 40 (the "NFL cities") right now, and | > 150 by the end of the decade, and ultimately any "metro" with population | > greater than 50K in a 100 sq Km area will need a neutral exchange point | > (even if it's 1500 sqft in the bottom of a bank building.) | | What application will require this dense peering? To power the IPv6 networks of refrigerators, ovens, and light switches, as well as your 3G video conferencing phone | | Pete

Michael C. Wu wrote:
On Thu, Nov 14, 2002 at 10:00:48AM +0200, Petri Helenius scribbled: | | > I'm putting the number closer to 40 (the "NFL cities") right now, and | > 150 by the end of the decade, and ultimately any "metro" with population | > greater than 50K in a 100 sq Km area will need a neutral exchange point | > (even if it's 1500 sqft in the bottom of a bank building.) | | What application will require this dense peering?
To power the IPv6 networks of refrigerators, ovens, and light switches, as well as your 3G video conferencing phone
All of the above combined don't generate bandwidth even near what a current generation peer-to-peer file sharing client does. The mentioned applications are not really delay sensitive to the sub-20ms range either. Pete
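Petri's comparison is easy to sanity-check with back-of-envelope arithmetic. A rough sketch in Python; every per-device rate below is an invented assumption, not a measurement, and the point is only the orders of magnitude:

# Rough sanity check of the claim that household gadgets don't come close
# to one file-sharing client. All rates are assumed placeholder figures.
APPLIANCE_KBPS = {
    "refrigerator telemetry": 0.1,    # a short status report now and then
    "oven telemetry":         0.1,
    "light switch events":    0.01,   # a few bytes per state change
    "3G video call":          384.0,  # a typical 3G video-call rate
}

p2p_client_kbps = 2_000.0             # one client filling a ~2 Mbit/s pipe

household_kbps = sum(APPLIANCE_KBPS.values())
print(f"gadgets + video phone: {household_kbps:7.1f} kbit/s")
print(f"one p2p client:        {p2p_client_kbps:7.1f} kbit/s")
print(f"ratio: ~{p2p_client_kbps / household_kbps:.0f}x")
# The appliances alone are statistical noise; even with the video phone
# included, the p2p client is several times larger -- and none of these
# flows care whether the exchange point is 10 km or 1000 km away.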

Thus spake "Michael C. Wu" <keichii@iteration.net>
On Thu, Nov 14, 2002 at 10:00:48AM +0200, Petri Helenius scribbled: | | > I'm putting the number closer to 40 (the "NFL cities") right now, and | > 150 by the end of the decade, and ultimately any "metro" with population | > greater than 50K in a 100 sq Km area will need a neutral exchange point | > (even if it's 1500 sqft in the bottom of a bank building.) | | What application will require this dense peering?
To power the IPv6 networks of refrigerators, ovens, and light switches, as well as your 3G video conferencing phone
None of these applications have any requirement for peering every 100 sq km. I'd expect my refrigerator, oven, light switches, etc. to be behind my house's firewall and only talk using link-local addresses anyway. Try again. S

PV> Date: 14 Nov 2002 05:14:30 +0000 PV> From: Paul Vixie [ re number of US exchange points ] DD> Right now it seems domestically 6 may be all we need. PV> I'm putting the number closer to 40 (the "NFL cities") right PV> now, and 150 by the end of the decade, and ultimately any PV> "metro" with population greater than 50K in a 100 sq Km area PV> will need a neutral exchange point (even if it's 1500 sqft in PV> the bottom of a bank building.) Are we discussing: 1) locations primarily for peering between large carriers, or 2) carrier hotels including virtually all providers, where cheap FastE/GigE peering runs are easily justified? If #1, I agree with David. In the case of the latter, I think I see what Paul is saying. IMESHO, local/longhaul price imbalance and the growth of distributed hosting {would|will} help fuel the smaller exchanges. Eddy -- Brotsman & Dreger, Inc. - EverQuick Internet Division Bandwidth, consulting, e-commerce, hosting, and network building Phone: +1 (785) 865-5885 Lawrence and [inter]national Phone: +1 (316) 794-8922 Wichita

Well, thanks for the agreement, Ed. Philosophically, I agree with Paul. I think 40 exchange points would be a benefit. At this time, though, there is no model that would support it. 1) Long haul circuits are dirt cheap. Meaning distance peering becomes more attractive. L3 also has an MPLS product so you pay by the meg. I am surprised a great many peers are using this. But apparently CFOs love it. 2) There is a lack of a killer app requiring peering every 100 sq Km. VoIP might be the app. Seems to be gaining a great deal of traction. Since it's obvious traffic levels would skyrocket, and latency is a large concern, and there is a need to connect to the local voice TDM infrastructure, local exchanging is preferred. However, many VoIP companies claim latency right now is acceptable and they are receiving no major complaints. So we are left to guess at other killer apps: video conferencing, movie industry sending movies online directly to consumers, etc. 3) In order to get to the next level of peering exchanges... from 6 major locations to 12... we are going to need the key peers in those locations. Many don't want to manage that growing complexity for diminishing returns, as well as the increased cost in equipment. Perhaps it's up to the key exchange companies to tie fabrics together allowing new (tier2 locations) to gain visibility to peers at other larger locations. This would allow peers at the larger locations to engage in peering discussions, or turn ups, and when traffic levels are justified a deployment to the second location begins. Problem with new locations is 'chicken and the egg.' Critical mass must be achieved before there is a large value proposition for peers. And to everyone that emailed me with their "we also are an exchange" email: yes, I readily admit there are other companies doing peering besides the ones I mentioned. I just did a quick post so I did not list every single exchange company. So you have my apologies, and I won't even hold it against you all that you were sales people.... dave At 9:52 +0000 11/14/02, E.B. Dreger wrote:
PV> Date: 14 Nov 2002 05:14:30 +0000 PV> From: Paul Vixie
[ re number of US exchange points ]
DD> Right now it seems domestically 6 may be all we need.
PV> I'm putting the number closer to 40 (the "NFL cities") right PV> now, and 150 by the end of the decade, and ultimately any PV> "metro" with population greater than 50K in a 100 sq Km area PV> will need a neutral exchange point (even if it's 1500 sqft in PV> the bottom of a bank building.)
Are we discussing:
1) locations primarily for peering between large carriers, or 2) carrier hotels including virtually all providers, where cheap FastE/GigE peering runs are easily justified?
If #1, I agree with David. In the case of the latter, I think I see what Paul is saying. IMESHO, local/longhaul price imbalance and the growth of distributed hosting {would|will} help fuel the smaller exchanges.
Eddy -- Brotsman & Dreger, Inc. - EverQuick Internet Division Bandwidth, consulting, e-commerce, hosting, and network building Phone: +1 (785) 865-5885 Lawrence and [inter]national Phone: +1 (316) 794-8922 Wichita
-- David Diaz dave@smoton.net [Email] pagedave@smoton.net [Pager] Smotons (Smart Photons) trump dumb photons

Thus spake <fkittred@gwi.net>
On Thu, 14 Nov 2002 10:22:09 -0500 David Diaz wrote:
2) There is a lack of a killer app requiring peering every 100 sq Km.
David;
I recommend some quality time with journals covering South Korea, broadband, online gaming and video rental.
Current peering levels suggest that it is cheaper to haul the traffic to a dozen or so points than it is to manage peering at several hundred points. Wasting bandwidth is a lot cheaper than wasting humans. S
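Stephen's tradeoff reduces to a toy cost model. A sketch in Python; every constant is a hypothetical placeholder, and the only point is the shape: per-site fixed costs (gear, racks, humans) grow linearly with the number of exchanges while backhaul savings taper off:

# Toy model: haul traffic to a few big exchanges vs. peer at many small ones.
SITE_FIXED_PER_MONTH = 8_000.0   # assumed router + rack + remote hands, per site
ENGINEER_PER_MONTH = 12_000.0    # assumed loaded cost; one engineer per 20 sites
BACKHAUL_PER_MBPS = 100.0        # assumed longhaul transport, $/Mbps-month
TRAFFIC_MBPS = 10_000.0          # total traffic to be exchanged

def monthly_cost(sites: int) -> float:
    # More sites => shorter average haul; model backhaul as falling with
    # the square root of site count (denser sites, shorter circuits).
    backhaul = BACKHAUL_PER_MBPS * TRAFFIC_MBPS / sites ** 0.5
    fixed = SITE_FIXED_PER_MONTH * sites
    humans = ENGINEER_PER_MONTH * sites / 20
    return backhaul + fixed + humans

for n in (6, 12, 40, 150, 400):
    print(f"{n:4d} exchanges: ${monthly_cost(n):>12,.0f}/month")

With these made-up inputs the total bottoms out near a dozen sites and climbs steeply past that -- roughly the "dozen or so points" above; real prices move the knee, but the linear human-and-gear term is what punishes hundreds of sites.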

Wired covered several of these topics in their August issue. http://www.wired.com/wired/archive/10.08/korea.html The article points out several subtle, yet fundamental, changes that happen socially and psychologically once the broadband network is available everywhere, to virtually everyone, all the time. We have yet to experience this in the US. I suspect that when it happens, it will be much different than we expect it to be, technically and otherwise. We still have to remember that for all the hype about the Internet, the killer app is still email and instant messaging. The "killer apps" on Internet2 (video conferencing, digital libraries, media-rich collaboration), which give some indication of what the future killer app will be, seem to be equally mundane (but exciting at the same time). Pete. On Thu, 14 Nov 2002 fkittred@gwi.net wrote:
On Thu, 14 Nov 2002 10:22:09 -0500 David Diaz wrote:
2) There is a lack of a killer app requiring peering every 100 sq Km.
I recommend some quality time with journals covering South Korea, broadband, online gaming and video rental.

Still seems that none of these "requires" peering every 100 sq km. Latency is still not a factor in this case. People seem to prefer cost over quality at this time. Good, Fast, Cheap: pick any two. As far as digital libraries and content and such... proxies and caches would fill the role here. Akamai content servers, or caches fed by something akin to Cidera's satellite feed to your caches [sitting on your network], would fill the need quite nicely. Local peering has 2 benefits right now: 1) reducing network costs (transit and backbone bandwidth) 2) decreasing latency Right now these two benefits are not a factor in the present environment, in my opinion.... At 10:22 -0700 11/14/02, Pete Kruckenberg wrote:
Wired covered several of these topics in their August issue.
http://www.wired.com/wired/archive/10.08/korea.html
The article points out several subtle, yet fundamental, changes that happen socially and psychologically once the broadband network is available everywhere, to virtually everyone, all the time. We have yet to experience this in the US. I suspect that when it happens, it will be much different than we expect it to be, technically and otherwise.
We still have to remember that for all the hype about the Internet, the killer app is still email and instant messaging. The "killer apps" on Internet2 (video conferencing, digital libraries, media-rich collaboration), which give some indication of what the future killer app will be, seem to be equally mundane (but exciting at the same time).
Pete.
On Thu, 14 Nov 2002 fkittred@gwi.net wrote:
On Thu, 14 Nov 2002 10:22:09 -0500 David Diaz wrote:
2) There is a lack of a killer app requiring peering every 100 sq Km.
I recommend some quality time with journals covering South Korea, broadband, online gaming and video rental.
-- David Diaz dave@smoton.net [Email] pagedave@smoton.net [Pager] Smotons (Smart Photons) trump dumb photons

On Thu, 14 Nov 2002 12:36:54 -0500 David Diaz wrote:
People seem to prefer cost over quality at this time.
Good, Fast, Cheap: pick any two.
Honey, part of our success is that I don't accept the above. Sooner or later, you will have to compete with someone who believes: "Good, Fast, Cheap: we do all three." When one really knows one's field, it is possible to design simple systems. In the networking world, the qualities simplicity brings are: Good (reliable), Fast, Cheap. In a field where people think that it is perfectly acceptable to run MPLS through a NAT connected via PPP over Ethernet over ATM, it isn't hard to be simpler than the competition. good luck, fletcher

Anyone that calls me honey is in question. It's standard, you can't have everything in life. You attempt to achieve all three; however, it's all relative. You can have a DSL line now instead of a T1; it's fast and cheap but most aren't as good as a T1 and the SLAs aren't there, right? Usually you either build your network to one of two designs: avg sustained traffic levels, or peak traffic levels. Avg sustained means that during peak times you might have higher latency, but that you have not over-bought capacity... Peak traffic design means you looked at your max burst levels and bought enough capacity etc. to handle that load. The rest of the time you have excess capacity, but your QoS is always great. You will have a higher cost basis here. We all strive for all three. I used to almost try a TDM approach, where I would try and balance business day users, residential users, and backup service users on the network. They used 3 distinct time frames during the day. In this way the network was rarely idle, but each type of user's peak time was not in conflict with another. So if you would like to say you can sell me an OC48 at Aleron's (or fill in the ISP's name) IP backbone quality, at Cogent's pricing of less than $30/meg... great... At 9:55 -0500 11/15/02, fkittred@gwi.net wrote:
On Thu, 14 Nov 2002 12:36:54 -0500 David Diaz wrote:
People seem to prefer cost over quality at this time.
Good, Fast, Cheap: pick any two.
Honey, part of our success is that I don't accept the above. Sooner or later, you will have to compete with someone who believes:
"Good Fast Cheap
we do all three."
When one really knows one's field, it is possible to design simple systems. In the networking world, the qualities simplicity brings are:
Good (reliable), Fast, Cheap.
In a field where people think that it is perfectly acceptable to run MPLS through a NAT connected via PPP over Ethernet over ATM, it isn't hard to be simpler than the competition.
good luck, fletcher

On Fri, 15 Nov 2002 10:08:22 -0500 David Diaz wrote:
Anyone that calls me honey is in question.
Questioned and questioning, that's me ;-)
It's standard, you can't have everything in life. You attempt to achieve all three; however, it's all relative. You can have a DSL line now instead of a T1; it's fast and cheap but most aren't as good as a T1 and the SLAs aren't there, right?
"Standard" in this case means common knowledge? Questioning common knowledge can be advantageous. There is no reason that a DSL line can't be more reliable than "most" (as you put it) T1s.
Usually you either build your network to one of two designs: avg sustained traffic levels, or peak traffic levels.
Accepted. We go for peak traffic levels and then add a good buffer on top of that just in case.
Avg sustained means that during peak times you might have higher latency, but that you have not over-bought capacity... Peak traffic design means you looked at your max burst levels and bought enough capacity etc. to handle that load. The rest of the time you have excess capacity, but your QoS is always great. You will have a higher cost basis here.
Nope. Depends how you design your network. The cost basis of the network is dominated by capital equipment costs (router capacity). A major factor is also labor. 'Fat pipes' are relatively cheap. I know our costs are lower and quality is higher than our competitors and I believe the reason is that we go for a simple network designed around cheap routers and fat pipes. We made the decision that it was better to design the network for peak capacity (over-buy) than deal with the consequences of trying to squeeze out that last 20%. We strive never to run a link at over 60%. Paying for that extra capacity is a lot cheaper than paying for ATM, MPLS, "whatever-happy-BS-they-are-peddling-this-month", support calls, engineers for configurations, customer churn, etc.
We all strive for all three, I used to almost try a TDM approach. Where I would try and balance business day users, residential users, and backup service users on the network. They used 3 distinct time frames during the day. In this way the network was rarely idle, but each type of users peak time was not in conflict with another.
So if you would like to say you can sell me an OC48 at Aleron's (or fill in the ISP's name) IP backbone quality, at Cogent's pricing of less than $30/meg... great...
Just a note: when having an argument, if you find yourself putting words in an opponent's mouth, it is probably a sign of weakness in your argument. We are not in the IP backbone business. However, I like to think that our pricing for consumer broadband is similarly disruptive. There are no bottlenecks on our network. We are quite reliable. If you want a $29.95 per month DSL link with no bandwidth caps or similar restrictions, and you can take delivery in our territory, I would be happy to have you as a customer! regards, fletcher
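The two designs David described, with Fletcher's 60% headroom rule applied, come down to a few lines of arithmetic. A minimal sketch; the traffic figures are invented for illustration:

# Sizing a link under the two designs discussed above. Numbers are made up.
PEAK_MBPS = 900.0      # observed peak load
AVG_MBPS = 300.0       # average sustained load
UTIL_CEILING = 0.60    # "never run a link at over 60%"

avg_design = AVG_MBPS / UTIL_CEILING    # sized to average; peaks will queue
peak_design = PEAK_MBPS / UTIL_CEILING  # sized to peak; idle off-peak

print(f"average-sustained design: buy {avg_design:.0f} Mbit/s "
      f"(peak load would hit {PEAK_MBPS / avg_design:.0%} of it)")
print(f"peak design: buy {peak_design:.0f} Mbit/s "
      f"(average utilization only {AVG_MBPS / peak_design:.0%})")

The first design over-subscribes at the busy hour, so latency suffers; the second carries the higher cost basis that Fletcher argues is still cheaper than complexity.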

On Fri, 15 Nov 2002 11:20:36 EST, fkittred@gwi.net said:
relatively cheap. I know our costs are lower and quality is higher than our competitors and I believe the reason is that we go for a simple network designed around cheap routers and fat pipes. We made
OK. I'll bite. What do you define as a "cheap" router, and just as important, what counts as a "fat" pipe where you are? You didn't choose the well-known router line from the well-known vendor(*) that handles line-speed packets, as long as you don't even whisper "ingress filtering" within its hearing, did you? -- Valdis Kletnieks Computer Systems Senior Engineer Virginia Tech (*) Yes, there's multiple answers to this one.

On Fri, 15 Nov 2002 14:37:08 -0500 Valdis.Kletnieks@vt.edu wrote:
relatively cheap. I know our costs are lower and quality is higher than our competitors and I believe the reason is that we go for a simple network designed around cheap routers and fat pipes. We made
OK. I'll bite. What do you define as a "cheap" router, and just as important, what counts as a "fat" pipe where you are?
Cheap is defined as the undepreciated Ciscos that UUnet threw out when the lyin' backbone engineers sold management the MPLS bill-of-goods in the late nineties. [ Why buy Juniper when you can get second-hand Cisco gear for almost free? ] Fat is 4 OC3s for uplinks at ~$200 per megabit and gigabit for internal at about $40 per fiber mile per month. This is consumer service in Northern New England. At those prices, it is far cheaper to "overbuy" than over-complicate. Naturally, in different geographic areas and different market niches your mileage may vary. Or at least offer an excuse to ignore me.
You didn't choose the well-known router line from the well-known vendor(*) that handles line-speed packets, as long as you don't even whisper "ingress filtering" within its hearing, did you?
Whispering is not exactly my style. regards, fletcher

Warning: this post won't configure a router. fkittred@gwi.net wrote:
On Thu, 14 Nov 2002 12:36:54 -0500 David Diaz wrote:
People seem to prefer cost over quality at this time. Good, Fast, Cheap. Honey, part of our success is that I don't accept the above. Sooner or later, you will have to compete with someone who believes:
"Good Fast Cheap
we do all three."
Huh, must be in marketing or sales, perhaps a CEO, even. * shrug * Hey, while we are at it: What is the difference between a Suit and an Engineer? The Engineer -knows- when he is lying. :P Now, the "Honey" comment? Sounds like a rather sticky "wicket"; not my style, I think I'll pass..... However, you -=can=- tell the poster isn't from Baltimore, though... I think I heard once that the Baltimore City slogan is "Welcome Home, Hon!" :D router> Conf t router # silobeth.....shilobeth...seilobath... router # oh, forget it! router # ^Z :\ <morning coffee> I know, Susan... I know. I won't quit my day job. Exiting, stage left. .... ... .. . (No offense intended to anyone, really...) (just lightening up the conversation) (C'Ya)

On Fri, 15 Nov 2002 11:42:46 -0500 Richard Irving wrote:
Huh, must be in marketing or sales, perhaps a CEO, even.
Yup, I am a CEO. I am also (still) one of the most experienced and best educated IP engineers around. It is fun being CEO. Rather than throw stones, you might want to celebrate the fact that being CEO is now on the career path of an engineer.
The Engineer -knows- when he is lying.
Well, I don't know about that. Engineers are without a doubt the most active class of liar I know (but then, I don't get out much). Sometimes I feel like it is real hard for engineers to know when they are telling the truth. I do know that it has been a great professional boon to have an IP engineering background and know when I am having smoke blown at me by other engineers. Dilbert, a cartoon about engineers, is so funny because it is so true. In the final analysis, Dilbert is as much of a weasel as any of the other characters. regards, fletcher

fkittred@gwi.net wrote:
Yup, I am a CEO.
1-900-psy-kick Call now, Mon, we're a waiting for ya!
I am also (still) one of the most experienced and best educated IP engineers around.
And humble, too. :\ [Said to a list where Van Jacobson and Vixie have been known to lurk]
Dilbert, a cartoon about engineers, is so funny because it is so true. In the final analysis, Dilbert is as much of a weasel as any of the other characters.
Admit it, you just like his Tie! :P On a more serious note, FWIW, did you know PacBell was the source of Scott's inspiration? He worked on some of the first privately available ISDN lines... and attended some of the earliest public NANOGs. He said after working there for almost 10 years, he decided it was Suicide, or Dilbert. The Rest is History. :D http://www.dilbert.com/comics/dilbert/news_and_history/html/about_scott_adam... .cheers. Hey, I just realized that DogBert and Pres Bush are both short... Coincidence? http://www.dilbert.com/comics/dilbert/news_and_history/images/origin4.gif
regards, fletcher <lurk mode on> <Susan, put down that keyboard... I am moving on...promise.>

When it first came out, Wired was a mag with the best quality printing on no substance I had ever seen; it really seemed like a borderline artist mag. The colors were amazing. I see now, upon looking at a recent issue, that their content seems to have improved dramatically. Brian ----- Original Message ----- From: "Pete Kruckenberg" <pete@kruckenberg.com> To: <nanog@merit.edu> Sent: Thursday, November 14, 2002 9:22 AM Subject: Re: PAIX
Wired covered several of these topics in their August issue.
http://www.wired.com/wired/archive/10.08/korea.html
The article points out several subtle, yet fundamental, changes that happen socially and psychologically once the broadband network is available everywhere, to virtually everyone, all the time. We have yet to experience this in the US. I suspect that when it happens, it will be much different than we expect it to be, technically and otherwise.
We still have to remember that for all the hype about the Internet, the killer app is still email and instant messaging. The "killer apps" on Internet2 (video conferencing, digital libraries, media-rich collaboration), which give some indication of what the future killer app will be, seem to be equally mundane (but exciting at the same time).
Pete.
On Thu, 14 Nov 2002 fkittred@gwi.net wrote:
On Thu, 14 Nov 2002 10:22:09 -0500 David Diaz wrote:
2) There is a lack of a killer app requiring peering every 100 sq Km.
I recommend some quality time with journals covering South Korea, broadband, online gaming and video rental.

DD> Date: Thu, 14 Nov 2002 10:22:09 -0500 DD> From: David Diaz DD> 1) Long haul circuits are dirt cheap. Meaning distance DD> peering becomes more attractive. L3 also has an MPLS product DD> so you pay by the meg. I am surprised a great many peers are DD> using this. But apparently CFOs love it Uebercheap longhaul would _favor_ the construction of local exchanges. Let's say I pay $100k/mo port and $10M/mo loop... obviously, I need to cut loop cost. If an exchange brings zero-mile loops to the table, that should reduce loop cost. Anyone serious will want a good selection of providers, and the facility offering the most choices should be sitting pretty. Likewise, I agree that expensive longhaul would favor increased local peering... but, if local loop were extremely cheap, would an exchange be needed? It would not be inappropriate for all parties to congregate at an exchange, but I'd personally rather run N dirt-cheap loops across town from my private facility. Hence I refer to an "imbalance" in loop/longhaul pricing; a large proliferation in exchanges could be precipitated by _either_ loop _or_ longhaul being "expensive"... and it seems expensive loop would be a more effective driver for local exchanges. DD> 2) There is a lack of a killer app requiring peering every DD> 100 sq Km. VoIP might be the app. Seems to be gaining a <minirant> By the time IP packets are compressed and QOSed enough to support voice, one essentially reinvents ATM or FR (with ATM seeming suspiciously like FR with fixed-length cells)... </minirant> DD> great deal of traction. Since it's obvious traffic levels DD> would skyrocket, and latency is a large concern, and there DD> is a need to connect to the local voice TDM infrastructure, Yes, although cost would trump latency. Once latency is "good enough", cost rules. Would I pay a premium to reduce latency from 50ms to 10ms for voice calls? No. DD> local exchanging is preferred. However, many VoIP companies DD> claim latency right now is acceptable and they are receiving DD> no major complaints. So we are left to guess at other killer DD> apps, video conferencing, movie industry sending movies DD> online directly to consumers etc. The above are "big bandwidth" applications. However, they do not inherently require exchanges... _local_ videoconferencing, yes. Local security companies monitoring cameras around town, yes. Video or newscasting, yes. Distributed content, yes. (If a traffic sink could pull 80% of its traffic from a local building where cross-connects are reasonably priced...) DD> 3) In order to get to the next level of peering exchanges... [ snip ] DD> Perhaps it's up to the key exchange companies to tie fabrics DD> together allowing new (tier2 locations) to gain visibility to DD> peers at other larger locations. This would allow peers at DD> the larger locations to engage in peering discussions, or DD> turn ups, and when traffic levels are justified a deployment DD> to the second location begins. Problem with new locations DD> is 'chicken and the egg.' Critical mass must be achieved DD> before there is a large value proposition for peers. Yes. Eddy -- Brotsman & Dreger, Inc. - EverQuick Internet Division Bandwidth, consulting, e-commerce, hosting, and network building Phone: +1 (785) 865-5885 Lawrence and [inter]national Phone: +1 (316) 794-8922 Wichita

At 18:31 +0000 11/14/02, E.B. Dreger wrote:
DD> Date: Thu, 14 Nov 2002 10:22:09 -0500 DD> From: David Diaz
DD> 1) Long haul circuits are dirt cheap. Meaning distance DD> peering becomes more attractive. L3 also has an MPLS product DD> so you pay by the meg. I am surprised a great many peers are DD> using this. But apparently CFOs love it
Uebercheap longhaul would _favor_ the construction of local exchanges.
Let's say I pay $100k/mo port and $10M/mo loop... obviously, I need to cut loop cost. If an exchange brings zero-mile loops to the table, that should reduce loop cost. Anyone serious will want a good selection of providers, and the facility offering the most choices should be sitting pretty.
This is an interesting and good point, but any carrier hotel provides the same thing.
Likewise, I agree that expensive longhaul would favor increased local peering... but, if local loop were extremely cheap, would an exchange be needed? It would not be inappropriate for all parties to congregate at an exchange, but I'd personally rather run N dirt-cheap loops across town from my private facility.
Hence I refer to an "imbalance" in loop/longhaul pricing; a large proliferation in exchanges could be precipitated by _either_ loop _or_ longhaul being "expensive"... and it seems expensive loop would be a more effective driver for local exchanges.
Tried this. Yes, you are right; problem is local loops are sometimes extremely difficult to get delivered in a timely manner, and upgrading them can be an internal battle with the CFO. To solve this, we deployed the BellSouth mix. I actually came up with the idea while having a terrible time getting private peering sessions up while at Netrail. 6 months was a ridiculous timeframe. BellSouth liked it and deployed it, eventually. So now you have a distributed optical exchange where you can point and click and drop circuits between any of the nodes; nodes were located at many colos and undersea fiber drops. Theoretically this meant the exchange was "colo" neutral. With flat rate loops it meant location wasn't important. Each node also allows for hairpinning, so you could do peering within the room at a reduced rate (since you weren't burning any ring-side capacity). The neat part was that customers would be able to see and provision their own capacity via a login and pwd. Also, with UNI 1.0, the IP layer would be able to upgrade capacity on the fly. No one has put that into production but real-world tests have worked. In reality a more realistic scenario was the ability of a customer to upgrade from an OC3 to an OC12. The ports were the same so it was just a setting on the NMS to change. It was a nice feature and meant engineers did not have to justify ESP feelings on how traffic would grow to a grouchy CFO.
DD> 2) There is a lack of a killer app requiring peering every DD> 100 sq Km. VoIP might be the app. Seems to be gaining a
<minirant> By the time IP packets are compressed and QOSed enough to support voice, one essentially reinvents ATM or FR (with ATM seeming suspiciously like FR with fixed-length cells)... </minirant>
DD> great deal of traction. Since it's obvious traffic levels DD> would skyrocket, and latency is a large concern, and there DD> is a need to connect to the local voice TDM infrastructure,
Yes, although cost would trump latency. Once latency is "good enough", cost rules. Would I pay a premium to reduce latency from 50ms to 10ms for voice calls? No.
I agree. A couple of off-list emails to me did not seem to understand this. Just because we post something does not mean it's our personal preference; it's just that we are posting the reality of what will likely happen, in our opinion. If there is not a competitive advantage, backed up by reduced cost or increased revenue, it would be a detriment to deploy it... more likely a CFO would shoot it down. Someone sent an example as if I am making the statement that no one needs more than 640K of RAM on their computer. Never made that analogy, but there is a limit. It also seems to me that shared supercomputer time is making a comeback. IBM seems to be pushing in that direction, and there are several grid networks being set up. The world changes. Let's face it. Has anyone talked about the protocols to run these super networks? Where we have something like 100-400 peering nodes domestically? Injecting those routes into our IGP? Talk about a complex design... now we need to talk about tricks to prevent the overflow of our route tables internally... OK, I can hear people getting ready to post stuff about reflectors etc. Truth is, it's just plain difficult to hit critical mass at a new exchange point. No one wishes to be 1st since there is little return. Perhaps these exchange operators need to prime the pump by offering tiered rates, the 1st 1/3 of peers deploying coming in at a permanent 50% discount.
DD> local exchanging is preferred. However, many VoIP companies DD> claim latency right now is acceptable and they are receiving DD> no major complaints. So we are left to guess at other killer DD> apps, video conferencing, movie industry sending movies DD> online directly to consumers etc.
The above are "big bandwidth" applications. However, they do not inherently require exchanges... _local_ videoconferencing, yes. Local security companies monitoring cameras around town, yes. Video or newscasting, yes. Distributed content, yes. (If a traffic sink could pull 80% of its traffic from a local building where cross-connects are reasonably priced...)
DD> 3) In order to get to the next level of peering exchanges...
[ snip ]
DD> Perhaps it's up to the key exchange companies to tie fabrics DD> together allowing new (tier2 locations) to gain visibility to DD> peers at other larger locations. This would allow peers at DD> the larger locations to engage in peering discussions, or DD> turn ups, and when traffic levels are justified a deployment DD> to the second location begins. Problem with new locations DD> is 'chicken and the egg.' Critical mass must be achieved DD> before there is a large value proposition for peers.
Yes.
Eddy -- Brotsman & Dreger, Inc. - EverQuick Internet Division Bandwidth, consulting, e-commerce, hosting, and network building Phone: +1 (785) 865-5885 Lawrence and [inter]national Phone: +1 (316) 794-8922 Wichita

Thus spake "E.B. Dreger" <eddy+public+spam@noc.everquick.net>
DD> 1) Long haul circuits are dirt cheap. Meaning distance DD> peering becomes more attractive. L3 also has an MPLS product DD> so you pay by the meg. I am surprised a great many peers are DD> using this. But apparently CFOs love it
Uebercheap longhaul would _favor_ the construction of local exchanges.
Incorrect. Cheap longhaul favors a few centralized exchanges. If there is no economic value in keeping traffic local, it is in carriers' interests to minimize the number of peering points.
Let's say I pay $100k/mo port and $10M/mo loop... obviously, I need to cut loop cost. If an exchange brings zero-mile loops to the table, that should reduce loop cost. Anyone serious will want a good selection of providers, and the facility offering the most choices should be sitting pretty.
Most vendor-neutral colos have cheap zero-mile loops.
Likewise, I agree that expensive longhaul would favor increased local peering... but, if local loop were extremely cheap, would an exchange be needed? It would not be inappropriate for all parties to congregate at an exchange, but I'd personally rather run N dirt-cheap loops across town from my private facility.
What is the cost of running N loops across town, vs. the cost of pushing that traffic to a remote peering location and back? Be sure to include equipment, maintenance, and administrative costs, not just circuits.
The above are "big bandwidth" applications. However, they do not inherently require exchanges... _local_ videoconferencing, yes. Local security companies monitoring cameras around town, yes. Video or newscasting, yes.
None of these applications require local exchanges. There is a slight increase in end-to-end latency when you must use a remote exchange, but very few applications care about absolute latency -- they only care about bandwidth and jitter.
Distributed content, yes. (If a traffic sink could pull 80% of its traffic from a local building where cross-connects are reasonably priced...)
Distributed content assumes the source is topologically close to the sink. The most cost-efficient way to do this is to put sources at high fan-out areas, as this gets them the lowest _average_ distance to their sinks. This doesn't necessarily mean that putting a CNN mirror in 100,000 local exchanges is going to reduce CNN's costs. S
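Stephen's question a few messages up -- N loops across town vs. backhauling to a remote exchange -- is at bottom a break-even calculation. A sketch with hypothetical prices; plug in real quotes for circuits, ports, gear, and admin time to get a meaningful answer:

# Break-even sketch: N metro loops vs. one remote peering location.
def metro_loops_cost(n_peers: int, loop: float, port_and_gear: float) -> float:
    """N point-to-point metro loops, each needing its own router port."""
    return n_peers * (loop + port_and_gear)

def remote_exchange_cost(longhaul: float, exchange_port: float,
                         gear: float) -> float:
    """One longhaul circuit plus one exchange port and one set of gear."""
    return longhaul + exchange_port + gear

n = 15  # assumed number of local peers
loops = metro_loops_cost(n, loop=2_000, port_and_gear=500)
remote = remote_exchange_cost(longhaul=20_000, exchange_port=5_000, gear=3_000)
print(f"{n} metro loops:    ${loops:,.0f}/month")
print(f"remote exchange: ${remote:,.0f}/month")

With these placeholder prices the remote exchange wins; halve the loop price and the loops win instead -- which is exactly Eddy's loop/longhaul imbalance point.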

SS> Date: Thu, 14 Nov 2002 13:32:55 -0600 SS> From: Stephen Sprunk SS> Incorrect. Cheap longhaul favors a few centralized SS> exchanges. If there is no economic value in keeping traffic SS> local, it is in carriers' interests to minimize the number of SS> peering points. True. However, cheap longhaul / expensive local means providers _will_ try to reduce loop costs, favoring "carrier hotels". SS> Most vendor-neutral colos have cheap zero-mile loops. Correct. In my original post... are we discussing #1 or #2? It seems as if #2. Where are we drawing the line between "carrier hotel" and "exchange"? I believe Paul was being perhaps more nebulous than today's definition of "exchange" when he referenced 1500 sq-ft in-bottom-of-bank-building facilities. SS> What is the cost of running N loops across town, vs. the cost SS> of pushing that traffic to a remote peering location and SS> back? Be sure to include equipment, maintenance, and SS> administrative costs, not just circuits. "It depends." SS> None of these applications require local exchanges. There is SS> a slight increase in end-to-end latency when you must use a SS> remote exchange, but very few applications care about SS> absolute latency -- they only care about bandwidth and SS> jitter. With bounded latency and "acceptable" typical throughput, one seeks to minimize jitter and cost. Jitter is caused by variable queue time, which is due to buffering, which is a side-effect of statmuxed traffic w/o strict { realtime delivery constraints | QoS | TDM-ish architecture }... yes. And N^2 makes full-mesh irresponsible when attempting to maximize bandwidth... yes. (I think buying full transit from 10 providers is well beyond the point of diminishing return; no offense to INAP.) Again... if loop is expensive, and providers are concentrated in "carrier hotels" with reasonably-priced xconns... when does it become an "exchange"? Note that some exchanges do not provide a switch fabric, but rather run xconns. Sure, one must factor in all the costs. The breakeven point varies, if it exists at all. SS> Distributed content assumes the source is topologically close SS> to the sink. The most cost-efficient way to do this is to put SS> sources at high fan-out areas, as this gets them the lowest SS> _average_ distance to their sinks. This doesn't necessarily SS> mean that putting a CNN mirror in 100,000 local exchanges is SS> going to reduce CNN's costs. It depends. Akamai certainly is overkill for smaller sites, and perhaps not cost-effective for others. However, high fan-out can be a _bad_ thing too: Assuming one has substantial traffic flow to various regions, why source everything from NYC? Why not replicate in London, AMS, SJO, IAD, CHI, DFW, LAX, SEA, KSCY? From a source's point, distribution makes sense when the cost of geographically-diverse server presence (incremental admin/hw, content distribution) is less than the cost of serving everything from a centralized point. Once that happens... if a substantial portion of Internet traffic were sourced from one local point, sinks would gravitate toward said point. Of course, I may well be stuck in distance-sensitive mode. If local loop is the primary expense... we're back to what you said about "few, centralized exchanges" and "many carrier hotels"? So, where's the dividing line? Eddy -- Brotsman & Dreger, Inc. - EverQuick Internet Division Bandwidth, consulting, e-commerce, hosting, and network building Phone: +1 (785) 865-5885 Lawrence and [inter]national Phone: +1 (316) 794-8922 Wichita

On Thu, 14 Nov 2002, David Diaz wrote:
2) There is a lack of a killer app requiring peering every 100 sq Km.
Peering every 100 sq km is absolutely infeasible. Just think of the number of alternative paths routing algorithms will have to consider. Anything like that would require serious redesign of Internet's routing architecture. --vadim
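Vadim's objection can be made concrete: potential bilateral sessions at one exchange grow quadratically with the number of participants, and the total multiplies with every additional exchange. A small sketch; the participants-per-site figure is an assumption:

# Quadratic growth of potential peering sessions (and candidate paths).
def full_mesh_sessions(n: int) -> int:
    """Bilateral sessions needed for a full mesh of n participants."""
    return n * (n - 1) // 2

PEERS_PER_EXCHANGE = 30  # assumed participants per site

for exchanges in (6, 40, 150, 400):
    sessions = exchanges * full_mesh_sessions(PEERS_PER_EXCHANGE)
    print(f"{exchanges:4d} exchanges x {PEERS_PER_EXCHANGE} peers: "
          f"{sessions:,} potential sessions")

And each session is another place a route can be learned, so the alternative paths BGP must evaluate -- and the state carried internally -- grow along with it.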

## On 2002-11-14 14:44 -0800 Vadim Antonov typed: VA> VA> VA> On Thu, 14 Nov 2002, David Diaz wrote: VA> VA> > 2) There is a lack of a killer app requiring peering every 100 sq Km. VA> VA> Peering every 100 sq km is absolutely infeasible. Just think of the VA> number of alternative paths routing algorithms will have to consider. VA> VA> Anything like that would require serious redesign of Internet's routing VA> architecture. What about: IPv6 with hierarchical geographical allocation? BGP with some kind of tag limiting it to <N> AS hops? (say N=2 or N=3?) VA> VA> --vadim VA> VA> -- Rafi

At 1:20 +0200 11/15/02, Rafi Sadowsky wrote:
## On 2002-11-14 14:44 -0800 Vadim Antonov typed:
VA> VA> VA> On Thu, 14 Nov 2002, David Diaz wrote: VA> VA> > 2) There is a lack of a killer app requiring peering every 100 sq Km. VA> VA> Peering every 100 sq km is absolutely infeasible. Just think of the VA> number of alternative paths routing algorithms will have to consider. VA> VA> Anything like that would require serious redesign of Internet's routing VA> architecture.
What about:
IPv6 with hierarchical geographical allocation?
BGP with some kind of tag limiting it to <N> AS hops? (say N=2 or N=3?)
Hop count won't work. You would see the same hop count at all your peering locations. How your traffic exited would depend on your IGP decision tree. Do we want to get into exporting MEDs or tags? And with >100 domestic peering points, how would you manage that? Vadim is correct: it would take a whole new protocol, and that is unlikely. Proof of that is IPv6. IPv4 is obviously still the big winner. Doesn't this model sound a bit like InterNAP to anyone? Why even have a backbone if you have peering in every location?
VA> VA> --vadim VA> VA>
-- Rafi
-- David Diaz dave@smoton.net [Email] pagedave@smoton.net [Pager] Smotons (Smart Photons) trump dumb photons

On Fri, 15 Nov 2002, Rafi Sadowsky wrote:
VA> > 2) There is a lack of a killer app requiring peering every 100 sq Km. VA> VA> Peering every 100 sq km is absolutely infeasible. Just think of the VA> number of alternative paths routing algorithms will have to consider. VA> VA> Anything like that would require serious redesign of Internet's routing VA> architecture.
What about:
IPv6 with hierarchical geographical allocation?
BGP with some kind of tag limiting it to <N> AS hops? (say N=2 or N=3?)
I can think of several ways to do it, but all of them amount to significant change from how things are being done in the current generation of backbones. --vadim
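One crude reading of Rafi's <N>-AS-hops tag, as a toy route filter: keep a route learned at a local exchange only if its AS_PATH is short, so only nearby origins survive and the path explosion stays bounded. This sketches the idea only; it is not how any deployed BGP implementation behaves, and the ASNs and prefixes are documentation values:

# Toy filter: accept routes at a local exchange only within max_hops ASes.
def keep_route(as_path: list, max_hops: int = 2) -> bool:
    """Keep routes originated within max_hops ASes of the peering point."""
    return len(as_path) <= max_hops

routes = {
    "198.51.100.0/24": [64501],                 # the peer originates it
    "203.0.113.0/24": [64501, 64502],           # one AS behind the peer
    "192.0.2.0/24": [64501, 64502, 64503],      # too far: filtered
}

for prefix, path in routes.items():
    verdict = "keep" if keep_route(path) else "drop"
    print(f"{prefix}: AS_PATH {path} -> {verdict}")

The catch, as David notes above, is that AS-hop count says nothing about geography -- a short AS_PATH can still cross the continent -- so any such tag would have to encode locality, not just path length.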

Voice of reason... The only possible reason I can think of is if these data networks replace the present voice infrastructure. Think about it: if we really all do replace our phones with some video screen like in the movies, then yes, most of those calls stay local within the cities. Mom calling son, etc. So we can think of these "peering centers" as replacements for the 5-10 COs in most average cities. Otherwise, what apps require such dense peering? At 14:44 -0800 11/14/02, Vadim Antonov wrote:
On Thu, 14 Nov 2002, David Diaz wrote:
2) There is a lack of a killer app requiring peering every 100 sq Km.
Peering every 100 sq km is absolutely infeasible. Just think of the number of alternative paths routing algorithms will have to consider.
Anything like that would require serious redesign of Internet's routing architecture.
--vadim
participants (14)
- Alex Rubenstein
- Brian
- David Diaz
- E.B. Dreger
- fkittred@gwi.net
- Michael C. Wu
- Paul Vixie
- Pete Kruckenberg
- Petri Helenius
- Rafi Sadowsky
- Richard Irving
- Stephen Sprunk
- Vadim Antonov
- Valdis.Kletnieks@vt.edu