
At 18:31 +0000 11/14/02, E.B. Dreger wrote:
DD> Date: Thu, 14 Nov 2002 10:22:09 -0500
DD> From: David Diaz

DD> 1) Long haul circuits are dirt cheap. Meaning distance
DD> peering becomes more attractive. L3 also has an MPLS product
DD> so you pay by the meg. I am surprised a great many peers are
DD> using this. But apparently CFOs love it
Uebercheap longhaul would _favor_ the construction of local exchanges.
Let's say I pay $100k/mo port and $10M/mo loop... obviously, I need to cut loop cost. If an exchange brings zero-mile loops to the table, that should reduce loop cost. Anyone serious will want a good selection of providers, and the facility offering the most choices should be sitting pretty.
This is an interesting and good point, but any carrier hotel provides the same thing.
Likewise, I agree that expensive longhaul would favor increased local peering... but, if local loop were extremely cheap, would an exchange be needed? It would not be inappropriate for all parties to congregate at an exchange, but I'd personally rather run N dirt-cheap loops across town from my private facility.
Hence I refer to an "imbalance" in loop/longhaul pricing; a large proliferation of exchanges could be precipitated by _either_ loop _or_ longhaul being "expensive"... and it seems expensive loop would be a more effective driver for local exchanges.
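To make that imbalance concrete, here is a back-of-the-envelope sketch in Python. Every price and the peer count below are invented for illustration; nothing comes from real quotes.

# Compare N private metro loops against one exchange port plus
# zero-mile cross-connects.  All prices are made up for illustration.

def private_loops(n_peers, loop_cost):
    """Monthly cost of running one metro loop to each peer's facility."""
    return n_peers * loop_cost

def exchange(port_cost, n_peers, xconn_cost):
    """Monthly cost of one exchange port plus a cross-connect per peer."""
    return port_cost + n_peers * xconn_cost

n = 20  # peers worth reaching locally (assumed)
print(private_loops(n, loop_cost=800))      # dirt-cheap loops:  16,000/mo
print(private_loops(n, loop_cost=25000))    # expensive loops:  500,000/mo
print(exchange(10000, n, xconn_cost=300))   # exchange option:   16,000/mo

With dirt-cheap loops the two options come out roughly even, so running your own loops across town is perfectly defensible; with expensive loops the exchange wins by more than an order of magnitude, which is why expensive loop looks like the stronger driver for local exchanges.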
Tried this. Yes, you are right; the problem is that local loops are sometimes extremely difficult to get delivered in a timely manner, and upgrading them can be an internal battle with the CFO. To solve this, we deployed the BellSouth mix. I actually came up with the idea while at Netrail, where I was having a terrible time getting private peering sessions up; six months was a ridiculous timeframe. BellSouth liked the idea and eventually deployed it.

So now you have a distributed optical exchange where you can point and click and drop circuits between any of the nodes; the nodes were located at many colos and undersea fiber drops. Theoretically this meant the exchange was "colo" neutral, and with flat-rate loops, location wasn't important. Each node also allows hairpinning, so you could peer within the room at a reduced rate (since you weren't burning any ring-side capacity).

The neat part was that customers would be able to see and provision their own capacity via a login and password. Also, with UNI 1.0, the IP layer would be able to upgrade capacity on the fly. No one has put that into production, but real-world tests have worked. A more realistic scenario was a customer upgrading from an OC3 to an OC12: the ports were the same, so it was just a setting to change on the NMS. It was a nice feature, and it meant engineers did not have to justify ESP feelings about traffic growth to a grouchy CFO.
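As a rough illustration of that point-and-click upgrade, here is a toy model in Python. It is not the BellSouth NMS or the OIF UNI 1.0 interface; it is only a sketch, under assumed rates and node names, of why an OC3-to-OC12 bump can be a single provisioning change when the physical port stays the same.

# Toy model only: the circuit class and upgrade rule are assumptions.

RATES = {"OC3": 155, "OC12": 622, "OC48": 2488}   # approximate Mbps

class OpticalCircuit:
    def __init__(self, a_node, z_node, rate="OC3"):
        self.a_node, self.z_node = a_node, z_node
        self.rate = rate

    def upgrade(self, new_rate):
        # Same port, same path; only the provisioned rate changes,
        # so there is nothing to truck-roll or re-engineer.
        if RATES[new_rate] <= RATES[self.rate]:
            raise ValueError("not an upgrade")
        self.rate = new_rate

ckt = OpticalCircuit("colo-A", "undersea-drop-B", rate="OC3")
ckt.upgrade("OC12")
print(ckt.rate)   # OC12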
DD> 2) There is a lack of a killer app requiring peering every
DD> 100 sq Km. VoIP might be the app. Seems to be gaining a
<minirant> By the time IP packets are compressed and QOSed enough to support voice, one essentially reinvents ATM or FR (with ATM seeming suspiciously like FR with fixed-length cells)... </minirant>
DD> great deal of traction. Since it's obvious traffic levels
DD> would sky rockets, and latency is a large concern, and there
DD> is a need to connect to the local voice TDM infrastructure,
Yes, although cost would trump latency. Once latency is "good enough", cost rules. Would I pay a premium to reduce latency from 50ms to 10ms for voice calls? No.
I agree. A couple of off-list emails to me seemed to miss this point. Just because we post something does not mean it's our personal preference; we are simply posting what we think is likely to happen. If there is not a competitive advantage, backed up by reduced cost or increased revenue, it would be a detriment to deploy it... more likely a CFO would shoot it down. Someone sent an example as if I were claiming that no one needs more than 640k of RAM in their computer. I never made that analogy, but there is a limit. It also seems to me that shared supercomputer time is making a comeback; IBM seems to be pushing in that direction, and there are several grid networks being set up. The world changes.

Let's face it: has anyone talked about the protocols to run these super networks, where we have something like 100-400 peering nodes domestically? Injecting those routes into our IGP? Talk about a complex design... now we need to talk about tricks to prevent overflowing our route tables internally. OK, I can hear people getting ready to post about route reflectors, etc.

The truth is, it's just plain difficult to hit critical mass at a new exchange point. No one wishes to be first, since there is little return. Perhaps these exchange operators need to prime the pump by offering tiered rates, with the first 1/3 of peers to deploy coming in at a permanent 50% discount.
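To put a number on the "complex design" worry, here is a small Python sketch comparing the iBGP session count of a full mesh against a route-reflector design. The router counts and the choice of four reflectors are assumptions for illustration only.

# Sessions needed to redistribute routes learned at many peering nodes.

def full_mesh_sessions(n_routers):
    return n_routers * (n_routers - 1) // 2

def reflector_sessions(n_routers, n_reflectors):
    # Each client peers with every reflector; reflectors mesh among themselves.
    clients = n_routers - n_reflectors
    return clients * n_reflectors + full_mesh_sessions(n_reflectors)

for nodes in (100, 400):
    print(nodes, full_mesh_sessions(nodes), reflector_sessions(nodes, 4))
    # 100 ->  4,950 full-mesh sessions vs.   390 with reflectors
    # 400 -> 79,800 full-mesh sessions vs. 1,590 with reflectors

The reflector numbers are manageable; the full-mesh numbers are not, which is exactly why the reflector replies are predictable.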
DD> local exchanging is preferred. However, many VoIP companies
DD> claim latency right now is acceptable and they are receiving
DD> no major complaints. So we are left to guess at other killer
DD> apps, video conferencing, movie industry sending movies
DD> online directly to consumers etc.
The above are "big bandwidth" applications. However, they do not inherently require exchanges... _local_ videoconferencing, yes. Local security companies monitoring cameras around town, yes. Video or newscasting, yes. Distributed content, yes. (If a traffic sink could pull 80% of its traffic from a local building where cross-connects are reasonably priced...)
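That parenthetical is easy to put numbers on. A quick Python sketch, with invented prices and traffic figures, of what moving 80% of a sink's traffic onto cheap local cross-connects does to the monthly bill:

# All figures below are assumptions for illustration.
total_mbps       = 1000
transit_per_mbps = 100   # $/Mbps/month for long-haul transit
local_per_mbps   = 5     # $/Mbps/month across a local cross-connect

all_transit  = total_mbps * transit_per_mbps
mostly_local = (0.2 * total_mbps * transit_per_mbps
                + 0.8 * total_mbps * local_per_mbps)

print(all_transit, mostly_local)   # 100000 vs. 24000.0 per month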
DD> 3) In order to get to the next level of peering exchanges...
[ snip ]
DD> Perhaps it's up to the key exchange companies to tie fabrics
DD> together allowing new (tier2 locations) to gain visibility to
DD> peers at other larger locations. This would allow peers at
DD> the larger locations to engage in peering discussions, or
DD> turn ups, and when traffic levels are justified a deployment
DD> to the second location begins. Problem with new locations
DD> are 'chicken and the egg.' Critical mass must be achieved
DD> before there is a large value proposition for peers.
Yes.
Eddy
--
Brotsman & Dreger, Inc. - EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita