RE: Current street prices for US Internet Transit
Thanks to all who replied with data, and yes, the pricing was all 95th percentile. Wow - the U.S. has an amazingly unhealthy and cut throat transit market in 2004. About 20 folks responded, most saying the Peering Coordinator quotes (below) sounded about right.
ISP Transit Commits and Prices
---------------------------------------------------------------------------------------------------------------------
if you commit to 1M per month you will pay about $125/Mbps
if you commit to 10M per month you will pay about $60/Mbps
if you commit to 100M per month you will pay about $45/Mbps
if you commit to 1000M per month you will pay about $30/Mbps
A couple people said these prices were TOO HIGH, particularly for the gig commit, although several multi-gig commits came in tiered; for example, $45/Mbps for 1G commit, $35 for 2G, etc. on down to $21 for 8G commit. (One Tier 2 ISP said that they sold 1G commit as low as $18/Mbps, presumably simply reselling Tier 1 BW so the difference may be negligible.) Three said that these transit prices were TOO LOW; in one case they paid about double these numbers. It was interesting that these three were a content company, a cable company and a DSL company, folks who traditionally don't sell transit. Maybe they are in a retail market for transit, while everyone else buys in the wholesale market.

Since so many said these prices are about right, I'll use them for the Peering versus Transit analysis. A couple people pointed to the 10M commit being closer to $80/Mbps, so that may be an adjustment. Given the adjustment, I thought you might be interested in how the U.S. transit prices compare against a handful of other Peering Ecosystems:

The Cost of Internet Transit in…

Commit       AU     SG     JP     HK    USA
1 Mbps      $720   $625   $490   $185   $125
10 Mbps     $410   $350   $150   $100    $80
100 Mbps    $325   $210   $110    $80    $45
1000 Mbps   $305   $115    $50    $50    $30

Round numbers anyway FWIW. Hope this helps. I feel bad for those selling transit these days - at these prices, margins must be mighty thin, and I suspect we will see some more turbulence in the industry.

Bill
To give you guys additional information, prices in France (and probably in all of Western Europe) are quite similar to those currently in the USA. One can also notice that peering costs are cheaper in Europe than in North America (monthly fees are around $1000-1300 in most European IXs).

I unfortunately share Bill's feeling about more turbulence to come. And this applies not only to the IP transit field, but rather to the whole telecom industry. Rough days we are living through..

Fred
___________________________________
Frederic NGUYEN
Engineering Manager
T-Online / Club Internet
fnguyen@t-online.fr
On Aug 16, 2004, at 1:16 PM, William B. Norton wrote:
Thanks to all who replied with data, and yes, the pricing was all 95th percentile.
Wow - the U.S. has an amazingly unhealthy and cut throat transit market in 2004.
Mind if I ask why you think it is "unhealthy"? I suppose an argument could be made that this is "below cost", but since you are not a provider and do not sell transit, I would hope the people doing so know their costs and margins better than you do. Unfortunately, I doubt any transit provider offering these prices will tell us if they are below cost. (Someone care to prove me wrong? :-) But since this is not 1999, I'm guessing at least SOME of them are profitable, and therefore the costs are not necessarily unhealthy. So perhaps you should be more careful of your characterization?
A couple people said these prices were TOO HIGH, particularly for the gig commit, although several multi-gig commits came in tiered; for example, $45/Mbps for 1G commit, $35 for 2G, etc. on down to $21 for 8G commit. (One Tier 2 ISP said that they sold 1G commit as low as $18/Mbps, presumably simply reselling Tier 1 BW so the difference may be negligible.)
Having been a "Tier 2" (several, actually :), I can tell you that it is not "simply reselling Tier 1 BW" - which you should know, providing a service to allow Tier 2s to do more than resell transit from a bigger network....
Given the adjustment, I thought you might be interested in how the U.S. transit prices compare against a handful of other Peering Ecosystems:
The Cost of Internet Transit in…

Commit       AU     SG     JP     HK    USA
1 Mbps      $720   $625   $490   $185   $125
10 Mbps     $410   $350   $150   $100    $80
100 Mbps    $325   $210   $110    $80    $45
1000 Mbps   $305   $115    $50    $50    $30
Round numbers anyway FWIW. Hope this helps. I feel bad for those selling transit these days - at these prices, margins must be mighty thin, and I suspect we will see some more turbulence in the industry.
Those are apples & oranges. You cannot compare bandwidth in countries without the same fiber infrastructure as the US ( and with government owned PTTs controlling almost all access to the US market. Not to mention other differences which just don't translate. I notice that you do not list a single EU country. Prices there are much closer to the US. Anyway, I suspect "more turbulence in the industry" for the next few millennia, no matter where prices are. :-) -- TTFN, patrick
On Mon, 16 Aug 2004, Patrick W Gilmore wrote:
Unfortunately, I doubt any transit provider offering these prices will tell us if they are below cost. (Someone care to prove me wrong? :-)
Cisco 12400 OC192 cards are $225k list price. You want to build a triangle with redundancy, i.e. 6 12400s and 12 OC192 cards, and you want to write this off in three years. You have a good discount at 50%, and you're a good provider who is sincere about redundancy and only loads your links to 50%.

This means you've forked out approx $1.4M in linecards, and you can load these at 50%, i.e. 10 gigabit/s of revenue-generating traffic (at best). That means approx $4 per megabit per month in just linecard costs to haul this between your three metro areas. No customer-facing interfaces, no interconnect to other ISPs, etc. I estimate that the router+LC cost for any GSR/juniper based network is at least $10 per megabit.

Now, you probably need to get yourself a DWDM system or some DWDM capacity to run this network over, and you probably want to hire some qualified people to run it. Selling 10gig of internet at $30/megabit gives you a total revenue of $3.6M per year.

I have a hard time seeing the business case in this at current prices. Time to go back to the drawing board and find another way of doing this?

Would you pay $10 more per megabit to buy this capacity from someone using 12000 than from someone using let's say 7600 routers? That's something people will have to start to figure out the way we're headed here.

-- Mikael Abrahamsson email: swmike@swm.pp.se
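For anyone who wants to check the arithmetic, here is a minimal sketch of the calculation above; the card price, discount, write-off period and 50% loading are all Mikael's assumptions, not vendor figures.

    # Back-of-the-envelope linecard cost per megabit, using the assumptions above.
    LIST_PRICE_PER_CARD = 225_000   # USD list price per OC192 card (figure from the post)
    DISCOUNT = 0.50                 # 50% discount off list
    NUM_CARDS = 12                  # 6 routers in a redundant triangle, 2 OC192 cards each
    WRITE_OFF_MONTHS = 36           # three-year write-off
    SELLABLE_MBPS = 10_000          # ~10 gigabit/s of revenue traffic at 50% loading (Mikael's estimate)

    capex = NUM_CARDS * LIST_PRICE_PER_CARD * (1 - DISCOUNT)   # ~$1.35M in linecards
    monthly_capex = capex / WRITE_OFF_MONTHS                   # ~$37.5k per month
    cost_per_mbps = monthly_capex / SELLABLE_MBPS              # ~$3.75 per Mbps per month
    yearly_revenue = SELLABLE_MBPS * 30 * 12                   # selling it all at $30/Mbps/month

    print(f"linecard capex:        ${capex:,.0f}")
    print(f"linecard cost/Mbps/mo: ${cost_per_mbps:.2f}")
    print(f"revenue at $30/Mbps:   ${yearly_revenue:,.0f} per year")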
I have a hard time seeing the business case in this at current prices. Time to go back to the drawing board and find another way of doing this?

Yes, stop paying retail.
--
Alex Pilosov    | DSL, Colocation, Hosting Services
President       | alex@pilosoft.com    (800) 710-7031
Pilosoft, Inc.  | http://www.pilosoft.com
On Aug 16, 2004, at 2:48 PM, Mikael Abrahamsson wrote:
On Mon, 16 Aug 2004, Patrick W Gilmore wrote:
Unfortunately, I doubt any transit provider offering these prices will tell us if they are below cost. (Someone care to prove me wrong? :-)
Cisco 12400 OC192 cards are $225k list price. You want to build a triangle with redundancy, i.e. 6 12400s and 12 OC192 cards, and you want to write this off in three years. You have a good discount at 50%, and you're a good provider who is sincere about redundancy and only loads your links to 50%.

This means you've forked out approx $1.4M in linecards, and you can load these at 50%, i.e. 10 gigabit/s of revenue-generating traffic (at best). That means approx $4 per megabit per month in just linecard costs to haul this between your three metro areas. No customer-facing interfaces, no interconnect to other ISPs, etc. I estimate that the router+LC cost for any GSR/juniper based network is at least $10 per megabit.

Now, you probably need to get yourself a DWDM system or some DWDM capacity to run this network over, and you probably want to hire some qualified people to run it. Selling 10gig of internet at $30/megabit gives you a total revenue of $3.6M per year.

I have a hard time seeing the business case in this at current prices.
Interesting analysis, but I really can't believe that 10 gigabits of connectivity costs $30/Mbps. There is not a network in the US or Europe that will not sell you a 10 gig commit for far less than $30/Mbps, and I honestly do not believe that every network is selling below cost.

Perhaps some of your assumptions are wrong. Perhaps people are making do with OC48s. Perhaps there is less redundancy or more loading. Perhaps your discount level is too low.

Who knows? Did you build an OC192 network with 6 routers and 3 links and etc.? I didn't, so maybe I'm wrong. But given the choices of A) Every single network on at least two continents is selling for less than half their cost or B) A one-page e-mail to NANOG may not reflect the complex business realities of the telecommunications world - well, I'll pick B. But that's me. :)
Would you pay $10 more per megabit to buy this capacity from someone using 12000 than from someone using let's say 7600 routers? That's something people will have to start to figure out the way we're headed here.
What do you care which routers they use? I've seen networks buy the most expensive routers and run a crappy network, and I've seen people run stable networks on the cheap. I just want my bits to flow quickly and reliably. I don't really care if you do it on Juniper, Force10, cisco, or tin-cans-and-string. Why do you care? -- TTFN, patrick
On Mon, 16 Aug 2004, Patrick W Gilmore wrote:
What do you care which routers they use? I've seen networks buy the most expensive routers and run a crappy network, and I've seen people run stable networks on the cheap. I just want my bits to flow quickly and reliably. I don't really care if you do it on Juniper, Force10, cisco, or tin-cans-and-string.
Well, with the GSR (and the like) you're paying for high MTBF, large buffers and quick re-routing when something happens, so yes, this is a quality issue and that's why you should care and make an informed decision.

Though I do agree with you that current internet connectivity is becoming more of a bulk packet forwarding service with questionable SLAs, and that's what people are willing to pay for, not the premium service. Good enough, I think it's called.

-- Mikael Abrahamsson email: swmike@swm.pp.se
Mikael Abrahamsson:
Well, with the GSR (and the like) you're paying for high MTBF, large buffers and quick re-routing when something happens, so yes, this is a quality issue and that's why you should care and make an informed decision.
For some of us "large buffers" is exactly what we don't want. Actually, for most of us, but most people haven't figured that out yet. Matthew Kaufman matthew@eeph.com
On Aug 16, 2004, at 3:15 PM, Mikael Abrahamsson wrote:
On Mon, 16 Aug 2004, Patrick W Gilmore wrote:
What do you care which routers they use? I've seen networks buy the most expensive routers and run a crappy network, and I've seen people run stable networks on the cheap. I just want my bits to flow quickly and reliably. I don't really care if you do it on Juniper, Force10, cisco, or tin-cans-and-string.
Well, with the GSR (and the like) you're paying for high MTBF, large buffers and quick re-routing when something happens, so yes, this is a quality issue and that's why you should care and make an informed decision.
I submit that the equipment in the network is far, far less important than the people running the equipment. I repeat: "I've seen networks buy the most expensive routers and run a crappy network, and I've seen people run stable networks on the cheap." I do not care what equipment the network uses, as long as my packets get to their destination reliably and quickly. This may or may not place restrictions on the equipment to be used (can you get my packets there "reliably and quickly" on tin-cans-and-string?), and it almost certainly places restrictions on who runs that equipment, but those are the provider's problem, not mine. -- TTFN, patrick
Patrick -

The other thing that I found interesting when factoring the equipment costs into the cost of Peering was that the used equipment market remains vibrant. From my conversations with folks in the Peering Coordinator Community, round numbers here, one can pick up used 7500 series router equipment now for about $9K! The configuration was with an OC-3, and FastE for peering, for about 25% of the new cost. Pretty amazing - from my research I cited one guy who built his whole test lab from used 7500's. On eBay folks can get used Juniper M20s as well, some still shrink-wrapped.

Caveat: When I walked the Cisco and Juniper contacts through the research paper ("Do ATM-based Internet Exchanges Make Sense Anymore?") they pointed to the software license as being non-transferable, and therefore requiring a new license from the vendor to be legitimate. There is also a re-certification process if you want to get the gear under service contract. There are some used equipment vendors that claim to take care of these issues for you.

So, used equipment is one way that some are deploying low cost networks, and yes, the packets get there. If their negotiating is as strong as their scrounging, they may be able to compete in today's market.

Bill
On Mon, 16 Aug 2004, William B. Norton wrote:
So, used equipment is one way that some are deploying low cost networks, and yes, the packets get there. If their negotiating is as strong as their scrounging, they may be able to compete in today's market.
Used engine 2 OC48 cards for the Cisco 12000 are up from $3500 in 2001 to approx $15-20k now. Yes, there are hardware vendors out there that are not favored by a lot of people whose gear you can have very cheaply, but the most coveted Cisco parts are quite expensive on that market nowadays.

OC3 and FE are not very interesting to most people, which is reflected in the price of a 16-port OC3 (engine 2) card for the GSR: list price $165k, can be had for approx $3k used. I know hardware resellers that have had such cards in stock for over a year without being able to sell them.

If you're a small-time ISP just starting up and you have plenty of space, cooling and electricity, the backbone routers of 5-7 years ago can be had quite cheaply, yes.

-- Mikael Abrahamsson email: swmike@swm.pp.se
On Mon, Aug 16, 2004 at 01:27:22PM -0700, William B. Norton wrote:
From my conversations with folks in the Peering Coordinator Community, round numbers here, one can pick up used 7500 series router equipment now for about $9K! The configuration was with an OC-3, and FastE for peering, for about 25% of the new cost.
In Europe, for a lot less. Configurations like: Cisco 7513 + 2xPSU-AC + 2xRSP4 + POSIP-OC3-40SM + VIP2-50 + PA-FE went multiple times for 3300-3850 EUR on Ebay Germany.
Caveat: When I walked the Cisco and Juniper contacts through the research paper ("Do ATM-based Internet Exchanges Make Sense Anymore?") they pointed to the software license as being non-transferable, and therefore requiring a new license from the vendor to be legitimate.
Yes, this is a standard tactic. Fortunately, in Germany this is void, as by law the vendor can't make a software license for specific hardware non-transferable.
There is also a re-certification process if you want to get the gear under service contract.
Yep, this is the tactic they use to catch "the rest". Sources told me that the re-cert fee makes buying used gear almost totally uninteresting. The fact that e.g. Juniper gear doesn't sell at all on Ebay is a good indicator for success of this tactic. With Cisco it's a different story... people get around service contracts by just buying enough spares on the used market... it's cheap enough. With Juniper it's more difficult as the offerings aren't as broad and cheap.
There are some used equipment vendors that claim to take care of these issues for you.
Interesting. I wonder how. :-P Best regards, Daniel
On Mon, 16 Aug 2004, William B. Norton wrote:
Caveat: When I walked the Cisco and Juniper contacts through the research paper ("Do ATM-based Internet Exchanges Make Sense Anymore?") they pointed to the software license as being non-transferable, and therefore requiring a new license from the vendor to be legitimate.
I believe this is still an open legal argument waiting to be tested. They claim it, but no one has fought it yet. In federal law there is a concept called right of resale. Note article dates when reading:

http://www.washingtontechnology.com/news/14_11/federal/758-1.html
http://articles.corporate.findlaw.com/articles/file/00353/009275

The grey area starts when they point out that IOS is software, and software has had limited success in some legal arenas with licenses that claim ownership of your soul and other crazy ideas. There are ample analogies where this would be arguable: you can't sell your car with the diagnostics software that's built in under the hood, and removing it makes the car not turn on?

The one that would take some fund-age to argue is: non-transferable, where non-transferable makes the hardware a huge rock and impedes my right of resale. You devalued what I purchased by claiming I can't give it as-is to someone else for money. Claims of "write your own IOS to run on the hardware" or pointing to fledgling open-source attempts to circumvent this problem are ludicrous at best.

Currently, I practice and encourage: "(re)license it unless you want to be the company that takes Cisco to court to question their right to say non-transferable." With all of the sympathetic facial expressions I can give that it's a silly idea. There was a good news article about Cisco and NetApp both doing this a while back, though. I can't find it now and I don't know if NetApp still does it.

Gerald (Not a lawyer, and not going to court on your behalf.)
At 5:40 PM -0400 8/16/04, Gerald wrote:
I believe this is still an open legal argument waiting to be tested. They claim it, but no one has fought it yet. In federal law there is a concept called right of resale. Note article dates when reading: http://www.washingtontechnology.com/news/14_11/federal/758-1.html http://articles.corporate.findlaw.com/articles/file/00353/009275 ...
If you buy a serious quantity of equipment (say $10M+ each year) and are particularly annoying (who, me? ;-), then you can take some amusing contractual steps to ensure that you receive the full useful value of your equipment purchases...

This means including the post-warranty maintenance plan costs in your total ownership cost comparison (to bring the useful life out to match the three- or five-year depreciation life), and it also means requiring vendors to allow clear transfer in those cases where you need to remove equipment from the network and they turn down the option to buy it back themselves...

Contractual mechanisms work very well, and all it takes is a willingness to turn away vendors in the lobby who otherwise thought you'd buy gear that loses half its resale value on receipt...

/John
Mikael Abrahamsson wrote:
Well, with the GSR (and the like) you're paying for high MTBF, large buffers and quick re-routing when something happens, so yes, this is a quality issue and that's why you should care and make an informed decision.
Do you have data to back up the above claims? Pete
Well, with the GSR (and the like) you're paying for high MTBF, large buffers and quick re-routing when something happens, so yes, this is a quality issue and that's why you should care and make an informed decision.
There's more than one way to do things. Some people manage MTBF by having more, cheaper boxes in a resilient architecture so that the failure of a box has minimal impact on the transport of packets. Some people don't have buffers in their routers because they provide a consistently low latency service (low jitter). Some people do rerouting at the SDH layer so that routers don't need to reroute. Or they put a lot of effort into managing their lower layers so that failures happen very infrequently and therefore routers don't need to reroute.

To make a truly informed decision you need hard data on network performance. Brands and models of routers are irrelevant. When I look at point-to-point latency graphs on a network and see constantly varying latency in almost a sine wave pattern, I know that the provider is doing something wrong. I may not know whether it is too-large buffers on the routers, a congested circuit, or a poorly managed underlying ATM/FR network, but the data tells the true story.

If you care about quality, don't buy unless you can see hard data on the network's performance over a reasonable time period, i.e. 6 months to a year. And not everybody needs to care about quality that much.

-Michael Dillon
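As a minimal illustration of the kind of hard data being described, here is a short sketch that boils a series of point-to-point latency samples down to the summary numbers a buyer might ask for; the sample values are invented purely for the example.

    # Summarize raw point-to-point latency samples (ms); sample list is made up.
    import statistics

    samples_ms = [31.2, 31.4, 38.9, 31.3, 45.1, 31.5, 31.4, 52.0, 31.3, 31.6]

    mean_ms = statistics.mean(samples_ms)
    jitter_ms = statistics.pstdev(samples_ms)                            # spread around the mean
    p95_ms = sorted(samples_ms)[int(0.95 * len(samples_ms)) - 1]         # crude 95th percentile

    print(f"mean {mean_ms:.1f} ms, jitter (stddev) {jitter_ms:.1f} ms, ~95th pct {p95_ms:.1f} ms")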
At 3:05 PM -0400 8/16/04, Patrick W Gilmore wrote:
Perhaps some of your assumptions are wrong. Perhaps people are making do with OC48s. Perhaps there is less redundancy or more loading. Perhaps your discount level is too low.
Who knows? Did you build an OC192 network with 6 routers and 3 links and etc.? I didn't, so maybe I'm wrong. But given the choices of A) Every single network on at least two continents is selling for less than half their cost or B) A one-page e-mail to NANOG may not reflect the complex business realities of the telecommunications world - well, I'll pick B.
Amazingly, the term "cost" actually has different contexts, and these greatly impact the final numbers. For example, the cost model used to justify a given price to a customer can be "fully-loaded/fully-allocated" or simply "incremental"... The fully-loaded one will result in the same unit cost every time, whereas the incremental one often doesn't recover the cost of past investment in the network. Of course, if that investment is several years old and has been through a bankruptcy or two, then it might not really matter (until a customer sale results in having to do some new spending for additional capacity...)

Do you take on customers at rock-bottom prices which barely cover your out-of-pocket expenses, your payroll, and interest payments, or do you let them go to your competition because no revenue is better than revenue which doesn't let you cover the network growth in 3 or 4 years? This is a question which is being discussed at quite a few ISP's today...

/John
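A minimal sketch of the distinction between the two cost models, with entirely hypothetical numbers: a fully-allocated model spreads past network investment and overhead over every megabit sold, while an incremental model counts only the out-of-pocket cost of adding one more customer.

    # Contrast a fully-allocated unit cost with an incremental one. All numbers hypothetical.
    amortized_past_capex = 500_000    # USD/month of past network investment being written off (made up)
    overhead_and_payroll = 300_000    # USD/month of fixed operating cost (made up)
    mbps_currently_sold = 40_000      # total committed Mbps on the network (made up)

    new_sale_mbps = 1_000             # a prospective 1G commit
    out_of_pocket_for_sale = 2_000    # USD/month of port, cross-connect and support it adds (made up)

    fully_allocated = (amortized_past_capex + overhead_and_payroll) / mbps_currently_sold
    incremental = out_of_pocket_for_sale / new_sale_mbps

    print(f"fully-allocated cost: ${fully_allocated:.2f}/Mbps/month")   # $20.00 here
    print(f"incremental cost:     ${incremental:.2f}/Mbps/month")       # $2.00 here
    # Any price in between covers the out-of-pocket expense of the sale,
    # but not the investment needed to keep growing the network.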
On Mon, Aug 16, 2004 at 04:56:46PM -0400, John Curran wrote: [snip]
Do you take on customers at rock-bottom prices which barely cover your out-of-pocket expenses, your payroll, and interest payments, or do you let them go to your competition because no revenue is better than revenue which doesn't let you cover the network growth in 3 or 4 years? This is a question which is being discussed at quite a few ISP's today...
Bing! Not to mention the cost of acquiring and retaining the customer. Hint: if they are shopping only by price, then lock them into a long contract, else they will go shopping for the next guy who happens to have cheaped out on some other elements of the network, or beat you on an eBay bid, or hired a fresh pack of smart kids who never had a job before. The cost of customer churn is very, very high.

There's some fishing analogy here: trying to collect the bottom-feeders often nets you nothing more than churn and muck.

-- RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
I just want my bits to flow quickly and reliably. I don't really care if you do it on Juniper, Force10, cisco, or tin-cans-and-string.
Why do you care?
Because of the value proposition inherent in certain manufacturers' products whether they work properly or not. After all, the Street wants to see that the network is poised to leap forward into the next paradigm by setting up a strategic alliance with another company which is already so poised. Silly rabbit. ;-) Now excuse me while I soak my hands in bleach for having typed this.
Silly rabbit. ;-) Now excuse me while I soak my hands in bleach for having typed this.
I'd hate to hear what you have to do if you read that out loud. :)

Just to be on-topic: I think a customer savvy enough to know the difference between a 12000 network and a 7xxx network (or what-have-you) would be able to mitigate a great many of these concerns by being multihomed correctly. Such a customer would be able to see significant cost improvements and not see much in the way of penalties -- e.g. reconvergence issues. Two pieces of equipment with low MTBFs may exceed a single piece of equipment with a high MTBF's availability overall.

On-topic, but slightly different: Other than packet buffer depths and some theoretical ACL limits, is there any reason why a 7600 network would be worse than a 12000 built one? MTBF, reconvergence and other issues should all be pretty nice and like others have mentioned packet buffers are not necessarily a good thing <tm>. Throughput-wise, a 7600 should be able to hold its own against a 12000 provided we are talking about 40Gb/s blades and SUP720s.

Deepak Jain AiNET
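The point about two low-MTBF boxes beating one high-MTBF box can be made concrete with the standard availability arithmetic (availability = MTBF / (MTBF + MTTR); two independent devices in parallel are down only when both are down at once). The MTBF/MTTR figures below are invented purely for illustration.

    # One "premium" router vs. two cheaper routers in parallel (correctly multihomed).
    # MTBF/MTTR values are illustrative, not vendor data; failures assumed independent.
    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        return mtbf_hours / (mtbf_hours + mttr_hours)

    premium = availability(mtbf_hours=200_000, mttr_hours=4)
    cheap = availability(mtbf_hours=30_000, mttr_hours=4)
    cheap_pair = 1 - (1 - cheap) ** 2      # outage only when both cheap boxes are down

    print(f"one premium box:       {premium:.6f}")
    print(f"one cheap box:         {cheap:.6f}")
    print(f"two cheap in parallel: {cheap_pair:.8f}")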
On Mon, 16 Aug 2004, Deepak Jain wrote:
Other than packet buffer depths and some theoretical ACL limits, is there any reason why a 7600 network would be worse than a 12000 built one? MTBF, reconvergence and other issues should all be pretty nice and like others have mentioned packet buffers are not necessarily a good thing <tm>. Throughput-wise, a 7600 should be able to hold its own against a 12000 provided we are talking about 40Gb/s blades and SUP720s.
I've had this discussion a few times with people working at cisco. The answers I usually get have to do with how well it handles overload, i.e. what happens when ports go full.

If you want to be able to do single TCP streams at 5 gigabit/s over your long-haul 10gig network that is already carrying a lot of traffic, you need deep packet buffers. If your fastest customer is less than 1gig and your network is 10gig, you do not.

So, if I were to provision a transatlantic line that cost me a lot of money, I would use a GSR or a juniper. If I were to provision an 80km dark fiber between two places where I already own 24 pairs, there is a wide choice in equipment.

-- Mikael Abrahamsson email: swmike@swm.pp.se
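The amount of buffering at stake here is roughly the bandwidth-delay product of the path: the classical rule of thumb for a single long-fat TCP flow is about one round-trip time's worth of buffer at the bottleneck. A rough sketch for the 5 gigabit/s transatlantic example (the 80 ms RTT is an assumed figure):

    # Rule of thumb: bottleneck buffer ~= bandwidth x round-trip time for one long-fat TCP flow.
    stream_bps = 5e9      # the single 5 gigabit/s stream from the example above
    rtt_s = 0.080         # ~80 ms transatlantic round-trip time (assumed)

    bdp_bytes = stream_bps * rtt_s / 8
    print(f"bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB of buffering at the bottleneck")
    # A linecard with only a few ms of buffer holds a small fraction of this,
    # so a single loss knocks the stream into the saw-tooth behaviour discussed below.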
I've had this discussion a few times with people working at cisco. The answers I usually get have to do with how well it handles overload, i.e. what happens when ports go full.
If you want to be able to do single TCP streams at 5 gigabit/s over your long-haul 10gig network that is already carrying a lot of traffic, you need deep packet buffers. If your fastest customer is less than 1gig and your network is 10gig, you do not.
So, if I were to provision a transatlantic line that cost me a lot of money, I would use a GSR or a juniper. If I were to provision an 80km dark fiber between two places where I already own 24 pairs, there is a wide choice in equipment.
Maybe I am wrong here, but what do the router's packet buffers have to do with a TCP stream? Buffers would add jitter and latency to the pipe.

Wouldn't a 5Gb/s TCP stream over 3000+ miles imply huge buffers on the sender and receiver side? Since when do the router's buffers make a difference for that? If your application is such that jitter and latency don't matter, buffers are great. If dropping a packet on congestion is worse than queuing it, also great. But how does that improve the stream's performance otherwise?

"What happens when ports go full" are you implying some kind of HOL problem in the 7600?

DJ
On Tue, 17 Aug 2004, Deepak Jain wrote:
Maybe I am wrong here, but what do the router's packet buffers have to do with a TCP stream? Buffers would add jitter and latency to the pipe.
Have you tried running a single TCP stream over a 10 meg ethernet with a 5 megabit/s policer on the port? Do that, figure out what happens, and explain to the rest of the class why this single TCP stream cannot use all of the 5 megabit/s itself.
Wouldn't a 5Gb/s TCP stream over 3000+ miles imply huge buffers on the sender and receiver side? Since when do the router's buffers make a difference for that? If your application is such that jitter and latency don't matter, buffers are great. If dropping a packet on congestion is worse than queuing it, also great. But how does that improve the stream's performance otherwise?
"What happens when ports go full" are you implying some kind of HOL problem in the 7600?
I'm implying that a 7600 with non-OSM doesn't have more than a few ms of buffers, making a single highspeed TCP stream go into saw-tooth performance mode via its congestion mechanism being triggered by packet loss instead of via change in RTT.

Yes, the GSR/juniper with often 500+ ms buffers are often of no use in today's world, but it's nice to have 25ms buffers anyway, so TCP has some leeway.

If you have thousands of TCP streams it doesn't matter, then small packet buffers will simply act as a high-speed policer when the port goes full and they'll be able to fill the pipe together anyway.

-- Mikael Abrahamsson email: swmike@swm.pp.se
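The saw-tooth mode described above is what caps a single stream when buffers are shallow; the well-known Mathis approximation, rate ≈ (MSS/RTT) × 1.22/√p, gives a feel for how hard the cap is. The RTT, loss rate and MSS below are assumptions chosen only to illustrate the effect.

    # Mathis et al. approximation for loss-limited TCP throughput:
    #   rate ~= (MSS / RTT) * 1.22 / sqrt(p)
    # RTT, loss probability and MSS are illustrative assumptions.
    from math import sqrt

    mss_bits = 1460 * 8
    rtt_s = 0.080          # ~80 ms long-haul RTT (assumed)
    loss_prob = 1e-6       # one loss per million packets (assumed)

    rate_bps = (mss_bits / rtt_s) * 1.22 / sqrt(loss_prob)
    print(f"loss-limited single-stream rate: {rate_bps / 1e6:.0f} Mbit/s")
    # Even at this tiny loss rate a single 80 ms stream tops out well below 5 gigabit/s,
    # which is why shallow buffers (and hence more loss) hurt the single-stream case so much.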
Have you tried running a single TCP stream over a 10 meg ethernet with a 5 megabit/s policer on the port? Do that, figure out what happens, and explain to the rest of the class why this single TCP stream cannot use all of the 5 megabit/s itself.
That's entirely a different example. If we are talking about a stream that is _exactly_ 5Gb/s or _exactly_ 5Mb/s, the policer won't be hit. In the example we are talking about below, an _approximately_ 5Gb/s stream on an _approximately_ full pipe, the performance will be significantly better than you imply. And I have customers that do it pretty regularly (2 ~500Mb/s streams per GE port - telemetry data) on their equipment with very small buffers (3550s).
I'm implying that a 7600 with non-OSM doesn't have more than a few ms of buffers, making a single highspeed TCP stream go into saw-tooth performance mode via its congestion mechanism being triggered by packet loss instead of via change in RTT.
Yes, the GSR/juniper with often 500+ ms buffers are often of no use in today's world, but it's nice to have 25ms buffers anyway, so TCP has some leeway.
Yes, if you are trying to fill your pipe for more than a few milliseconds and are schooling your GSR/Juniper to drop or prevent queuing beyond say 50ms, that might be a useful improvement. Not that anyone does that....

I suppose your example of transoceanic connectivity vs an 80km span was an example where a congestion case would exist for a long time rather than a decent upgrade plan. I guess that is a spend more on HW vs spend more on connectivity model -- or trust that C or J overengineered so the network doesn't have to be properly engineered [by assumption].
If you have thousands of TCP streams it doesn't matter, then small packet buffers will simply act as a high-speed policer when the port goes full and they'll be able to fill the pipe together anyway.
Agreed. I guess it depends where you want to spend your engineering dollars. If your interfaces are pretty small and subject to bursting to wirespeed often, and the bursts somehow make it into your core [and are not dropped by your aggregation gear with its smaller buffers], then you can queue them. If you run a network where your bursts disappear by the time they hit your core [either because of statistical aggregation or simply being dropped by the smaller interface buffers along the way], or you have ample capacity, or you have engineered properly sized core trunks, it's not an issue. I hope most fall into this category, but I could be wrong.

DJ
On Wed, 18 Aug 2004, Deepak Jain wrote:
the example we are talking about below, an _approximately_ 5Gb/s stream on an _approximately_ full pipe, the performance will be significantly better than you imply. And I have customers that do it pretty regularly (2 ~500Mb/s streams per GE port - telemetry data) on their equipment with very small buffers (3550s).
Well, my experience is that with 500 meg of background traffic on a gig link and then a single highspeed TCP stream on top of that, it's basically the same thing as putting a 500 meg policer on it. And with a 500 meg policer on a gig link, trying to go as fast as you can with a gig-connected machine, you won't be able to use the remaining 500 meg; you'll get 200-300 meg.
I suppose your example of transoceanic connectivity vs an 80km span was an example where a congestion case would exist for a long time rather than a decent upgrade plan. I guess that is a spend more on HW vs spend more on connectivity model -- or trust that C or J overengineered so the network doesn't have to be properly engineered [by assumption].
Yes, that is exactly what I mean. If connectivity is expensive, spend more on what you connect to that connectivity; if connectivity is cheap, buy two and buy cheaper things to connect to it.

-- Mikael Abrahamsson email: swmike@swm.pp.se
Deepak Jain wrote:
Have you tried running a single TCP stream over a 10 meg ethernet with a 5 megabit/s policer on the port? Do that, figure out what happens, and explain to the rest of the class why this single TCP stream cannot use all of the 5 megabit/s itself.
That's entirely a different example. If we are talking about a stream that is _exactly_ 5Gb/s or _exactly_ 5Mb/s, the policer won't be hit. In the example we are talking about below, an _approximately_ 5Gb/s stream on an _approximately_ full pipe, the performance will be significantly better than you imply. And I have customers that do it pretty regularly (2 ~500Mb/s streams per GE port - telemetry data) on their equipment with very small buffers (3550s).
The required buffer size depends on the RTT of the TCP stream going over it. If you have the 3550 with small buffers and 5ms TCP RTT then everything is well. If you have the 3550 with small buffers and 200ms TCP RTT you will run into trouble.

-- Andre
I'm implying that a 7600 with non-OSM doesn't have more than a few ms of buffers, making a single highspeed TCP stream go into saw-tooth performance mode via its congestion mechanism being triggered by packet loss instead of via change in RTT.
Yes, the GSR/juniper with often 500+ ms buffers are often of no use in today's world, but it's nice to have 25ms buffers anyway, so TCP has some leeway.
I hate following up on my own message, so I'm following up on this. A point just raised privately was that *IF* you need the buffers you could just OSM the ports under stress [say the ones dedicated to the 1 or 2 expensive WAN links you may want to run near their top]. Considering a 4-port GE-WAN OSM is $800 on eBay, I don't see how it's even a pricing consideration.

DJ
Mikael Abrahamsson wrote:
Would you pay $10 more per megabit to buy this capacity from someone using 12000 than from someone using let's say 7600 routers? That's something people will have to start to figure out the way we're headed here.
You should take more care in picking your examples... Both of the mentioned subjects are fairly straightforward to configure in a way that makes them flame out.

However, on the subject at hand: to a knowledgeable buyer there is value in the operational specifications the network commits to (like latency, jitter, packet loss, reconvergence time, etc.). To an occasional buyer ("end user") the value of a brand name is more significant.

Pete
On Mon, 16 Aug 2004, Mikael Abrahamsson wrote:
On Mon, 16 Aug 2004, Patrick W Gilmore wrote:
Unfortunately, I doubt any transit provider offering these prices will tell us if they are below cost. (Someone care to prove me wrong? :-)
Cisco 12400 OC192 cards are $225k listprice.
of course, if you wait for someone to go bankrupt, you can then buy the entire company and network for about that price :)

Steve
Stephen J. Wilcox wrote:
of course, if you wait for someone to go bankrupt, you can then buy the entire company and network for about that price :)
I did hear about an isp called optigate.net (coarsegold, CA) that went bankrupt quite recently ... [at least, an ex optigate customer emailing out of a dynamic dsl ip who ran into our filters told me optigate had shut down suddenly ...] You might not want their IP space though, if you propose to put mailservers on it.
Those are apples & oranges. You cannot compare bandwidth in countries without the same fiber infrastructure as the US ( and with government owned PTTs controlling almost all access to the US market.
Bang on! U.S. prices reflect a mostly complete disintermediation of the telecom industry in that the provider who sells you transit probably also owns the fiber in the ground and is able to specify the entire suite of technology and operations between the glass strands and IP transit. So rather than reflecting slim margins, perhaps the prices reflect sensible cost structures. Let's not forget that it was only about ten years ago that telcos were able to get away with selling telecom services in which 75% or more of their cost base was in billing systems and overhead.
Anyway, I suspect "more turbulence in the industry" for the next few millennia, no matter where prices are. :-)
It will take at least a generation before the people who once experienced grossly overinflated margins are all gone and people stop trying to recreate the golden age of telecom again. Think highways and gas pipelines and electrical grids. IP transit networks are a utility and they should be cheap, ubiquitous and reliable. Anyone who wants to get rich in this business should be looking at value added services and not transit. It won't be long before IP transit is a real commodity and everyone will have the same cost structure and prices across the board. There will be margin for well-run IP transit utilities but no more boom times. --Michael Dillon
William B. Norton wrote:
The Cost of Internet Transit in…

Commit       AU     SG     JP     HK    USA
1 Mbps      $720   $625   $490   $185   $125
10 Mbps     $410   $350   $150   $100    $80
100 Mbps    $325   $210   $110    $80    $45
1000 Mbps   $305   $115    $50    $50    $30
As mentioned before, Europe is about the same as the US. With these US street prices in mind, how can anyone justify paying the prices of some commercial exchanges (the last offer I got from PAIX Palo Alto was USD 5500 per month for a FE port about a year ago, and Equinix Ashburn was not much cheaper)? Please note: I'm not talking of the technical advantages of peering.

Fredy Künzler
Init Seven AG, AS13030
With these US street prices in mind, how can anyone justify paying the prices of some commercial exchanges (the last offer I got from PAIX Palo Alto was USD 5500 per month for a FE port about a year ago, and Equinix Ashburn was not much cheaper)? Please note: I'm not talking of the technical advantages of peering.
Or perhaps the better question is: how can one justify the cost of _public_ peering when fiber cross-connects are $200-$300/month each? That is at least 20-40 fiber direct connects [twice that if you & your peers split the cost of cross-connects].

If you only need 1Gb/s of cross-connect capacity you can take a 3550 switch [or use it as a router] and terminate all of the peering sessions on it, or via VLAN-trunking directly on your real router [C/J/what have you]. Your hardware cost is marginally increased and your capacity is MANY times larger.

I don't think there are too many exchanges anymore that have 80+ active peers. If you do participate in such an exchange, have 80 peers on it, and don't exceed a single port's speed, shame on you. :)

DJ
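A quick sketch of the break-even arithmetic above; the port and cross-connect prices are the ones quoted in the thread, the rest is just division.

    # How many private cross-connects does one exchange FE port fee buy?
    exchange_port_per_month = 5500                    # USD, the FE port quote mentioned above
    xc_low, xc_high = 200, 300                        # USD/month per fiber cross-connect

    full = (exchange_port_per_month / xc_high, exchange_port_per_month / xc_low)
    split = (full[0] * 2, full[1] * 2)                # if you and each peer split the cost

    print(f"paying full price:  {full[0]:.0f}-{full[1]:.0f} cross-connects")    # ~18-28
    print(f"splitting the cost: {split[0]:.0f}-{split[1]:.0f} cross-connects")  # ~37-55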
* deepak@ai.net (Deepak Jain) [Wed 18 Aug 2004, 18:52 CEST]:
Or perhaps the better question is: how can one justify the cost of _public_ peering when fiber cross-connects are $200-$300/month each?
Perhaps not at the site previously mentioned. I believe fiber crossconnects are cheaper than that at the various AMS-IX housing sites but people still choose to connect to the exchange switch. Bushes of private interconnects tend to quickly become unmanageable (and no, not just those of "throw wire over wall" discussed here some months ago - that's not allowed at any AMS-IX housing site).
I don't think there are too many exchanges anymore that have 80+ active peers. If you do participate in such an exchange, have 80 peers on it, and don't exceed a single port's speed, shame on you. :)
AMS-IX has almost 200 connected parties. Luckily hardly anybody is trying to suck more traffic through their port than it can physically handle. Not everybody has a gigabit per second worth of traffic. Some even make do with a 10baseT connection (full duplex of course :). Apparently still a worthwhile proposition in a world of falling transit prices. -- Niels. -- Today's subliminal thought is:
On Wed, 18 Aug 2004, Fredy Kuenzler wrote:
With these US street prices in mind, how can anyone justify paying the prices of some commercial exchanges (the last offer I got from PAIX Palo Alto was USD 5500 per month for a FE port about a year ago, and Equinix Ashburn was not much cheaper)? Please note: I'm not talking of the technical advantages of peering.
You can't; perhaps they'll realise that before they become deprecated.

Steve
participants (19)

- alex@pilosoft.com
- Andre Oppermann
- Daniel Roesen
- Deepak Jain
- Frederic NGUYEN
- Fredy Kuenzler
- Gerald
- Joe Provo
- John Curran
- Matthew Kaufman
- Michael.Dillon@radianz.com
- Mikael Abrahamsson
- Niels Bakker
- Patrick W Gilmore
- Petri Helenius
- Stephen J. Wilcox
- Suresh Ramasubramanian
- Vincent J. Bono
- William B. Norton