Currently about 5%-15% of my traffic gets routed over the UtahREP.
please describe measurement technique.
I think there are actually a couple of different traffic measurements of interest: traffic volume, traffic elasticity, and traffic usability.

Traffic volume is fairly simple, but also mostly useless. Measure the total traffic exchanged through the local exchange point (e.g. a peak of 2 Mbps in St. Louis) and that is a definition of the traffic exchanged by the local ISPs that isn't going across transit lines to upstream providers. Harder to measure is how 'local' any of the ISPs actually are. In the St. Louis case, all of the ISPs operate in at least two states, and two of them operate in more than two states. None of them, as far as I know, have tried to break up their routing into geographic regions.

Traffic elasticity is an interesting issue. How much traffic is being exchanged which wouldn't otherwise be exchanged? In other words, is the existence of the local exchange point actually causing more traffic to be generated? This is a what-if question. If you didn't have the local exchange, would you still haul highly elastic traffic like USENET across your long-haul links? Or is it highly elastic traffic like at-home students or employees who use a local ISP modem pool for access instead of dialing directly into the remote institution?

And finally, usability: the "I know it when I see it" issue. The right combination of adequate speed, low latency, and little congestion that gives the end user a 'good' connection. Since we still have a hard time defining what is 'good', this is the hardest one to measure. I can really only measure this indirectly, such as through the number of customer complaints or through surveys of non-customers. In general, customers of ISPs connected to the local exchange point report better connections to resources on ISPs also attached to the local exchange point than to those same ISPs before the exchange point.

--
Sean Donelan, Data Research Associates, Inc, St. Louis, MO
  Affiliation given for identification not representation
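[The volume measurement described above amounts to polling the exchange-point interface counters and converting octet deltas to a rate. A minimal, hypothetical sketch follows -- the counter values and 300-second polling interval are invented for illustration, not actual St. Louis data:]

```python
# Hypothetical sketch: estimating exchange-point throughput from two
# SNMP ifInOctets samples, handling a single 32-bit counter wrap.
COUNTER32_MAX = 2**32

def bits_per_second(octets_t0, octets_t1, interval_s):
    """Average bps between two counter readings taken interval_s apart."""
    # Modular subtraction copes with the counter wrapping once
    # between polls (Counter32 rolls over at 2**32).
    delta_octets = (octets_t1 - octets_t0) % COUNTER32_MAX
    return delta_octets * 8 / interval_s

# Two readings 300 s apart on the exchange-point port:
rate = bits_per_second(10_000_000, 85_000_000, 300)
print(f"{rate / 1e6:.1f} Mbps")  # -> 2.0 Mbps
```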
Sean Donelan writes:
Currently about 5%-15% of my traffic gets routed over the UtahREP.
please describe measurement technique.
[snip]
Traffic elasticity is an interesting issue. How much traffic is being exchanged which wouldn't otherwise be exchanged? In other words, is the existence of the local exchange point actually causing more traffic to be generated? This is a what-if question. If you didn't have the local exchange, would you still haul highly elastic traffic like USENET across your long-haul links? Or is it highly elastic traffic like at-home students or employees who use a local ISP modem pool for access instead of dialing directly into the remote institution?
[snip]
And finally, usability: the "I know it when I see it" issue. The right combination of adequate speed, low latency, and little congestion that gives the end user a 'good' connection. Since we still have a hard time defining what is 'good', this is the hardest one to measure. I can really only measure this indirectly, such as through the number of customer complaints or through surveys of non-customers. In general, customers of ISPs connected to the local exchange point report better connections to resources on ISPs also attached to the local exchange point than to those same ISPs before the exchange point.
Something of interest here might be centralising services at NAPs. For example: putting a news server running Cyclone at the NAP, the NAP purchasing a news-only T1 (or whatever) to serve the box, and then participants who would like a news feed getting it directly from this box and paying extra.

Another interesting saving is via proxy neighboring over a local NAP. Rather significant traffic savings have been observed at local NAPs inside Australia by participants neighboring squids, AU being a rather heavy user of caching compared to the US.

Other ideas were tossed around, including a central DNS server/cache, gaming servers, etc., and one peering network inside AU (Ausbone) is doing this. (But they provide inter-NAP links, which isn't really applicable to this discussion.)

Just out of curiosity, since I'm not in the US: how much would a T1 cost point to point inside a city, without default IP transit? With IP transit? T3? Anything else 'common'?

Adrian
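[The squid "neighboring" Adrian describes is configured as sibling cache peering over the exchange fabric. A minimal, hypothetical squid.conf fragment -- the hostname is a placeholder, not any real NAP participant:]

```
# Hypothetical squid.conf fragment for sibling peering across a NAP.
icp_port 3130                      # answer ICP queries from siblings
cache_peer cache.isp-b.example sibling 3128 3130 proxy-only
# proxy-only: fetch sibling hits across the NAP without re-storing
# them locally, so each participant's cache stays independent.
```

Each participant adds one cache_peer line per sibling; ICP lets a cache ask its neighbors "do you have this object?" before going to the origin over transit.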
Something of interest here might be centralising services at NAPs. For example: putting a news server running Cyclone at the NAP, the NAP purchasing a news-only T1 (or whatever) to serve the box, and then participants who would like a news feed getting it directly from this box and paying extra.

Another interesting saving is via proxy neighboring over a local NAP. Rather significant traffic savings have been observed at local NAPs inside Australia by participants neighboring squids, AU being a rather heavy user of caching compared to the US.

Other ideas were tossed around, including a central DNS server/cache, gaming servers, etc., and one peering network inside AU (Ausbone) is doing this.
This was discussed several times in the UK, but politics always seemed to prevent setting this sort of thing up at LINX (as far as I could tell). Others probably know more. Certainly the idea seemed very sensible.

Manar
On Wed, 3 Jun 1998, Adrian Chadd wrote:
Something of interest here might be centralising services at NAPs. For example: putting a news server running Cyclone at the NAP, the NAP purchasing a news-only T1 (or whatever) to serve the box, and then participants who would like a news feed getting it directly from this box and paying extra.
I went one step further and talked to some of the local ISPs about pooling some $ and having a NAP-owned nnrp server. Why waste bandwidth and hardware duplicating a beast such as Usenet servers if it's not necessary? The ones I talked to agreed it seemed a good idea, but nothing ever came of it.
Just out of curiousity, since I'm not in the US, how much would a T1 cost point to point inside a city, without default IP transit? With IP transit?
In BellSouth land, I know a local point-to-point T1 circuit can be had for about $300/month ($1700 install). With CLECs getting into the business of selling circuits (especially if they have their own fiber), prices can be whatever they want to charge... usually less than the ILEC :)

------------------------------------------------------------------
 Jon Lewis <jlewis@fdt.net>  |  Spammers will be winnuked or
 Network Administrator       |  drawn and quartered...whichever
 Florida Digital Turnpike    |  is more convenient.
______http://inorganic5.fdt.net/~jlewis/pgp for PGP public key____
Jon Lewis writes:
On Wed, 3 Jun 1998, Adrian Chadd wrote:
Something of interest here might be centralising services at NAPs. For example: putting a news server running Cyclone at the NAP, the NAP purchasing a news-only T1 (or whatever) to serve the box, and then participants who would like a news feed getting it directly from this box and paying extra.
I went one step further and talked to some of the local ISPs about pooling some $ and having a NAP-owned nnrp server. Why waste bandwidth and hardware duplicating a beast such as Usenet servers if it's not necessary? The ones I talked to agreed it seemed a good idea, but nothing ever came of it.
As with anything, you need to kick people to get it to happen. The peering points here definitely took one person getting off their hide to get things happening. When it's working in a few places, everyone does it. :-)
Just out of curiosity, since I'm not in the US: how much would a T1 cost point to point inside a city, without default IP transit? With IP transit?
In BellSouth land, I know a local point-to-point T1 circuit can be had for about $300/month ($1700 install). With CLECs getting into the business of selling circuits (especially if they have their own fiber), prices can be whatever they want to charge... usually less than the ILEC :)
Nice. I'm not quoting AU prices here, again :)

Adrian
thanks for thinking and writing it down.
participants (5)

- Adrian Chadd
- Jon Lewis
- Manar Hussain
- Randy Bush
- Sean Donelan