Re: Traffic Engineering [was Chanukah [was Re: Hezbollah]]
Kent,

As a former network design guy who's done traffic engineering and design (and redesign) on many networks (Internet and otherwise), I disagree that traffic engineering doesn't work for the Internet. I've seen many people go with "throw bandwidth at the problem" as a cure-all. While it tends to work, it tends to be the most expensive way of solving the problem.

Doing traffic engineering right is hard. The telcos have it down pat for their voice networks, and telco-based ISPs have often applied this design expertise to their ISP networks. Having a person do traffic engineering can save the ISP big bucks.

The traffic engineering techniques I'm talking about can't handle wildly dynamic situations. For example, a news event like Princess Di's death greatly increases traffic to/from England, which plays temporary havoc with forecasted traffic projections. However, outside of these anomalies, traffic projections work pretty well.

I've outlined a basic technique below which works for many types of networks, and has some ISP-specific steps. The key to this analysis is that it takes into account the underlying traffic flows and then determines the appropriate physical backbone topology, or the changes to be made to an existing topology. This is directly in contrast to the "throw bandwidth at the problem" case, which patches a backbone topology that might be sub-optimal in the first place. Here's the overall outline:

1. Divide your network into a small number of geographic areas (between five and ten). Each geographic area you choose probably has a large city that serves as a major traffic source for that area. These cities are usually the natural cities for backbone connectivity. Create an NxN matrix, where N is the total number of areas in your network. Each cell in the matrix will represent the total traffic demand between each source/destination geographic area. There are several factors which affect this matrix, each of which is discussed below:

   1. The locality of traffic.
   2. The typical utilization of customers.
   3. The entry/exit points of traffic from your network.

2. Identify what % of traffic, if any, has regional locality. For pure Internet traffic, the probability that the source and destination of traffic are within the same metropolitan area tends to be low (10% or lower for metros within the US). However, there are exceptions. Telecommuting applications tend to have very high locality. People close to work dial into work through an ISP, so both the source and destination of traffic tend to stay local (70% or higher). Places like the Bay Area in California also tend to have higher traffic locality. This is because the Bay Area has lots of Internet users (which tend to be traffic sinks), and lots of web sites (which tend to be traffic sources). ISPs outside of the US tend to have a higher percentage of traffic staying within the country, especially in non-English-speaking countries.

3. Measure/estimate the typical utilization of customers. Utilization needs to be measured/estimated in both send and receive directions. Dial-up users typically receive almost seven times as much as they send. Corporate customers not doing telecommuting applications tend to receive about four times as much as they send (less because corporations have web sites that others access). Web server farms have the opposite characteristics of dial-up users. Percentage utilization tends to increase with bandwidth. In the U.S., a T1 customer connection typically has a peak receive utilization of 20% or less. However, a DS3 customer can easily have a receive utilization of 50% or more. The simple explanation is that someone paying big bucks for a DS3 wants to make sure it is justified. So, take the total number of users in each area, the connection speeds and customer types, multiply by the appropriate factors, and you get the total demand you are trying to serve out of each area. Take this traffic demand, and multiply it by the non-local fraction. This represents the total traffic that you need to get either in/out of the network, or in/out of this particular part of your network. (A rough sketch of this arithmetic appears right after this step.)
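To make steps 1-3 concrete, here is a rough sketch in Python. Every area name, customer count, and utilization factor in it is a made-up illustration of the factors described above, not a measurement:

    # Sketch of steps 1-3: estimate per-area peak demand, then take a
    # first cut at the NxN demand matrix. All inputs are hypothetical.

    # Simultaneous peak connections per area: (dialup, T1, DS3)
    customers = {
        "NYC":     (20000, 300, 10),
        "DC":      (12000, 250,  8),
        "Chicago": (10000, 150,  4),
        "Dallas":  ( 8000, 120,  3),
        "SF":      (15000, 400, 20),
    }

    # Peak receive demand per connection, in Mbps (illustrative):
    # a modem at ~28 kbps, a T1 at ~20% of 1.544, a DS3 at ~50% of 45.
    DEMAND = {"dialup": 0.028, "t1": 0.20 * 1.544, "ds3": 0.50 * 45.0}

    LOCALITY = 0.10  # fraction of traffic staying within the metro

    def area_demand(counts):
        """Total peak receive demand (Mbps) for one area."""
        dialup, t1, ds3 = counts
        return (dialup * DEMAND["dialup"]
                + t1 * DEMAND["t1"] + ds3 * DEMAND["ds3"])

    total = {a: area_demand(c) for a, c in customers.items()}
    grand = sum(total.values())

    # First cut at the matrix: keep LOCALITY at home, and spread the
    # rest across the other areas in proportion to their share of
    # overall demand (step 4 replaces this spread with measured data).
    matrix = {}
    for src in customers:
        for dst in customers:
            if src == dst:
                matrix[(src, dst)] = total[src] * LOCALITY
            else:
                share = total[dst] / (grand - total[src])
                matrix[(src, dst)] = total[src] * (1 - LOCALITY) * share

    for (src, dst), mbps in sorted(matrix.items()):
        print(f"{src:>8} -> {dst:<8} {mbps:8.1f} Mbps")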
4. Determine the entry/exit points for traffic within your network, and their effect upon the traffic matrix. How do you set up your routing policies? Many ISPs use nearest exit. If the nearest exit is in the same geographic area, the traffic sent by your customers does not affect any other part of the overall traffic matrix. If the nearest exit is not within the same geographic area, determine the area where this traffic will be sent. Enter this value in the appropriate source/destination cell of the traffic matrix. It gets harder when peering with many other ISPs, some of whom you connect to in the same area, and others in remote areas. In this case, determine what percentage of the traffic exits in each particular region, and enter those values in the matrix as well. The main traffic sources into your network (excluding your customers) are your peering points (both public and private). The amount of traffic from each peering point is measurable. You can generally estimate that this traffic is to be distributed proportional to the overall traffic demand in each geographic area. This is a significant amount of matrix math, but the overall concept is simple: determine the overall flow between one part of your network and another.

5. With me so far? Good, now it's time to design your backbone to handle your demands. You can use dedicated lines or layer two services such as Frame Relay or ATM. The simplicity of using Frame Relay or ATM is that the circuits you need between each geographic area have been defined by your traffic matrix. This is part of the appeal of using public L2 services for a backbone. Designing your own backbone is a bit harder. The actual topology tends to be straightforward--you need to connect up the major cities in each of the geographic areas. For five areas, a simple ring suffices. For up to 10 areas, this tends to be rings bisected once or twice. The real work in designing your own backbone is in satisfying the traffic demands going across your network. Remember that geographic areas in the center of your network also have to carry the transit traffic passing through them. This imposes a heavier burden on the center of the network than the traffic matrix would indicate. You also have to worry about resiliency, having sufficient bandwidth when the backhoes go fiber hunting, etc. (The sketch below shows the transit effect on a toy ring.)
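To illustrate the transit-burden point in step 5, here is a toy Python sketch that routes a demand matrix over a five-node ring via the shorter arc and sums the load each backbone link must carry. The areas and demand figures are invented; the point is that links in the middle of popular paths come out hotter than any single cell of the matrix suggests:

    # Toy example for step 5: route an NxN demand matrix over a ring
    # and total up per-link load. Areas and demands are hypothetical.

    ring = ["NYC", "DC", "Chicago", "Dallas", "SF"]  # ring order
    n = len(ring)

    # demand[(src, dst)] in Mbps -- in practice, from the traffic matrix.
    demand = {("NYC", "SF"): 40.0, ("NYC", "Chicago"): 25.0,
              ("DC", "SF"): 30.0, ("Chicago", "Dallas"): 10.0,
              ("SF", "NYC"): 90.0, ("SF", "DC"): 55.0}

    def shortest_arc(i, j):
        """Links (as index pairs) on the shorter way around the ring."""
        cw = (j - i) % n                       # clockwise hop count
        step = 1 if cw <= n - cw else -1       # go the shorter direction
        links, k = [], i
        for _ in range(min(cw, n - cw)):
            nxt = (k + step) % n
            links.append(frozenset((k, nxt)))  # undirected link id
            k = nxt
        return links

    load = {}
    for (src, dst), mbps in demand.items():
        for link in shortest_arc(ring.index(src), ring.index(dst)):
            load[link] = load.get(link, 0.0) + mbps

    for link, mbps in sorted(load.items(), key=lambda kv: -kv[1]):
        a, b = sorted(link)
        print(f"{ring[a]:>8} <-> {ring[b]:<8} {mbps:7.1f} Mbps")

Running it shows the NYC-SF link carrying the DC-SF demand as transit on top of its own traffic, which is exactly the center-of-network burden described above.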
6. Design the network within each geographic area. The steps for designing the network within each geographic area tend to be similar to those for designing the overall network. Breaking the overall design process into a regional network and a backbone network makes the problem more tractable.

7. Measure data from a real network. This is really important. You've made lots of assumptions. Regularly check the overall traffic to see if it matches the assumptions. Refine the traffic matrix to see if it still represents reality. Create trendlines which show the overall traffic changes to/from each area, and project these trendlines into the future. You will tend to have pretty good certainty about 4 months into the future, with the value of the information decaying after that. Use this data to determine where to add additional peering points, and estimate what impact this would have on the traffic matrix.

8. Factor the measured and projected data into the next network backbone design. This next backbone design gives you the optimum backbone given the underlying flows in your network. See what changes you need to make to your backbone to get to this new optimum backbone, and order the circuits.

Phew! Like I said earlier, it is hard to do right, and I've left out quite a few details in the above outline. But having been there, done that (quite a few times), I can say it really works. And it saves ISPs money!

Question for NANOG members: how important is traffic engineering, given that it is fairly hard to do properly and you folks have enough other things to think about?

Prabhu Kavi
IP Business Marketing Manager
Ascend Communications
prabhu.kavi@ascend.com

______________________________ Reply Separator _________________________________
Subject: Chanukah [was Re: Hezbollah]
Author: "Kent W. England" <kwe@geo.net> at smtplink
Date: 9/16/97 2:09 PM

At 05:03 PM 14-09-97 -0400, Dorian R. Kim wrote:
... One of the things that needs to be engineered into building and maintaining national/international backbones is traffic accounting at an arbitrary granularity that paves the way for better traffic engineering and bandwidth projections. There are already ample tools to build a per-prefix matrix of traffic right now. Tying this in with good sales projections will alleviate much of the last-minute fire fighting.
This will most likely never be 100% accurate and precise, but there is no reason why we can't get a better handle on bandwidth forecasts (say, to the 95th percentile).
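The per-prefix matrix mentioned above is mostly bookkeeping once flow data can be exported from the routers. A minimal sketch in Python, where the record shape (source prefix, destination prefix, byte count) and the sample prefixes are assumptions for illustration, not any particular collector's output format:

    # Fold exported flow records into a per-prefix traffic matrix.
    from collections import defaultdict

    def build_matrix(flows):
        """flows: iterable of (src_prefix, dst_prefix, byte_count)."""
        matrix = defaultdict(int)
        for src, dst, nbytes in flows:
            matrix[(src, dst)] += nbytes
        return matrix

    sample = [
        ("192.0.2.0/24",   "198.51.100.0/24", 1200000),
        ("192.0.2.0/24",   "203.0.113.0/24",   300000),
        ("203.0.113.0/24", "192.0.2.0/24",    4800000),
    ]

    for (src, dst), nbytes in sorted(build_matrix(sample).items()):
        print(f"{src:>16} -> {dst:<16} {nbytes:>9} bytes")

Aggregating the same records by geographic area instead of prefix yields the kind of NxN matrix discussed elsewhere in this thread.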
Dorian;

I don't want to throw cold water on the value of planning and foresight, but in terms of predicting traffic patterns it has never worked on the Internet. It sounds good and that was the argument that all the mainframe networkers made to us early Internet networkers -- Why can't you tell me upfront what your bandwidth requirements are going to be? Don't you know exactly how many terminals you have and where they are and what application keystrokes are going to be pressed at any given time? How else can you guarantee response time in your network? This Internet stuff is stupid. It'll never work.

Somehow with the way that HTTP/HTML caught fire and Internet-CB (aka VocalTec and CUSeeMe) took off, I would be loath to think I could project my backbone needs with any reliability based on *historical* projections.
Furthermore, with the deployment of WDM and Internet core devices moving closer to the transmission gear, if you have access to fiber, getting more bandwidth may become as straightforward as using an additional wavelength on the ADM that your router's plugged into.
-dorian
This I like a lot better as a design technique. Throwing more bandwidth at the problem almost always works (unless the transport protocol is broken). Like Peter Kline said, "Turn up the speed dial upon onset of congestion." Simple. Effective.

Then again, creating a data architecture for the web (a problem that has been recognized, but not addressed, in the last five years) would eliminate much of the backbone bandwidth demand. What would happen if -- presto -- a data architecture for the web showed up one day? A lot of backbone bandwidth would become surplus and a lot more edge bandwidth would be needed ASAP. What does that do to historical projections?

--Kent
At 09:59 AM 9/17/97 -0500, pkavi@pcmail.casc.com wrote:
As a former network design guy who's done traffic engineering and design (and redesign) on many networks (Internet and otherwise), I disagree that traffic engineering doesn't work for the Internet.
Prabhu;

Thanks for the treatise on network engineering. I agree that network engineering is useful and that there should be traffic engineers. However, the engineering managers must not lose sight of the forest for the trees, and must heavily discount the traffic engineering studies when faced with out-of-the-box data. Here are some examples:
2. Identify what % of traffic, if any, has regional locality. For pure Internet traffic, the probability that the source and destination of traffic are within the same metropolitan area tends to be low (10% or lower for metros within the US).
This is true only so long as the density of the Internet is low: few of your neighbors will be on the Internet, and therefore local issues are irrelevant. However, at some point, the density of the Internet gets to a critical point, say 30% to 40%. At that point a pizza parlor owner says to himself "two out of every five of my customers are on the Internet. Perhaps I need a web page." And, suddenly, pizza on the Net makes a lot of sense and the traffic patterns shift. As the density grows to 90%, local traffic becomes dominant over distant traffic.

Another example is distributed web hosting. When distributed web hosting takes off, your backbone will be heavily discounted and your peripheral interconnect bandwidth will be woefully short. Web traffic will zoom as performance dramatically improves, but your backbone bandwidth will drop. That breaks your traffic model.

So, by all means, do your traffic studies, but be prepared to throw them out or re-write them when the environment changes. Then throw bandwidth where it will do the most good. :-)

--Kent
On Wed, Sep 17, 1997 at 10:19:04AM -0700, Kent W. England wrote:
Here are some examples:
2. Identify what % of traffic, if any, has regional locality. For pure Internet traffic, the probability that the source and destination of traffic are within the same metropolitan area tends to be low (10% or lower for metros within the US).
This is true only so long as the density of the Internet is low: few of your neighbors will be on the Internet, and therefore local issues are irrelevant. However, at some point, the density of the Internet gets to a critical point, say 30% to 40%. At that point a pizza parlor owner says to himself "two out of every five of my customers are on the Internet. Perhaps I need a web page." And, suddenly, pizza on the Net makes a lot of sense and the traffic patterns shift. As the density grows to 90%, local traffic becomes dominant over distant traffic.
This is happening already, and it pushes one of my buttons _really_ hard. The geographic locality of reference of the current Internet is pathetic. When my telnet session from St Pete to Tampa, Florida, goes via MAE-East, or worse, MAE-_West_, there's something _seriously_ wrong. It's my analysis that the problem is that small (T-1 and below) customers should be buying their connectivity from (and there should _be_, for them to buy it from) a local exchange-type provider. IE: buy a T-3 up hill to, oh, say, the top 6 or 10 backbones, and then sell transit to local ISPs and IAPs in your geographic area. This doesn't seem to be technically difficult, and it seems like it ought to be pretty easy to sell... sure, you're one hop further from the backbone... but you're now two hops away from _10_. Sane? I should be looking for capital? :-) Are there any major potholes in this theory that I'm missing? Cheers, -- jra -- Jay R. Ashworth High Technology Systems Consulting Ashworth Designer Linux: Where Do You Want To Fly Today? & Associates ka1fjx/4 Crack. It does a body good. +1 813 790 7592 jra@baylink.com http://rc5.distributed.net NIC: jra3
It's my analysis that the problem is that small (T-1 and below) customers should be buying their connectivity from (and there should _be_, for them to buy it from) a local exchange-type provider. IE: buy a T-3 up hill to, oh, say, the top 6 or 10 backbones, and then sell transit to local ISPs and IAPs in your geographic area.
This doesn't seem to be technically difficult, and it seems like it ought to be pretty easy to sell... sure, you're one hop further from the backbone... but you're now two hops away from _10_.
Are there any major potholes in this theory that I'm missing?
A big problem here is that ISPs differentiate themselves based on who they buy bandwidth from. An ISP that has a T1 to CRL, say, benefits greatly when a larger competitor gets a T1 to CRL as well, but the larger competitor doesn't benefit if they already have multiple T1s and T3s to the larger backbones themselves. A better idea is a miniature NAP for the ISPs in each large metropolitan area for exchanging local traffic. Josh Beck - CONNECTnet Network Operations Center - jbeck@connectnet.com ----------------------------------------------------------------------- CONNECTnet INS, Inc. Phone: (619)450-0254 Fax: (619)450-3216 6370 Lusk Blvd., Suite F-208 San Diego, CA 92121 -----------------------------------------------------------------------
On Wed, Sep 17, 1997 at 12:23:38PM -0700, Josh Beck wrote:
Are there any major potholes in this theory that I'm missing?
A big problem here is that ISPs differentiate themselves based on who they buy bandwidth from. An ISP that has a T1 to CRL, say, benefits greatly when a larger competitor gets a T1 to CRL as well, but the larger competitor doesn't benefit if they already have multiple T1s and T3s to the larger backbones themselves. A better idea is a miniature NAP for the ISPs in each large metropolitan area for exchanging local traffic.
Forgive me, I guess I didn't phrase it well enough, as that's what I was trying to suggest. Although, I suppose, the exchanging and the access to the backbones are two different things; the latter may well be more salable than the former, even though the former is better for the net at large. Cheers, -- jra -- Jay R. Ashworth jra@baylink.com Member of the Technical Staff Unsolicited Commercial Emailers Sued The Suncoast Freenet "People propose, science studies, technology Tampa Bay, Florida conforms." -- Dr. Don Norman +1 813 790 7592
On Wed, 17 Sep 1997, Jay R. Ashworth wrote:
It's my analysis that the problem is that small (T-1 and below) customers should be buying their connectivity from (and there should _be_, for them to buy it from) a local exchange-type provider. IE: buy a T-3 up hill to, oh, say, the top 6 or 10 backbones, and then sell transit to local ISPs and IAPs in your geographic area.
This doesn't seem to be technically difficult, and it seems like it ought to be pretty easy to sell... sure, you're one hop further from the backbone... but you're now two hops away from _10_.
Sane? I should be looking for capital? :-)
Are there any major potholes in this theory that I'm missing?
We do this already, but the main problem we run into is that customers are hesitant to buy services from a direct competitor. It doesn't matter that we are cheaper or have good connectivity; they don't want to have to sell against who they are buying services from. The fact that MCI/Sprint/etc. all sell local access doesn't seem to bother them as much. We have been successful doing this, but it is definitely a hard sell. And this is with the advantage that we are offering ATM connectivity to a local exchange service at cheaper prices than can be had anywhere around. John Tamplin Traveller Information Services jat@Traveller.COM 2104 West Ferry Way 205/883-4233x7007 Huntsville, AL 35801
On Wed, Sep 17, 1997 at 02:34:40PM -0500, John A. Tamplin wrote:
This doesn't seem to be technically difficult, and it seems like it ought to be pretty easy to sell... sure, you're one hop further from the backbone... but you're now two hops away from _10_.
Sane? I should be looking for capital? :-)
Are there any major potholes in this theory that I'm missing?
We do this already, but the main problem we run into is that customers are hesitant to buy services from a direct competitor. It doesn't matter that we are cheaper or have good connectivity; they don't want to have to sell against who they are buying services from. The fact that MCI/Sprint/etc. all sell local access doesn't seem to bother them as much.
Ok... so only sell to resellers? Does that cut too far into the profit margin?
We have been successful doing this, but it is definitely a hard sell. And this is with the advantage that we are offering ATM connectivity to a local exchange service at cheaper prices than can be had anywhere around.
Hmmm.... Thanks for the input. Cheers, -- jra -- Jay R. Ashworth jra@baylink.com Member of the Technical Staff Unsolicited Commercial Emailers Sued The Suncoast Freenet "People propose, science studies, technology Tampa Bay, Florida conforms." -- Dr. Don Norman +1 813 790 7592
On Wed, 17 Sep 1997, Jay R. Ashworth wrote:
We do this already, but the main problem we run into is that customers are hesitant to buy services from a direct competitor. It doesn't matter that we are cheaper or have good connectivity; they don't want to have to sell against who they are buying services from. The fact that MCI/Sprint/etc. all sell local access doesn't seem to bother them as much.
Ok... so only sell to resellers? Does that cut too far into the profit margin?
Well, that would mean dumping a known, profitable set of customers for a much smaller set of potential customers. Perhaps a startup could choose to target those customers, but we can't afford to throw away what we have built up. John Tamplin Traveller Information Services jat@Traveller.COM 2104 West Ferry Way 205/883-4233x7007 Huntsville, AL 35801
2. Identify what % of traffic, if any, has regional locality.
Are there any major potholes in this theory that I'm missing?
Locality of reference refers to the geographical separation between source and destination addresses on IP packets. If you think of the source/dest pairs as representing a virtual connection, and these connections as representing a logical topology of the Internet, then the message that started this thread was explaining how to engineer your physical topology to best match the logical topology.

Of course, because the Internet does not exist, this physical topology is likely to always be sub-optimal in an engineering sense, because the business decisions that end users make to establish physical links to the Internet are made for economic and emotional reasons more than for engineering reasons. The only area in which engineering can be used to solve your hot button issue is that if providers in your local area build a good local exchange point then you may be able to persuade more local end users to connect to local providers in order to attain better local connectivity.

******************************************************** Michael Dillon voice: +1-650-482-2840 Senior Systems Architect fax: +1-650-482-2844 PRIORI NETWORKS, INC. http://www.priori.net "The People You Know. The People You Trust." ********************************************************
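Putting a number on that locality of reference takes nothing more than a per-prefix traffic matrix plus a mapping from prefix to metro area. A small Python sketch; the prefix-to-metro table and the byte counts are invented for illustration:

    # Estimate what fraction of traffic is geographically local, given
    # flow bytes keyed by (src_prefix, dst_prefix) and a prefix-to-metro
    # mapping. All table contents below are hypothetical.

    metro = {
        "192.0.2.0/24":    "Tampa",
        "198.51.100.0/24": "Tampa",
        "203.0.113.0/24":  "Washington DC",
    }

    flows = {
        ("192.0.2.0/24", "198.51.100.0/24"): 1200000,  # Tampa <-> Tampa
        ("192.0.2.0/24", "203.0.113.0/24"):  9000000,  # Tampa <-> DC
    }

    local = sum(nbytes for (src, dst), nbytes in flows.items()
                if src in metro and dst in metro
                and metro[src] == metro[dst])
    total = sum(flows.values())
    print(f"local traffic: {100.0 * local / total:.1f}%")  # ~11.8% here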
At 01:23 PM 9/17/97 -0700, Michael Dillon wrote:
The only area in which engineering can be used to solve your hot button issue is that if providers in your local area build a good local exchange point then you may be able to persuade more local end users to connect to local providers in order to attain better local connectivity.
Traffic exchange is engineering tactical, not business strategic, for national backbones. For regional providers, I think it may become strategic, and could be a flaw in the business plans of the backbone providers. If you take web traffic and local traffic out of the national backbone business plans, they have a lot less traffic than they expected.

You build the local exchanges as the traffic builds. It is not economic for a group of national backbones to build local exchanges when it is cheaper to backhaul a couple megs of traffic a few hundred or thousand miles and save on the number of exchange points. In the metro areas, it is still often cheaper to backhaul (using CAP facilities) a few dozen miles than build a POP.

Some RBOCs proposed NAPs in each LATA some years ago when the NAPs were put out to bid by NSF. This was obviously premature. It is still premature, although I bet there are more than a dozen LATAs where exchange is huge (most already have NAPs or MAEs and private exchanges). The local exchanges will be built -- it's just that we don't know whether they will be public or private. A public exchange makes sense if there is a sensible sponsor and lots of local providers with scale. If only a couple of national backbone providers have the required scale, then the local exchange will be a private interconnect.

Remember, the Internet started off as a pervasive but low density technology. It was able to be pervasive only because it used the pervasive telephony infrastructure. It was and is low density because it started from zero. (Duh, you say. Right.) The telephony infrastructure started off entirely local. When there was high density local coverage, there was still no long distance. That came only with the Bell System.

For anything that is two-way, pervasive (~100% coverage), and universal (~100% density) -- what will the traffic patterns be? The Internet is only the second two-way, universal, and pervasive technology ever. The first is the telephone. If you extrapolate from the first to the second, then the answer is that local traffic will dominate long-distance traffic. I think that this will be true in any event, because the tactical issue of paying for bandwidth versus data storage will drive the Internet to local over long-distance traffic patterns, because bandwidth is more expensive than storage (for the foreseeable future).

--Kent
On Wed, 17 Sep 1997, Jay R. Ashworth wrote:
This is happening already, and it pushes one of my buttons _really_ hard.
The geographic locality of reference of the current Internet is pathetic. When my telnet session from St Pete to Tampa, Florida, goes via MAE-East, or worse, MAE-_West_, there's something _seriously_ wrong.
What needs to happen is you need to find a fairly large city in your area where there are many Internet providers, and start a 'GOOD' local exchange. This can be hard to do. We found that many people had a problem with this or that in trying to put together a local exchange. But in the end, it has paid off. Why has it paid off? Because we worked hard to get it to. Yes, we are not done with it as of yet, but we are working to make sure that in the end it will be a solid exchange. http://utah.rep.net/

Some of the things that are making the Utah REP work:

1. Utah Ed Net is on. They do the connections for every k-12, Uni, CC, and Gov agency in UT.
2. The larger providers are on. We worked with the larger in-state providers to join in.
3. Low cost. We were/are not out to make money in this deal. Keep local traffic local...

Now, this would not work if we had two or three local exchanges in Utah. The load would be spread out among all of them, and there is not that much traffic to justify it. I won't go into it much more here as most of the info is on the web page, but getting providers to 'talk' to each other at a local exchange helps out. We currently have 12 talking, with a few more coming on soon.

Christian

BTW, it has taken us about two years to get where we are today... And for those of you who care, here are the peaks from just local traffic:

    -----------Last 12 Capture Intervals----------
        Start Capture Time     Peak Time               Peak Mbps *
     1. Wed Sep 17 00:00:00    Wed Sep 17 12:40:41      5
     2. Tue Sep 16 00:00:00    Tue Sep 16 00:10:48      6
     3. Mon Sep 15 00:00:00    Mon Sep 15 10:21:51      5
     4. Sun Sep 14 00:00:00    Sun Sep 14 22:19:13      5
     5. Sat Sep 13 00:00:00    Sat Sep 13 15:04:00      4
     6. Fri Sep 12 00:00:00    Fri Sep 12 19:44:31      5
     7. Thu Sep 11 00:00:00    Thu Sep 11 15:52:51      7
     8. Wed Sep 10 00:00:00    Wed Sep 10 16:20:03      5
     9. Tue Sep 09 00:00:00    Tue Sep 09 21:46:35      7
    10. Mon Sep 08 00:00:00    Mon Sep 08 16:10:23      5
    11. Sun Sep 07 00:00:00    Sun Sep 07 21:00:34      2
    12. Sat Sep 06 00:00:00    Sat Sep 06 10:13:52      7
Jay R. Ashworth sez:
This is happening already, and it pushes one of my buttons _really_ hard.
The geographic locality of reference of the current Internet is pathetic. When my telnet session from St Pete to Tampa, Florida, goes via MAE-East, or worse, MAE-_West_, there's something _seriously_ wrong.
One major "Crazy Eddie" factor you're not acknowledging is: lines do not cost a linear amount per mile. Sometimes 'longer' costs way less, due to this magical artifact called a "Tariff"... (And Jay, if you start claiming there IS logic & reason in the tariff process, you will get a room next to John Hinkely..) -- A host is a host from coast to coast.................wb8foz@nrk.com & no one will talk to a host that's close........[v].(301) 56-LINUX Unless the host (that isn't close).........................pob 1433 is busy, hung or dead....................................20915-1433
Kent W. England wrote:
At that point a pizza parlor owner says to himself "two out of every five of my customers are on the Internet. Perhaps I need a web page." And, suddenly, pizza on the Net makes a lot of sense and the traffic patterns shift. As the density grows to 90%, local traffic becomes dominant over distant traffic.
Geographically local, not topologically. A *big* difference. Unless we're willing to go back to regulated monopolies, geographical locality makes little difference in overall traffic patterns. --vadim
On Wed, Sep 17, 1997 at 12:44:00PM -0700, Vadim Antonov wrote:
At that point a pizza parlor owner says to himself "two out of every five of my customers are on the Internet. Perhaps I need a web page." And, suddenly, pizza on the Net makes a lot of sense and the traffic patterns shift. As the density grows to 90%, local traffic becomes dominant over distant traffic.
Geographically local, not topologically.
Precisely.
A *big* difference.
Unless we're willing to go back to regulated monopolies, geographical locality makes little difference in overall traffic patterns.
How do you say "bullshit" in Russian? C'mon, Vadim. As the Net, and the Web in particular, grow more geographically dense -- IE: as there _is_ more local stuff for users to look at -- they _will_; people are natively more interested in that which is near to them geographically. And unless we unload that traffic from the backbones and the NAP's, _it_ will be what melts down the net. Cheers, -- jra -- Jay R. Ashworth High Technology Systems Consulting Ashworth Designer Linux: Where Do You Want To Fly Today? & Associates ka1fjx/4 Crack. It does a body good. +1 813 790 7592 jra@baylink.com http://rc5.distributed.net NIC: jra3
locality makes little difference in overall traffic patterns.
How do you say "bullshit" in Russian?
Chush
C'mon, Vadim. As the Net, and the Web in particular, grow more geographically dense -- IE: as there _is_ more local stuff for users to look at -- they _will_; people are natively more interested in that which is near to them geographically.
I think you are forgetting that there will always be companies that provide content that will be the reason for non-local traffic continuing to dominate local (porn webfarms come to mind).

Alex
On Wed, Sep 17, 1997 at 04:28:26PM -0400, Alex "Mr. Worf" Yuriev wrote:
locality makes little difference in overall traffic patterns.

How do you say "bullshit" in Russian?

Chush
Thanks. "Chush."
C'mon, Vadim. As the Net, and the Web in particular, grow more geographically dense -- IE: as there _is_ more local stuff for users to look at -- they _will_; people are natively more interested in that which is near to them geographically.
I think you are forgetting that there will always be companies that provide content that will be the reason for non-local traffic continuing to dominate local (porn webfarms come to mind).
Nope; didn't forget it at all. But they will become a smaller and smaller percentage of the traffic, over time, and they're not pertinent to my argument, anyway, which was: local traffic should be _local_; it shouldn't have to go via Timbuktu. Cheers, -- jra -- Jay R. Ashworth jra@baylink.com Member of the Technical Staff Unsolicited Commercial Emailers Sued The Suncoast Freenet "People propose, science studies, technology Tampa Bay, Florida conforms." -- Dr. Don Norman +1 813 790 7592
Jay R. Ashworth wrote:
Geographically local, not topologically.
Precisely.
A *big* difference.
Unless we're willing to go back to regulated monopolies, geographical locality makes little difference in overall traffic patterns.
How do you say "bullshit" in Russian?
Thank you, I know how to say "Get A Clue" in English.
C'mon, Vadim. As the Net, and the Web in particular, grow more geographically dense -- IE: as there _is_ more local stuff for users to look at -- they _will_; people are natively more interested in that which is near to them geographically.
And unless we unload that traffic from the backbones and the NAP's, _it_ will be what melts down the net.
There's at least one known way to build a network which has no capacity problems at NAPs and does not break down routing scalability. Hint: it does _not_ involve zillions of local exchanges. Rather, it scales up the capacity of relatively few exchange points.

BTW, if you can explain how to build a network which:

a) does not involve geographical/administrative monopolies,
b) can exploit natural geographical locality, and
c) won't die because of routing scalability problems,

we all will be very interested to learn. To my knowledge, nobody has figured that out yet.

--vadim
On Wed, Sep 17, 1997 at 02:34:56PM -0700, Vadim Antonov wrote:
There's at least one known way to build a network which has no capacity problems at NAPs and does not break down routing scalability.
Charge 4x what people are charging now. --Eric -- Eric Wieling (eric@ccti.net), Corporate Communications Technology Sales: 504-585-7303 (sales@ccti.net), Support: 504-525-5449 (support@ccti.net) A BellSouth Communications Specialist. No, I don't work for BellSouth, I'm just on the phone with them so much that I'm an expert at getting them to do things.
On Wed, 17 Sep 1997, Eric Wieling wrote:
On Wed, Sep 17, 1997 at 02:34:56PM -0700, Vadim Antonov wrote:
There's at least one known way to build a network which has no capacity problems at NAPs and does not break down routing scalability.
Charge 4x what people are charging now.
Wrong.... Marc Hurst SKYSCAPE....
--Eric -- Eric Wieling (eric@ccti.net), Corporate Communications Technology Sales: 504-585-7303 (sales@ccti.net), Support: 504-525-5449 (support@ccti.net)
A BellSouth Communications Specialist. No, I don't work for BellSouth, I'm just on the phone with them so much that I'm an expert at getting them to do things.
Eric Wieling wrote:
On Wed, Sep 17, 1997 at 02:34:56PM -0700, Vadim Antonov wrote:
There's at least one known way to build a network which has no capacity problems at NAPs and does not break down routing scalability.
Charge 4x what people are charging now.
Not at all. Actually, from what I've heard (and I talked to a lot of very senior people in emerging and established carriers), we're going to see a rather dramatic fall in the cost of long-haul bandwidth. There are two reasons for that: the first is WDM, and the second is the emergence of datacomm-aware thinking in capacity planning departments. After all, it does not cost a lot more to put 100 strands in the same trench than only 6. The NAP capacity problem does not have anything to do with money. Simply, growth along Moore's Law is too slow for the Internet. --vadim
On Wed, 17 Sep 1997, Jay R. Ashworth wrote:
C'mon, Vadim. As the Net, and the Web in particular, grow more geographically dense -- IE: as there _is_ more local stuff for users to look at -- they _will_; people are natively more interested in that which is near to them geographically.
And unless we unload that traffic from the backbones and the NAP's, _it_ will be what melts down the net.
Well, a better way to handle that particular problem is simply to peer locally with other ISPs. We do that here in Huntsville, peering with 3 of the other ISPs in town, and between the 4 of us we probably have 85% of the local customers. The peering links are over ATM (for one who is a customer) or FR (so they are essentially free given the way BellSouth's FR network is designed). We would be willing to peer with the smaller ones as long as they were willing to run BGP4, which they find difficult on the equivalent of a Cisco 2501 :). In our other cities, we are trying to arrange similar peering connections. John Tamplin Traveller Information Services jat@Traveller.COM 2104 West Ferry Way 205/883-4233x7007 Huntsville, AL 35801
On Wed, 17 Sep 1997, Jay R. Ashworth wrote:
C'mon, Vadim. As the Net, and the Web in particular, grow more geographically dense -- IE: as there _is_ more local stuff for users to look at -- they _will_; people are natively more interested in that which is near to them geographically.
And unless we unload that traffic from the backbones and the NAP's, _it_ will be what melts down the net.
Perhaps, but I don't believe it. See, the really neat thing about the 'net is it *removes* geographical locality as a barrier. People have interests, very specific interests. The people interested in following alt.barney.die.die.die are geographically dispersed, but the Internet brings them together in a virtual community. Search engines, as primitive as they are now, make it much easier to find whatever specific item you're looking for, and odds are overwhelming that it's not on your neighbor's server. I'm sure I'm not the only one who is much more interested in the national news and politics than local. And we've not really begun to explore the "virtual corporation" to see how spread out the pieces of a logical entity are when geography is removed as a barrier. --- David Miller ---------------------------------------------------------------------------- It's *amazing* what one can accomplish when one doesn't know what one can't do!
At 12:44 PM 9/17/97 -0700, Vadim Antonov wrote:
Kent W. England wrote:
At that point a pizza parlor owner says to himself "two out of every five of my customers are on the Internet. Perhaps I need a web page." And, suddenly, pizza on the Net makes a lot of sense and the traffic patterns shift. As the density grows to 90%, local traffic becomes dominant over distant traffic.
Geographically local, not topologically.
A *big* difference.
Unless we're willing to go back to regulated monopolies, geographical locality makes little difference in overall traffic patterns.
--vadim
Not true. It is when geographical locality of traffic becomes significant (let's say 10 percent of the traffic originating in a city is destined for the same city, or even 5 percent, or maybe even 2 percent) that it makes sense to make a very, very strong push into many more local exchanges. I see this eventuality as inevitable, and as such believe encouraging local exchanges to be of prime importance to our ability to route traffic for our customers both inexpensively and quickly. ************************************************************** Justin W. Newton voice: +1-650-482-2840 Senior Network Architect fax: +1-650-482-2844 PRIORI NETWORKS, INC. http://www.priori.net Legislative and Policy Director, ISP/C http://www.ispc.org "The People You Know. The People You Trust." **************************************************************
At 12:44 PM 9/17/97 -0700, Vadim Antonov wrote:
Kent W. England wrote:
At that point a pizza parlor owner says to himself "two out of every five of my customers are on the Internet. Perhaps I need a web page." And, suddenly, pizza on the Net makes a lot of sense and the traffic patterns shift. As the density grows to 90%, local traffic becomes dominant over distant traffic.
Geographically local, not topologically.
A *big* difference.
Unless we're willing to go back to regulated monopolies, geographical locality makes little difference in overall traffic patterns.
--vadim
Not true. It is when geographical locality of traffic becomes significant (let's say 10 percent of the traffic originating in a city is destined for the same city, or even 5 percent, or maybe even 2 percent) that it makes sense to make a very, very strong push into many more local exchanges. I see this eventuality as inevitable, and as such believe encouraging local exchanges to be of prime importance to our ability to route traffic for our customers both inexpensively and quickly.
Justin W. Newton
I agree that geographical locality of traffic is important, but a majority of the local traffic won't be going through these exchanges until the big backbones compromise on their peering policies, and exchange "local pop" sets of routes in peering sessions. I think we can prevent people from pointing their default routes to these interfaces by enthusiastic application of spiked LARTs. --------------------------------------------------------------------------- Andrew W. Smith ** awsmith@neosoft.com ** Network Engineer ** 1-888-NEOSOFT ** "Opportunities multiply as they are seized" - Sun Tzu ** ** http://www.neosoft.com/neosoft/staff/andrew ** ---------------------------------------------------------------------------
In message <3.0.2.32.19970919124946.00a52140@priori.net>, "Justin W. Newton" writes:
At 12:44 PM 9/17/97 -0700, Vadim Antonov wrote:
Kent W. England wrote:
At that point a pizza parlor owner says to himself "two out of every five of my customers are on the Internet. Perhaps I need a web page." And, suddenly, pizza on the Net makes a lot of sense and the traffic patterns shift. As the density grows to 90%, local traffic becomes dominant over distant traffic.
Geographically local, not topologically.
A *big* difference.
Unless we're willing to go back to regulated monopolies, geographical locality makes little difference in overall traffic patterns.
--vadim
Not true. It is when geographical locality of traffic becomes significant (let's say 10 percent of the traffic originating in a city is destined for the same city, or even 5 percent, or maybe even 2 percent) that it makes sense to make a very, very strong push into many more local exchanges. I see this eventuality as inevitable, and as such believe encouraging local exchanges to be of prime importance to our ability to route traffic for our customers both inexpensively and quickly.
In four months our Austin exchange point, with about 3/5 of the local ISPs of significance connected, is exchanging about 400 kbps average. Considering most of these providers are using fractional DS3s, the 10% range is a comfortable number. By the end of the year we (Texas ISPs) expect to have non-MFS exchange points in all the major Texas markets. National backbones are welcome, if they can figure out a solution to first-exit routing and network locality that is acceptable to them. The only real issue for national backbones is legacy configuration issues, and route aggregation/network locality issues with regard to address assignments. For newer networks this should be a no-brainer to solve (although it does involve a bit more work in forecasting growth). --- Jeremy Porter, Freeside Communications, Inc. jerry@fc.net PO BOX 80315 Austin, Tx 78708 | 1-800-968-8750 | 512-458-9810 http://www.fc.net
Kent W. England writes:
Here are some examples:
2. Identify what % of traffic, if any, has regional locality. For pure Internet traffic, the probability that the source and destination of traffic are within the same metropolitan area tends to be low (10% or lower for metros within the US).
This is true only so long as the density of the Internet is low: few of your neighbors will be on the Internet, and therefore local issues are irrelevant. However, at some point, the density of the Internet gets to a critical point, say 30% to 40%. At that point a pizza parlor owner says to himself "two out of every five of my customers are on the Internet. Perhaps I need a web page." And, suddenly, pizza on the Net makes a lot of sense and the traffic patterns shift. As the density grows to 90%, local traffic becomes dominant over distant traffic.
Even in the scenario where physical proximity automatically implied network proximity, I think the assumption that local traffic will dominate communications needs to be revisited. It is true today only because that is how people live their lives and conduct business _today_. The concept of "community" today is geographical; the communities of tomorrow may not be so restricted.
Another example is distributed web hosting. When distributed web hosting takes off, your backbone will be heavily discounted and your peripheral interconnect bandwidth will be woefully short. Web traffic will zoom as performance dramatically improves, but your backbone bandwidth will drop. That breaks your traffic model.
This is true of a business model based around content distribution only. Most ISPs of size will have both publishers and consumers of information, so the backbone utilization should be balanced.
So, by all means, do your traffic studies, but be prepared to throw them out or re-write them when the environment changes. Then throw bandwidth where it will do the most good. :-)
No debate here.
--Kent
--pushpendra Pushpendra Mohta pushp@cerf.net +1 619 812 3908 TCG CERFnet http://www.cerf.net +1 619 812 3995 (FAX)
At 04:23 PM 9/17/97 -0700, Pushpendra Mohta wrote:
Even in the scenario where physical proximity automatically implied network proximity, I think the assumption that local traffic will dominate communications needs to be revisited. It is true today only because that is how people live their lives and conduct business _today_. The concept of "community" today is geographical; the communities of tomorrow may not be so restricted.
I'm not at all convinced that 'local' traffic stays 'local'; in fact, I'd suspect that the latter case which you mention is already true. I'd very much like to see the ratio of traffic which is 'pushed' to that which is 'pulled' from the local exchange, especially at smaller exchanges (e.g. Tucson, Packet Clearing House), to verify these assumptions. Not sure enough solid data can be correlated at the larger exchange points to provide a conclusion. - paul
Well, if I may stick my two cents in from a rainy Russian Sankt Peterburg, I disagree. Sure, the majority of my traffic is not local and I do business with my newsletter all over the world. But I will not be entirely happy with the Internet until I have a locally usable connection to my township city hall where I and my fellow citizens can debate local politics. I want the same connection to my local school board and Internet-using teachers, the same to my county commissioners and my state legislature. I want email to the local library and use of its web site. I want access to local transportation schedules and local businesses and restaurant menus. And yes, maybe even to the local pizza parlor.

As long as we live in **physical places** and pay taxes to local governments, the Internet will not make local geography entirely irrelevant. I cannot predict exact percentages, but I know with certainty there is a LOT of local communication that *I* would like to do that I cannot. Given an increase in the density of local users, local traffic will surely increase.

*********************************************************************** The COOK Report on Internet For subsc. pricing & more than 431 Greenway Ave, Ewing, NJ 08618 USA ten megabytes of free material (609) 882-2572 (phone & fax) visit http://cookreport.com/ Internet: cook@cookreport.com New Special Report: Internet Governance at the Crossroads ($175) http://cookreport.com/inetgov.shtml ************************************************************************

On Wed, 17 Sep 1997, Paul Ferguson wrote:
At 04:23 PM 9/17/97 -0700, Pushpendra Mohta wrote:
Even in the scenario where physical proximity automatically implied network proximity, I think the assumption that local traffic will dominate communications needs to be revisited. It is true today only because that is how people live their lives and conduct business _today_. The concept of "community" today is geographical; the communities of tomorrow may not be so restricted.
I'm not at all convinced that 'local' traffic stays 'local'; in fact, I'd suspect that the latter case which you mention is already true.
I'd very much like to see the ratio of traffic which is 'pushed' to that which is 'pulled' from the local exchange, especially at smaller exchanges (e.g. Tucson, Packet Clearing House), to verify these assumptions. Not sure enough solid data can be correlated at the larger exchange points to provide a conclusion.
- paul
At 08:04 AM 9/18/97 -0400, Gordon Cook wrote:
Well, if I may stick my two cents in from a rainy Russian Sankt Peterburg, I disagree. Sure, the majority of my traffic is not local and I do business with my newsletter all over the world. ...
Gordon, you're the exception that proves every rule. Cheers. --Kent
At 04:23 PM 9/17/97 -0700, Pushpendra Mohta wrote:
Even in the scenario where physical proximity automatically implied network proximity, I think the assumption that local traffic will dominate communications needs to be revisited. It is true today only because that is how people live their lives and conduct business _today_. The concept of "community" today is geographical; the communities of tomorrow may not be so restricted.
True, it's an assumption, but as I said in another message, the only other example we have of such a network is the telephone network. And, given the choice, why wouldn't most people join a local community rather than a far-away or abstract community? But there is not much point in arguing about this -- let's just keep our eyes on the traffic patterns and see what happens and adjust accordingly.
Another example is distributed web hosting. When distributed web hosting takes off, your backbone will be heavily discounted and your peripheral interconnect bandwidth will be woefully short. Web traffic will zoom as performance dramatically improves, but your backbone bandwidth will drop. That breaks your traffic model.
This is true of a business model based around content distribution only. Most ISPs of size will have both publishers and consumers of information, so the backbone utilization should be balanced.
I see a lot of asymmetries today. Some service providers have a lot of business access connections, some have mostly web hosting, and some have mostly retail eyeballs. Of course, CERFnet may be better able to balance than most, but I expect you'll support whatever sells, whether it balances or not. :-) Cheers. --Kent
participants (19)
- Alex "Mr. Worf" Yuriev
- Andrew Smith
- Christian Nielsen
- David Lesher
- david@sparks.net
- Eric Wieling
- Gordon Cook
- Jay R. Ashworth
- Jeremy Porter
- John A. Tamplin
- Josh Beck
- Justin W. Newton
- Kent W. England
- Marc Hurst
- Michael Dillon
- Paul Ferguson
- pkavi@pcmail.casc.com
- Pushpendra Mohta
- Vadim Antonov