John Scudder then discussed some modeling he and Sue Hares have done on the projected load at the NAPs. The basic conclusions are that the FDDI technology (at Sprint) will be saturated sometime next year, and that load-balancing strategies among NSPs across the NAPs are imperative for the long-term viability of the new architecture. John also expressed concern over the lack of an expressed policy for the collection of statistical data by the NAP operators. All of the NAP operators were present and stated that they will collect data, but that there are serious and open questions concerning the privacy of that data and how to publish it appropriately. John said that collecting the data was most important: without the data, there is no source information from which publication becomes possible. He said that Merit/NSFNET had already tackled these issues. Maybe the NAP operators can use this previous work as a model to develop their own policies for publication.
Merit/NSFNet already tackled these issues in an insufficient and unopen manner. Mark Fedor and Cole Libby from PSI said there was a "quiet" admission that the old methodology was already "approved" for the Sprint NAP.

Marty
There was no "quiet" admission and no "approval". This is still very much an open question. However, Sprint (at least) will not sit by and do nothing. We have created potential choke points in the Internet architecture, and it would be irresponsible of us (as a community) not to do anything about it.

Given that:

a) NAP operators need traffic numbers (preferably an NSP-NSP matrix, but at least aggregate counts at small time granularities) to ensure capacity keeps ahead of demand, and

b) there is a *legitimate* concern about the security of data collection mechanisms and the privacy and availability of collected data,

the question is: how can we serve both interests?

I think one scalable solution is for NSPs, which presumably collect data for their own requirements, to report on traffic traversing NAPs. Compiling statistics across all NSPs at a cross-connect point could provide us with an accurate picture of traffic levels. NAP providers must be held responsible for ensuring that data is protected, even from sister organizations (such as Ameritech vs. AADS IP service, or Sprint vs. SprintLink, etc.). Of course, if you don't trust the NAP providers themselves, we have a slightly bigger problem :)

ATM NAPs today will not allow a central collection point for network-layer stats (per-PVC cell counts are presumably available). The Sprint NAP will be similarly constrained when we move to a central FDDI switch.

What is the right threshold between data collection and privacy/security?

-- Bilal
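The compilation step Bilal proposes can be sketched in a few lines. This is a hypothetical illustration, not any NAP operator's actual mechanism: each NSP is assumed to report (source NSP, destination NSP, byte count) samples for traffic it carried across the exchange, and a neutral compiler merges them into an aggregate NSP-to-NSP matrix without ever seeing per-customer detail.

```python
from collections import defaultdict

def compile_matrix(reports):
    """Merge per-NSP reports into an aggregate NSP-to-NSP traffic matrix.

    reports: iterable of (src_nsp, dst_nsp, byte_count) tuples, as each
    NSP might self-report them (names and tuple layout are assumptions).
    """
    matrix = defaultdict(int)
    for src, dst, nbytes in reports:
        matrix[(src, dst)] += nbytes
    return dict(matrix)

# Example: two NSPs reporting traffic across one hypothetical NAP.
reports = [
    ("NSP-A", "NSP-B", 1_000),
    ("NSP-A", "NSP-B", 500),
    ("NSP-B", "NSP-A", 250),
]
print(compile_matrix(reports))
# {('NSP-A', 'NSP-B'): 1500, ('NSP-B', 'NSP-A'): 250}
```

Because only aggregate counters leave each NSP, the compiler learns traffic levels between providers but nothing about individual customers, which is one way to serve both interests named above.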
Merit/NSFNet already tackled these issues in an insufficient and unopen manner.
Mark Fedor and Cole Libby from PSI said there was a "quiet" admission that the old methodology was already "approved" for the Sprint NAP.
Marty
I am curious to know what you think is "correct" instrumentation. Clearly, in simple cases, capacity planning can be done with link statistics. But it is not possible to compare the effective (useful?) capacities of two different topologies without some form of traffic matrix. I have observed real situations where "better" topologies turned out to be worse, due to unanticipated large traffic streams between sites. Even a simple task, like inserting a bridge to raise the performance of an overloaded Ethernet, quickly degenerates into an exercise in combinatorics unless guided by a MAC-layer traffic matrix.

I take it as a given that the Internet needs pro-active topology management encompassing the upper-level ISPs and interconnects. Given that, what statistics and procedures would you suggest?

--MM--

P.S. I understand that there are business reasons for customers to want to hide their traffic. However, a full 20k-route by 20k-route traffic matrix is about 2 GB per sample interval, which is pretty hard to inhale. The best possible data collection must be pretty sparse, due to CIDR or other aggregation, coarse time scales, and limited visibility. This really seems like the hard way to snoop on somebody else's business. Unless you are aware of examples.....

--MM--
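The ~2 GB figure in the postscript checks out as back-of-the-envelope arithmetic. A quick sketch, assuming roughly 5 bytes of storage per matrix cell (the per-cell size is my assumption; the original does not state one):

```python
# Back-of-the-envelope size of a full route-by-route traffic matrix.
routes = 20_000                 # 20k routes, as in the message
bytes_per_cell = 5              # assumed storage per counter
cells = routes * routes         # 400 million route pairs
total_bytes = cells * bytes_per_cell
print(cells)                    # 400000000
print(total_bytes / 1e9)        # 2.0 (GB per sample interval)
```

At four or eight bytes per counter the total is 1.6 GB or 3.2 GB instead, so "about 2 GB" is the right order of magnitude either way, which is the point of the argument: a dense matrix at this scale is impractical to collect, and any real collection must be far sparser.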
Marty, I think it would be more productive to:

1) clearly express your concerns (without the innuendo and quotation marks when you are not quoting, and drop the attempts to shade the discussion as if the collection of traffic statistics is being done for bad purposes), and

2) identify mechanisms which meet your concerns and at the same time allow the data to be used for traffic engineering and observation of trends in the Internet. (I know that you will try to twist this last phrase, but I don't want to spend the time to make it inductively correct and un-twistable..., so when you quote me please be sure to include this disclaimer.)

My bet is that there is a simple and happy common ground that could be found wrt these issues. It will require you and others to sit down and build a consensus for a data collection framework. Rather than sniping, why don't you help bring this consensus about?
Merit/NSFNet already tackled these issues in an insufficient and unopen manner.
I believe Merit's process of data collection conforms to the only document in the Internet RFC series that discusses policies for traffic collection. With your current line of logic, it would appear your conclusions about an unopen process are fallacious. If you want a different policy, then I suggest you build a consensus on what a better policy would be. To be a "sufficient" policy, one cannot expect Merit or any small subset of the community to posit a replacement for an established policy; there needs to be broad participation.

My sense is that the Sprint NAP policy is based on discussions and input that Sprint has gained from a wide variety of parties, including a broad-ranging discussion at the NANOG held in Ann Arbor in early summer 1994. Perhaps you would care to ground your discussion of Sprint's policies in facts, and perhaps even discuss specifics?

Marty, you can do better.

cheers, peter
participants (4)
-
Bilal Chinoy
-
Martin L. Schoffstall
-
Matt Mathis
-
Peter S. Ford