Draft minutes for the IETF Network Status Reports session -- please comment.
GREAT THANKS to Marsha Perrott for chairing the meeting in my absence and to Rob Reschly for the notes. There are a couple of places where the note-taker was not sure of the details; please check and confirm the places marked with (?). Thanks, Gene
Minutes of the Toronto IETF'30 Netstat Working Group
================================================================
Submitted to: perrott@prep.net
Submitted by: reschly@arl.mil
================ CoREN: Scott Bradner
The status of the CoREN Network Services Request for Proposals (RFP) process was briefed. Scott emphasized one key feature of this RFP: it will result in a contract to provide services to the regionals, not in a contract to build a backbone to interconnect regionals. Since they are buying a service, CoREN expects to be one customer among many using the same service.
CoREN does not want to have to rely on the NAPs for everything. CoREN feels NAPs and RAs are a good idea, but....
Scott observed that dollars flow from the NSF to the regionals, to fully connected network service providers (NSPs), to the NAPs. The only NSPs eligible to provide connectivity paid for by NSF funding are those which connect to all of the primary NAPs (NY, IL, CA).
The CoREN provider will establish connectivity to all primary NAPs, MAE-East, and the CIX.
Scott was asked about planned NOC responsibilities: NOC integration and coordination are being worked on. Discussion points include relative responsibilities, e.g. the NEARnet vs. CoREN provider hand-off.
When asked for information on non-CoREN American provider plans, Scott knew of at least two providers who will be at other NAPs. Scott indicated MCI will be at the Sprint NAP soon; others later.
As for the CoREN RFP evaluation, more than one of the proposals was close from a technical perspective, and they were close financially. The selected provider came out ahead on both measures and additionally offered to support a joint technical committee to provide a forum for working issues as they arise. In particular, early efforts will focus on quantifying QoS issues, as these were intentionally left out of the specification so they can be negotiated as needed (initially and as the technology changes).
The circuits are coming in and routers (Cisco 7000s) are being installed in the vendor's PoPs this week. First bits will be flowing by 1 August. Line and router loading and abuse testing is expected to commence by 15 August, and production testing should be underway by 15 September. Cutover is expected before 31 October.
Someone noted there may be some sort of problem related to route cache flushing in the current Cisco code which could impact deployment.
================ NC-REN (formerly CONCERT): Tim Seaver
CONCERT is a statewide video and data network operated by MCNC.
- primary funding from the State of NC
- currently 111 direct, 32 dialup, and 52 UUCP connections
- 30K+ hosts
- 4.5Mbps inverse-multiplexed 3xDS1 link to the ANS PoP in Greensboro, NC
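(As a rough check on that last figure -- back-of-the-envelope arithmetic, not from the talk: three DS1s give about 3 x 1.536 Mbps = 4.6 Mbps of payload capacity, so the ~4.5Mbps number presumably reflects inverse-multiplexing overhead or rounding.)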
Replaced by NC-REN:
- expands to North Carolina Research and Education Network
- DNS name is changing from concert.net to ncren.net
Service changes:
- dropping commercial services
- concentrating on R&E
- focus on user help
Main reason for name change:
- British Telecom and MCI wanted the CONCERT name. MCNC never registered CONCERT.
In return MCNC management wanted:
- NC service community more prominent
- alignment with NREN
- emphasis on R&E
Press release: 15 April

Conversion to ncren.net in progress:
- Domain registered February 1994
- Local changes simple but time-consuming
- Remote changes hard and time-consuming
- Targeting 1 October completion; fairly sure of conversion by 31 October
- Decommission CONCERT by 1 January 1995
Existing service problems:
- Help desk overloaded by dialup UNIX shell accounts
- Commercial providers springing up everywhere
- The Umstead Act (an NC state law) says state funds cannot subsidize competition with commercial services.
- CONCERT had sufficient non-governmental funding to cover commercial services, but accounting practices could not prove separation, so they decided to just stop.
Service changes:
- Turned over dialup UNIX shell connectivity to Interpath in March 1994
- Planning to stop providing commercial IP and UUCP services by October 1994
- Planning to stop providing commercial direct services by 1 January 1995
- Will continue direct connects, IP, and UUCP for government, research, and education customers
Plans:
- Pursuing new R&E customers: remaining private colleges, community colleges, K-12 schools, state and local government, libraries (?)
- Providing security services: firewalls, Kerberos, PEM, secure DNS, secure routing
- Expanding information services: m-bone, NC state government documents, WWW services, and consultation -- to provide more access
- Internet connection will be upgraded to 45Mbps in October 1994
- Work on the NC Information Highway (NCIH)
In response to a question about NC microwave trunking he noted that the Research Triangle Park area is at 45Mbps and remote areas are at 25Mbps.
In passing he noted that ATM interaction with the research community is an interesting opportunity, indicating that Southern Bell, GTE, and Carolina Telephone are working on ATM infrastructure.
In response to a question about the number of sites changing to NC-REN he stated there were about 20 R&E direct connections which would move, and that the narrowed focus of the NC-REN would not change the cash flow model significantly.
================ "Transition from NSFnet Backbone to the NAPland": Sue Hares
Available via WWW at URL: http://rrdb.merit.edu
If mid-level networks want to send Sue information concerning any aspects of plans to transition, please do. Also indicate what can be published (this second permission is hard) -- Sue will respect confidentiality requirements. They desperately need information about local and regional plans so they can manage the transition for NSF.
NOTE: The following is incomplete because Sue went through it very quickly. However, as a teaser if nothing else, some of the information on the slides available at the above URL is included below, as well as most of the significant discussion....
NAP online dates:
  Sprint NAP   11 August
  PacBell      mid-September
  Ameritech    26 September
Currently scheduled NSFnet service turn-down. Note this does not say anything about tangible infrastructure changes, only NSFnet service plans. That is, NSF says they intend to stop paying for the forwarding of traffic via the indicated ENSSs, no more, no less:
Category 1 CoREN (first round; numbers are ENSSs):
  BARRnet    128
  SURAnet    138 136
  SESQUInet  139
  MIDnet     143
  CICnet     130 129 131
  NYSERnet   133
  NEARnet    134
  NWnet      ???  (Sue missed this one on her slide)
In conversation it was reported that PREPnet is not to use the PSC connection for access after 1 October.
The real message is that these and following numbers are "official notification" for management planning. It was recommended to "flick the lights" before actual turn-off -- i.e. install the replacement connectivity and turn off the NSFnet connection to see what breaks.
Again Sue pleaded for information as it becomes available and permission to announce it as soon as possible.
Category 2 Regional ENSSs:
  Argonne    130
  PREPnet    132
  CA*net     133 143 137
  ALTERnet   134 136
  PSI        136 133
  JvNCnet    137
  THEnet     139
Category 3 Regional ENSSs:
  MICHnet    131
NOTE: More complete information concerning the above is available online.
Sue reiterated that the "decommissionings" refer simply to an organization's status as a recipient of NSFnet services. It would be a good idea for each affected organization to talk to any or all service providers between the organization and the NSFnet for details about other aspects of the connection.
As for the differences between the categories: category 1 is primarily CoREN, category 2 is the other regionals, and category 3 includes supercomputer sites and less firmly planned sites.
More information welcomed: Anyone got a contract from NSF? Anyone want to tell Sue their NSP? Got some private announcements, need more.
Want information to forward to NSF even if not public. Will respect privacy, but important to inform NSF even if caveated by "may change because..."...
When asked about the time-lines for the various categories, it was stated that NSF wants to have the category 1 sites switched off the NSFnet by 31 October. Beyond that, it is currently phrased as a best effort task.
There was some discussion about CoREN test and transition plans: Note that load and trans-NAP plans are still being worked. There appears to be significant concern about not taking any backwards steps.
Someone proposed working out a bilateral testing agreement. This provoked discussion of a tool called offnet (?) (and some nice tools Hans-Werner Braun has written). Some or all of these tools will be made available by Merit; however, it was stressed that use by the regionals is intended to instrument local sites, and Merit cannot allow additional connections to the NSFnet backbone monitoring points.
================ NSFnet statistics: Guy Almes
Traffic is still doubling! Traffic topped 70 Gigapackets per month in May and June.
Guy noted that the December 94 chart will be interesting -- how to measure, and what makes sense to measure, is new in a backboneless regime. There will be a transition from traffic into the backbone to traffic into multiple whatevers. Should any resulting numbers be counted? It was observed that it would be hard to avoid double counting in such an environment.
The general consensus was that there is a need to pick an appropriate set of collection points: e.g. transition from BARRnet to/from NSF to BARRnet to/from CoREN provider.
One position contends that we really want customer-to-BARRnet data rather than BARRnet-to-CoREN-provider data. However, it was observed that this is not tractable or trackable.
Other statistics show:
  952 aggregates currently configured in AS690
  751 announced to AS690
  6081 class-based addresses represented
There were two additional slides depicting: 1) IBGP stability: the solid line is the percentage of IBGP sessions which have transitions during the measurement intervals, and 2) external route stability: the solid line is external peers.
Data collection is once again in place on backbone and has been operational since 1 June.
In conversation, it was noted that the Route Servers will be gathering statistics from the NAPs. The Route Servers will be gated engines and will be located at the NAPs
UPDATES: ANS router software activity

Software enhancements: RS960 buffering and queueing microcode updated
- increased number of buffers; also went from max-MTU-sized buffers to 2+kB chainable buffers (max FDDI will fit in two buffers with room to spare).
- dynamic buffer allocation within card
-- two together really improve dynamic burst performance
Design for improved end-to-end performance
- Based on Van Jacobson and Floyd random early drop work.
- End-to-end performance is limited by bandwidth delay product
- Current protocols deal gracefully with a single packet drop, but multiple dropped packets push the algorithm into slow start. With "current" Van Jacobson code, even brief congestion in the path will cause things to back off even under low-end loadings.

Work shows that Random Early Drop slows things just enough to avoid congestion without putting particular flows into slow start; a rough sketch of the drop decision follows below.
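To make the mechanism concrete, the following is a minimal sketch in C of the random-early-drop decision described above, loosely following the Floyd/Jacobson scheme. The threshold values, the queue-weight constant, and the function name red_should_drop are illustrative assumptions only; this is not the ANS RS960 microcode or any shipping router code.

#include <stdlib.h>

/* Illustrative parameters (assumed, not from the slides). */
#define RED_MIN_THRESH   5.0     /* avg queue length where probabilistic drop begins */
#define RED_MAX_THRESH  15.0     /* avg queue length where every arrival is dropped  */
#define RED_MAX_PROB     0.02    /* drop probability as avg approaches RED_MAX_THRESH */
#define RED_QUEUE_WEIGHT 0.002   /* EWMA weight for the average queue length          */

static double avg_qlen = 0.0;    /* exponentially weighted average queue length */

/* Returns 1 if the arriving packet should be dropped, 0 if it may be queued. */
int red_should_drop(int instantaneous_qlen)
{
    double drop_prob;

    /* Smooth the queue length so short bursts are tolerated. */
    avg_qlen = (1.0 - RED_QUEUE_WEIGHT) * avg_qlen
             + RED_QUEUE_WEIGHT * (double)instantaneous_qlen;

    if (avg_qlen < RED_MIN_THRESH)
        return 0;                /* no congestion building up: accept the packet */

    if (avg_qlen >= RED_MAX_THRESH)
        return 1;                /* persistent congestion: drop everything */

    /* In between, drop with a probability that rises linearly with the
     * average queue length, so individual flows tend to see a single
     * early loss and back off gently rather than losing whole windows
     * and collapsing into slow start together. */
    drop_prob = RED_MAX_PROB * (avg_qlen - RED_MIN_THRESH)
              / (RED_MAX_THRESH - RED_MIN_THRESH);

    return ((double)rand() / (double)RAND_MAX) < drop_prob;
}

The point, as noted above, is that drops begin probabilistically while the average queue is still modest, so no single flow sees a burst of losses.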
In passing, Guy noted that he figures the speed of light as roughly 125 mi/ms on general phone company stuff.
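(As a rough check on that rule of thumb -- back-of-the-envelope arithmetic, not from the talk: signals in fiber propagate at about two-thirds of c, roughly 200 km/msec, which works out to about 124 mi/msec, consistent with the 125 mi/msec figure. A ~3000 mile cross-country path then contributes on the order of 24 msec of propagation delay each way, i.e. close to 50 msec of round-trip time before equipment and routing detours are counted.)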
The conditions and results were summarized on two slides:
+ Single flow Van Jacobson random early drop:
41Mbps at 384k MTU cross-country (PSC to SDSC?)
This code (V4.20L++) is likely to be deployed in a month or so.
By way of comparison Maui Supercomputer center to SDSC was 31Mbps using an earlier version of code with 35 buffers. Windowed ping with the same code did 41Mbps.
+ Four flow Van Jacobson random early drop:
42Mbps at 96kB MTU.
All the numbers are with full forwarding tables in the RS960s
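(A rough consistency check on those figures -- back-of-the-envelope arithmetic, and it assumes the 384k and 96kB numbers are TCP window sizes rather than literal MTUs: at 41Mbps over a roughly 70 msec cross-country round trip, the bandwidth-delay product is about 41,000,000 b/s x 0.070 s / 8 = ~360 kBytes in flight, which lines up with a 384kB aggregate window, i.e. 4 x 96kB in the four-flow case. Filling a DS3-speed path at that distance requires windows of roughly that size.)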
In other news...:
+ SLSP support for broadcast media completed
+ Eliminated fake AS requirement for multiply connected peers.
+ Implemented IBGP server.
...
Pennsauken (the Sprint NAP) is an FDDI in a box.
================ CA*net: Eric Carroll
All but three backbone links are now at T1 and there are dual T1s to each US interconnect.
Pulled in Canadian government networks. Using Ciscos to build network.
Still seeing 8-10x US costs for service. CA*net will grow to DS3 when they can get it and afford it (!).
Numbers on the map slide are percentage utilization. Note that 12 routers were installed between mid-March and the end of April and these are early numbers. Note that the British Columbia to NWnet T1 link went to saturation in 5 hours. This appears to be due to pent-up demand, not particular users or programs.
The 7010 roll-out had a lot of support from Cisco. Ran into some problems with the queuing discipline on FT1 lines.
Still doing NNSTAT on an RT for now, but working with an RMON vendor to get stuff for a new implementation.
When asked about using inverse multiplexors for increased bandwidth, Eric indicated CA*net was currently just using Cisco's load sharing to US, however they would be considered when needed.
A question was raised about CA*net connectivity plans in light of the impending NSF transition. Currently international connectivity is just to US, specifically to the US R&E community. There is some interest and discussions for other international connectivity, but cost and other factors are an issue.
CA*net hopes to place NSF connectivity order by next week.
Biggest concern is the risk of becoming disconnected from what Eric termed the R&E affinity group.
CA*net currently carries ~1000 registered and ~900 active networks.
CA*net is not AUP free, instead it is based on a transitive AUP "consenting adults" model. If two Canadian regionals or providers agree to exchange a particular kind of traffic then CA*net has no problem.
CA*net just joined CIX which prompted a question as to whether Onet is a CIX member. In response Eric characterized CA*net as a cooperative transit backbone for regional members. Therefore CA*net joining CIX is somewhat meaningless in and of itself, and, by implication, is only meaningful in the context of the regionals and providers interacting via CA*net.
In response to another question, Eric indicated that CA*net is still seeing growth.
================ MAE-East Evolution: Andrew Partan
(MAE == Metropolitan Area Ethernet)
Andrew volunteered to conduct an impromptu discussion of MAE-EAST plans
There is an effort underway to install a FDDI ring at the MFS Gallows Rd PoP and connect that ring to MAE-East using a Cisco Catalyst box.
MAE-East folks are experimenting with GDC Switches
Is there a transition from MAE-East to the SWAB?: Unknown
(SWAB == SMDS Washington [DC] Area Backbone)
MFS DC NAP is proposing to implement using NetEdge equipment.
Any MAE-East plans to connect to MFS NAP?: Unknown.
ALTERnet is currently using a Cisco Catalyst box and is happy.
Time-frame for implementing MAE-East FDDI?: Not yet; still need management approval. Hope to have a start in the next several weeks.
Those interested in MAE-EAST goings-on and discussions with members should join the mailing list MAE-East[-request]@uunet.uu.net
For what it may be worth, they "had to interrupt MAE-LINK for 5 seconds this week to attach an MCI connection".
In summary (in response to a question): one would contract with MFS for connectivity to MAE-East. Then one would need to individually negotiate pairwise arrangements with other providers with which there was an interest in passing traffic. As far as is known there are no settlements currently, but no one could say for sure.
================ Random Bits:
SWAB (SMDS Washington Area Backbone): In response to a point of confusion, it was stated that the SWAB bilateral agreement template is just a sample, not a requirement.
CIX: The CIX router is getting a T3 SMDS connection into the PacBell fabric. ALTERnet and PSI are doing so too. CERFnet currently is on.
Noted in passing: Each SMDS access point can be used privately, to support customers, to enhance backbone, etc.... This could have serious implications for other provider agreements.
CERFnet: Pushpendra Mohta (? -- not at the meeting) is reported to be happy, but the group understood that most CERFnet CIRs are at 4Mbps over T3 entrance facilities. PacBell was reportedly running two 200Mbps (is this really correct? seems rather low) backplane-capacity switches interconnected with a single T3. Planning to increase provisioning -- already have a lot of demand.
Please indicate that anyone wanting the offnet code should contact me (skh@merit.edu) and also copy merit-ie@merit.edu on the request. Steve R. from Merit made the corrections on the ENSS. There is a page at http://rrdb.merit.edu/home.html that will give pointers to NAP information as we can pull it in and put it up.

Sue Hares
> CERFnet: Pushpendra Mohta (? -- not at the meeting) is reported to be happy, but the group
> understood that most CERFnet CIRs are at 4Mbps over T3 entrance facilities. PacBell was
> reportedly running two 200Mbps (is this really correct? seems rather low) backplane-capacity
> switches interconnected with a single T3. Planning to increase provisioning -- already have
> a lot of demand.
PacBell operates two switches in the Bay Area: one in San Francisco and one in Santa Clara. The former is practically full, and the latter is brand new. All new T3 orders will end up on the Santa Clara switch. It is true that the backplane of the switch is only 200Mbps. Because the Santa Clara switch is new, the switches are interconnected by only one T3 link. However, the switches are capable of more than one T3 link, and the product manager at PacBell (Dick Shimizu) has assured me that enough demand would warrant a new T3 between the switches, etc.

Providers thinking of buying T3-level services should specify the Santa Clara switch, although it should end up being used anyway. I have alerted the product manager and he will ensure that T3 circuits are on the SC switch. A new switch is being planned for early next year, although enough demand will accelerate that deployment as well.

--pushpendra

Pushpendra Mohta          pushp@cerf.net      +1 619 455 3908
Director of Engineering   pushp@sdsc.bitnet   +1 800 876 2373
CERFNet
> CIX: The CIX router is getting a T3 SMDS connection into the PacBell fabric. ALTERnet and
> PSI are doing so too. CERFnet currently is on.
In addition, PSI installed an SMDS switch in Santa Clara several weeks ago which has a gigabit backplane. So, if there is a problem with CIX SMDS throughput, there is a "net": using point-to-point T3s and multiple SNIs on the CIX (PacBell) SMDS connection, remapped into another (PSI's) switch.

Marty
but the group understood that most CERFnet CIRs are at 4Mbps over T3 entrance facilities. PacBell was reportedly running two 2OOMbps (is this the really correct, seems rather low?) backplane capacity switches interconnected with single T3. Planning to increase provisioning -- already have a lot of demand.
Pacbell operates two switched in the Bay Area. One in San Francisco and one in Santa Clara. The former is practically full , and the latter is brand new. All new T3 orders will end up on the Santa Clara switch. It is true that the backplane of the switch is only 200Mbps.
Because the Santa Clara switch is new, the switches are interconnected by only one T3 link. However, the switches are capable of more than one T3 link and the product manager at Pacbell ( Dick Shimizu ) has assured me that enough demand would warrant a new T3 between the switches etc.
Providers thinking of buying T3 level services should specify the Santa Clara switch although it should end up being used anyway. I have alerted the Product manager and he will ensure that T3 circuits are on the SC switch.
A new switch is being planned for early next year, although enough demand will accelerate that deployment as well
--pushpendra
Pushpendra Mohta pushp@cerf.net +1 619 455 3908 Director of Engineering pushp@sdsc.bitnet +1 800 876 2373 CERFNet
Gene,

Sounds like a great meeting. Too bad I couldn't make it. Some extremely minor corrections and additional information. Guy presented this but I did the actual testing.
> UPDATES: ANS router software activity
> Software enhancements: RS960 buffering and queueing microcode updated
> - increased number of buffers; also went from max-MTU-sized buffers to 2+kB chainable
>   buffers (max FDDI will fit in two buffers with room to spare).
We are using 2kB buffers. A FDDI packet fits into 3 buffers. The advantage is that most real world packets are still ethernet MTU or less and take up less space using the new scheme. We still got the buffering up for FDDI packets and used full FDDI MTU in testing.
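(For reference, using the usual published frame sizes rather than anything from the slides: the FDDI maximum frame is about 4500 bytes, so with 2048-byte buffers that is ceil(4500/2048) = 3 buffers for a maximum-size FDDI packet, while a 1500-byte Ethernet-MTU packet fits in a single buffer -- consistent with Curtis's point that most real-world packets take less space under the new scheme.)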
[ ... ]
> The conditions and results were summarized on two slides:
>
> + Single flow Van Jacobson random early drop:
>   41Mbps at 384k MTU cross-country (PSC to SDSC?)
This was on our testnet. We took NY to Ann Arbor down and went by way of Texas (MCI) giving us 68 msec RTT. NY to SF is 70 msec so it is roughly cross country equivalent. Of course we couldn't step on poor unsuspecting users in the middle of the night by congesting the net, and they couldn't provide a "realistic background load" for our testing. We'd like to see a (brief) validation of results after various steps in deployment and have support from PSC and (I think) SDSC to do this.
> This code (V4.20L++) is likely to be deployed in a month or so.
It doesn't have an official name and has no firm deployment plans. A month or so would be very optimistic. Some of the changes are already deployed since the Maui testing and others will deploy soon but others (RED) have no plans (yet). We'll validate progress as this stuff gets deployed and hopefully it will all get deployed (soon).
> By way of comparison Maui Supercomputer center to SDSC was 31Mbps using an earlier version
> of code with 35 buffers. Windowed ping with the same code did 41Mbps.
MHPCC (Maui) to SDSC is a 50 msec RTT. So we went faster on a longer RTT path.
> + Four flow Van Jacobson random early drop:
>   42Mbps at 96kB MTU.
>
> All the numbers are with full forwarding tables in the RS960s
We inject full routing into the testnet but don't allow packet forwarding between the testnet and the production net.

Curtis

BTW - I gave numbers to Guy that I've since revised down slightly. I changed the way I estimate link utilization in multiple-flow tests so that I will generally be more accurate and will underestimate performance if inaccurate. It's closer to 40 Mb/s for 1 flow and 41 Mb/s for 4 flows (we did hit 42 Mb/s on 8 flows) on the above tests, and those results were before we tested RED. The RED code was written just before IETF and wasn't performance tested until just after. In the enthusiasm over "we have RED" I may not have conveyed that the graphs were prior to having had a chance to test RED. Could you just quietly revise the numbers down by one in the minutes? I plan to give a detailed update on the performance testing at NANOG.