NANOG, October 24 & 25, 1994
(version 2)
Notes by Stan Barber <sob(a)academ.com>
Thanks to Guy Almes and Stan Borinski for their corrections and additions to
these notes.
[Please note that any errors are mine, and I'd appreciate corrections being
forwarded to me. The first version of this document is now available at the
following URL: http://rrdb.merit.edu/nanaog.octminutes.html]
Elise Gerich opened the meeting with Merit's current understanding of the
state of the transition. THENET, CERFNET and MICHNET have expressed
specific dates for transition. The current NSFNET contract with Merit will
terminate on April 30, 1995.
John Scudder then discussed some modeling he and Sue Hares have done on the
projected load at the NAPs. The basic conclusions are that the FDDI
technology (at Sprint) will be saturated sometime next year and that
load-balancing strategies among NSPs across the NAPS is imperative for the
long term viability of the new architecture. John also expressed concern
over the lack of expressed policy for the collection of statistical data by
the NAP operators. All of the NAP operators were present and stated that they
will collect data, but that there are serious and open questions concerning
the privacy of that data and how to publish it appropriately. John said
that collecting the data was most important. Without the data, there is no
source information from which publication becomes possible. He said that
MERIT/NSFNET had already tackled these issues. Maybe the NAP operators can
use this previous work as a model to develop their own policies for
publication.
After the break, Paul Vixie discussed the current status of the DNS and
BIND. Specifically, he discussed DNS security. There are two reasons why
the DNS is not secure. There are two papers on this topic, and both are in
the current BIND kit. So the information is freely available.
Consider the case of telneting across the Internet and getting what appears
to be your machine's login banner. Doing a double check (host->address,
then address->host) will help eliminate this problem. hosts.equiv and
.rhosts are also sources of problems. Polluting the cache is a real
problem. Doing UDP flooding is another problem. CERT says that doing rlogin
is bad, but that does not solve the cache pollution problem.
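As an illustration only (this sketch is mine, not from the talk), the double
check can be written in a few lines of Python; an address is trusted only if
address->host and then host->address round-trip back to the original address:

    import socket

    def double_check(addr):
        # Accept addr only if addr -> name -> addr round-trips.
        try:
            name = socket.gethostbyaddr(addr)[0]          # address -> host
            forward = socket.gethostbyname_ex(name)[2]    # host -> addresses
        except socket.error:
            return False
        return addr in forward

    # e.g. double_check("198.41.0.4"); the result depends on live DNS data.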
How to defend?
1. Validate the packets returned in a response to the query. Routers should
drop UDP packets on which the source address doesn't match what it should be
(e.g., a UDP packet comes in on a WAN link when it should have come in via an
Ethernet interface). TCP is harder to spoof because of the three-way
handshake; however, running all DNS queries over TCP would add too much
overhead to this process. (A small sketch of this check follows point 2 below.)
2. There are a number of static validations of packet format that can be
done. Adding some kind of cryptographic information to the DNS would help.
Unfortunately, this moves very slowly because there are a number of strong
conflicting opinions.
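The interface check in point 1 can be sketched as follows (again my own
illustration, with an invented prefix table and invented interface names): look
up which interface the source address should have arrived on and drop the
packet if it arrived elsewhere.

    from ipaddress import ip_address, ip_network

    ROUTES = {                                   # prefix -> interface it should arrive on
        ip_network("192.168.0.0/16"): "ether0",
        ip_network("0.0.0.0/0"): "wan0",
    }

    def expected_interface(src):
        addr = ip_address(src)
        best = max((n for n in ROUTES if addr in n), key=lambda n: n.prefixlen)
        return ROUTES[best]

    def accept(src, arriving_interface):
        return expected_interface(src) == arriving_interface

    # accept("192.168.1.5", "wan0") -> False: looks spoofed, drop it
    # accept("10.9.8.7", "wan0")    -> True: consistent with the routing table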
What is being done?
The current BETA of BIND has almost everything fixed that can be fixed
without a new protocol. Versions prior to 4.9 are no longer supported.
Paul may rewrite this server in the future, but it will still be called
named because vendors have a hard time putting it into their releases if it
is called something else.
Paul is funded half-time by the Internet Software Consortium. Rick Adams
funds it via UUNET's non-profit side. Rick did not want to put it under
GNU.
ISC also is now running a root server and in doing this some specific
issues related to running root servers are now being addressed in fixes to
BIND.
DNS version 2 is being discussed. This is due to the limit in the size of
the udp packet. Paul M. and Paul V. are working to say something about
this at the next IETF.
HP, Sun, DEC and SGI are working with Paul to adopt the 4.9.3 BIND once it
is production-ready.
After this comes out, Paul will start working on other problems. One
problem is the size of BIND in core. This change will include using the
Berkeley db routines to feed this from a disk-based database.
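A rough sketch of the idea (my illustration only; the file name and record are
made up): keep the data in a simple on-disk key/value database and pull records
in on demand instead of holding everything in core.

    import dbm

    with dbm.open("zone.db", "c") as db:          # create/open the on-disk database
        db[b"www.example.com"] = b"192.0.2.10"    # store a record on disk

    with dbm.open("zone.db", "r") as db:          # later, look it up without a
        print(db[b"www.example.com"].decode())    # full in-core table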
There will also be some effort toward doing load-balancing better and
perhaps implementing policy features.
What about service issues? Providing name service is a start.
DEC and SGI will be shipping BIND 4.9.3 with their next release.
Paul has talked to Novell, but no one else.... Novell has not been helpful
from the non-Unix side.
RA Project : Merit and ISI with a subcontract with IBM
ISI does the Route Server Development and the RA Futures
Merit does the Routing Registry Databases and Network Management
The Global Routing Registry consists of the RADB, various private routing
registries, RIPE and APNIC. The RADB will be used to generate route server
configurations and potentially router configurations.
1993 -- RIPE 81
1994 -- PRIDE tools
April 1994 -- Merit Routing Registry
September 1994 -- RIPE-181
October 1994 -- RIPE-181 Software implementation
November 1994 -- NSP Policy Registrations/Route Server Configurations
Why use the RADB? Troubleshooting, Connectivity, Stability
The Route Server by ISI with IBM
They facilitate routing information exchange. They don't forward packets.
There are two at each NAP with one AS number. They provide routing
selection and distribution on behalf of clients (NSPs). [Replication of
gated single table use = view] Multiple views to support clients with
dissimilar route selection and/or distribution policies. BGP4 and BGP4 MIB
are supported. RS's AS inserted in AS path, MED is passed unmodified (this
appears controversial). Yakov said that Cisco has a hidden feature to
ignore AS_PATH and trust MED.
The Route Servers are up and running on a testbed and have been tested with
up to 8 peers and 5 views. Target ship date to 3 NAPS is October 21. The
fourth will soon follow.
The Network Management aspect of the RA project uses a Hierarchically
Distributed Network Management Model. At the NAP, only local NM Traffic,
externalizes NAP Problems, SNMPv1 and SNMPv2 are supported. OOB Access
provides seamless PPP backup & console port access. Remote debugging
environment is identical to local debugging environment.
The Centralized Network Management System at Merit polls distributed rovers
for problems, and consolidates the problems into a ROC (Routing Operations
Center) alert screen. It has been operational since August 1st and is operated
by the University of Michigan Network Systems group at the same location as
the previous NSFNET NOC. This group currently provides support for MichNet and
UMNnet. It is expected to provide service to CICnet. Currently, it provides
24/7 human operator coverage.
Everything should be operational by the end of November.
Routing Futures -- Route Server decoupling packet forwarding from routing
information exchange, scalability and modularity. For example, explicit
routing will be supported (with the development of ERP). IPv6 will be
provided. Analysis of the RRDB will be done and a general policy language defined
(backward compatible with RIPE 181). Routing policy consistency and
aggregation will be developed.
Securing the route servers -- All of the usual standard mechanisms are
being applied. Single-use passwords.... mac-layer bridges .... etc....How
do we keep the routes from getting screwed intentionally? Denial of service
attacks are possible.
A design document on the route server will be available via the
RRDB.MERIT.EDU WWW server.
There is a serious concern about synchronization of the route servers and the
routing registries. No solution has been implemented yet. Merit
believes it will do updates at least once a day.
There was mention of using the rwhois from the InterNIC as a possible way
to configure the routing DB by pooling local information.
Conversion from PRDB to RRDB
The PRDB is AS 690 specific, NACRs, twice weekly and AUP constrained.
The RADB has none of these features.
Migration will occur before April of 1995. The PRDB will be temporarily
part of the Global Routing Registry during transition.
Real soon now -- Still send NACR and it will be entered into PRDB and RRDB.
Consistency checking will be more automated. Output for AS 690 will be
compared from both to check consistency. While this is happening, users
will do what they always have. [Check ftp.ra.net for more information.]
There is a lot of concern among the NANOG participants about the correctness
of all the information in the PRDB. Specifically, there appears to be some
inaccuracy (homeas) of the information. ESnet has a special concern about
this.
[Operators should send mail to dsj(a)merit.edu to fix the missing homeas problem.]
Transition Plan:
1. Continue submitting NACRs
2. Start learning RIPE 181
3. Set/Confirm your AS's Maintainer object for future security
4. Switch to using Route Templates (in December)
When it all works --RADB will be source for AS690 configuration, NACRs will
go away, use local registries
RADB to generate AS690 on second week of December.
NACRs to die at the end of that week.
European Operators' Forum Overview -- Peter Lothberg
[I missed this, so this information is from Stan Borinski]
Peter provided some humorous, yet interesting observations on the status of
the Internet in Europe.
To show the tremendous growth occurring in Europe as well, he gave an
example. After being out of capacity on their Stockholm E1 link for some
time, they finally installed another. It took one day for it to get to
capacity! Unfortunately, the E1 costs $700,000/year.
[Back to my notes.... -- Stan Barber]
Proxy Aggregation -- CIDR by Yakov Rekhter
Assumptions -- Need to match the volume of routing information with the
available resources, while providing connectivity service -- on a per
provider basis. Need to match the amount of resource with the utility of
routing information -- on a per provider basis.
But what about "MORE THRUST?" It's not a good answer. Drives the costs up,
doesn't help with complexity of operations, eliminates small providers
Proxy aggregation -- A mechanism to allow aggregation of routing
information originated by sites that are BGP-4 incapable.
Proxy aggregation -- problems -- full consensus must exist for it to work.
Local aggregation -- to reconnect the entity that benefits from the
aggregation and the party that creates the aggregation. Bilateral
agreements would control the disposition of doing local aggregation. Doing
the aggregation at exit is better, but harder than doing it at entry.
Potential Candidates for Local Aggregation -- Longer prefix in presence of
a shorter prefix, Adjacent CIDR Blocks, Aggregation over known holes.
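For the adjacent-CIDR-blocks case, a small sketch (with example prefixes of my
own, not ones discussed at the meeting) shows how adjacent prefixes collapse
into a single aggregate announcement:

    from ipaddress import ip_network, collapse_addresses

    prefixes = [ip_network("198.32.0.0/24"),
                ip_network("198.32.1.0/24"),
                ip_network("198.32.2.0/23")]

    print(list(collapse_addresses(prefixes)))    # -> [IPv4Network('198.32.0.0/22')]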
Routing in the presence of Local Aggregation --
AS and router that did the aggregation is identified via BGP
(AGGREGATOR attribute)
Should register in RRDB
Summary -- adding more memory to routers is not an answer
Regionals should aggregate their own CIDR blocks
An NSP may do local aggregation and register it in the RRDB.
Optimal routing and large scale routing are mutually exclusive.
CIDR is the only known technique to provide scalable routing in the Internet.
Large Internet and the ability of every site to control its own routing are
mutually exclusive.
Yakov also noted that 64Mb routers won't last as long as IPv4.
[More notes from Stan Borinski, while I was out again.]
Ameritech NAP Labs by Andy Schmidt
Ameritech performed tests with RFC 1323 kernel modifications on Sun Sparc
machines. A window of 32k was enabled at line speed. The AT&T switch used
by Ameritech has buffers that are orders of magnitude larger than other
vendors. All studies discussed showed bigger buffers were the key to
realizing ATM's performance capabilities.
[Back to my notes -- Stan Barber]
Sprint Network Reengineering -- Sean Doran
T-3 Network with sites in DC, Atlanta, Ft. Worth and Stockton currently.
Will be expanding to Seattle, Chicago and Sprint NAP in the next several
months. ICM uses this network for transit from one coast to the other. They
expect to create a separate ICM transit network early next year.
Next NANOG will be at NCAR in February.
PacBell NAP Status--Frank Liu
The Switch is a Newbridge 36-150.
NSFNET/ANS connected via Hayward today.
MCINET via Hayward today.
PB Labs via Concord today.
Sprintlink connected via San Jose (not yet).
NETCOM will connect via Santa Clara in the next month.
APEX Global Information Services (based in Chicago) will connect via Santa
Clara, but not yet.
The Packet Clearing House (consortium) for small providers connected via
Frame Relay to PB NAP. They will connect via one router to the NAP. It is
being led by Electric City's Chris Allen.
CIX connections are also in the cloud, but not in the same community yet.
Testing done by Bellcore and PB.
[TTCP was used for testing. The data was put up and removed quickly, so I
did lose some in taking notes.]
One source (TAXI/Sonet) -> One sink
Two Sources (TAXI/Sonet) -> One Sink
Five Sources (ethernet connected) ->One Sink (ethernet connected)
Equipment issues -- DSU HSSI clock mismatch with the data rate (37 Mbps HSSI
clock rate versus 44 Mbps data rate versus a theoretical 52 Mbps). Sink devices
do not have enough processing power to deal with large numbers of 512 byte
packets. Also, there were MTU mismatch issues between the SunOS (512 bytes)
machines used and the Solaris (536 bytes) machines used.
One Source -> One Sink

  MSS    Window   Throughput (out of 40 Mb/sec)
  4470   51000    33.6
  4470   25000    22.33

Two Sources -> One Sink

  MSS    Window   Throughput (out of 40 Mb/sec)
  4470   18000    33.17   (.05% cell loss, .04% packet retrans)
  1500   51000    15.41   (.69% cell loss, 2.76% packet retrans)
Conclusions
Maximum throughput is 33.6 Mbps for the 1:1 connection.
Maximum throughput will be higher when the DSU HSSI clock and data-rate
mismatch is corrected.
Cell loss rate is low (.02% -- .69%).
Throughput degraded when the TCP window size is greater than 13000 bytes.
Large switch buffers and router traffic shaping are needed.
[The results appear to show TCP backing-off strategy engaging.]
Future Service Plan of the SF-NAP-- Chin Yuan
Currently, the NAP does best effort with RFC 1490 encapsulation.
March 1995 -- Variable Bit Rate, Sub-Rate Tariff (4,10,16,25,34 and 40Mbps
on 51, 100 and 140Mbps on OC3c). At CPE: Static Traffic Shaping and RFC
1483 and 1577 support [Traffic Shaping to be supported by Cisco later this
year in API card for both OC3c and T3.]
June 1995 -- Support for DS1 ATM (DXI and UNI at 128, 384 kbps and 1.4Mbps)
1996 or later -- Available Bit Rate and SVCs. At CPE: Dynamic Traffic Shaping
Notes on Variable Bit Rate:
Sustainable Cell Rate(SCR) and Maximum Burst Size (MBS)---
* Traffic Policing
* Aggregated SCR is no greater than the line rate
* MBS = 32, 100, 200 cells (Negotiable if > 200 cells)
Peak Cell Rate (possible)
* PCR <=line rate
Traffic shaping will be required for the more advanced services. Available
Bit Rate will require feedback to the router.
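As a rough illustration of SCR/MBS policing (a simplified token-bucket sketch
of my own, not the switch's actual algorithm): tokens accrue at the sustainable
cell rate and the bucket holds at most one maximum burst size worth of cells,
so a burst of up to MBS cells passes but a sustained rate above SCR does not.

    class CellPolicer:
        def __init__(self, scr_cells_per_sec, mbs_cells):
            self.rate = float(scr_cells_per_sec)
            self.depth = float(mbs_cells)
            self.tokens = float(mbs_cells)
            self.last = 0.0

        def conforming(self, now):
            # replenish tokens for the time elapsed since the previous cell
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True     # cell conforms to the SCR/MBS contract
            return False        # non-conforming cell: tag or discard it

    policer = CellPolicer(scr_cells_per_sec=10.0, mbs_cells=32)
    burst = [policer.conforming(0.0) for _ in range(40)]    # 40 back-to-back cells
    print(burst.count(True))                                # -> 32, one MBS worth passes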
ANS on performance --- Curtis Villamizar
There are two problems: aggregation of lower-speed TCP flows, and support for
high-speed elastic supercomputer applications.
RFC 1191 is very important, as is RFC 1323, for these problems to be addressed.
RFC 1191 -- Path MTU discovery
RFC 1323 -- High Performance Extensions for TCP
The work that was done -- previous work showed that the top speed for TCP was 30 Mb/s.
The new work -- TCP Single Flow, TCP Multiple Flow, using TCP RED
modifications (more Van Jacobson magic!) to handle multi-size windows.
Environment -- two different DS3 paths (NY->MICH: 20msec; NY->TEXAS->MICH:
68msec), four different versions of the RS6000 router software and Indy/SCs
Conditions -- Two background conditions (no background traffic, reverse TCP
flow intended to achieve 70-80% utilization)
Differing numbers of TCP flows.
Results are available on-line via http. Temporarily it is located at:
http://tweedledee.ans.net:8001:/
It will be available at rrdb.merit.edu more permanently.
It is important that vendors support RED and the two RFCs previously
mentioned to handle this problem. Also, Curtis believes that the results
presented by the NAP operators have little validity because there is no
delay as a component of their tests.
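Curtis's point about delay can be made concrete with a little arithmetic (the
numbers below are illustrative: an unchannelized DS3 with the roughly 3 msec
delay seen in the NAP lab tests versus a 70 msec cross-continent path). The TCP
window, and the buffering at the bottleneck, must cover the delay*bandwidth
product, which is nearly zero in a no-delay test.

    def delay_bandwidth_product(bandwidth_bps, rtt_seconds):
        # bytes that must be in flight (and bufferable) to keep the pipe full
        return bandwidth_bps * rtt_seconds / 8

    DS3 = 45e6                                   # unchannelized DS3, bits per second
    print(delay_bandwidth_product(DS3, 0.003))   # ~17 kilobytes for a 3 msec path
    print(delay_bandwidth_product(DS3, 0.070))   # ~394 kilobytes for a 70 msec path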
ATM -- What Tim Salo wants from ATM....
[I ran out of alertness, so I apologize to Tim for having extremely sketchy
notes on this talk.]
MAGIC -- Gigabit TestBed
Currently Local Area ATM switches over SONET. Mostly FORE switches.
LAN encapsulation (ATM Forum) versus RFC 1537
Stan | Academ Consulting Services |internet: sob(a)academ.com
Olan | For more info on academ, see this |uucp: bcm!academ!sob
Barber | URL- http://www.academ.com/academ |Opinions expressed are only mine.
Curtis,
Thanks for helping Stan build more comprehensive notes on what was, for me
at least, the most interesting RegionalTechs/NANOG meeting in recent memory.
I disagree with one statement you make several times...
(In the context of the Ameritech presentation)
> ... It was pointed out that with no delay in the tests,
> the delay bandwidth product of the TCP flows was near zero and it was
> asserted (by me actually) that results from such testing is not useful
> since real TCP flows going through a NAP have considerable delays.
I disagree that the lack of delay makes the tests 'not useful'. Yes, the
tests were much weaker than tests with delay. But such tests could still be
useful, since they demonstrate the presence or absence of problems other than
the problems caused by the bursty nature of wide-area high-speed TCP flows.
Actually, my criticism of the Ameritech presentation was that there was an
insufficiently clear statement of what they *had* learned. There were
allusions to problems with Cisco and workstations, but they were not specific
enough to be as helpful as they might have been.
>
> > PacBell NAP Status--Frank Liu
> >
> > [ ... ]
> >
> Again, no delay was added. Measured delay (ping time) was said to be
> 3 msec (presumably due to switching or slow host response). Again -
> It was pointed out that with no delay in the tests, the delay
> bandwidth product of the TCP flows was near zero and asserted that
> results from such testing is not useful.
>
Again, I disagree. Even in the absence of delay, some problems, including the
Kentrox bit-rate mismatch, could be observed, as could the loads that caused
packet loss.
> > ANS on performance --- Curtis Vallamizar
>
> The difficulty in carrying TCP traffic is proportional to the delay
> bandwidth product of the traffic, not just the bandwidth. Adding
> delay makes the potential for bursts sustained over a longer period.
> Real networks have delay. US cross continent delay is 70 msec.
>
> ANSNET results were given using improved software which improved
> buffer capacity, intentionally crippled software (artificially limited
> buffering), and software which included Random Early Detection (RED -
> described in a IEEE TON paper by Floyd and Jacobson). Sustained
> goodput rates of up to 40-41 Mb/s were acheived using ttcp and 1-8 TCP
> flows. Some pathelogical cases were demonstrated in which much worse
> performace was acheived. These case mostly involve too little
> buffering at the congestion point (intentionally crippled router code
> was used to demostrate this) or using a single TCP flow and setting
> the TCP window much too large (3-5 times the delay bandwidth product).
> The latter pathelogic case can be avoided if the routers implement
> RED. The conclusions were: 1) routers need buffer capacity as large
> as the delay bandwidth product and 2) routers should impement RED.
>
> Only a 20 msec delay was added to the prototype NAP testing. Results
> with the prototype NAP and 20 msec delay were very poor compared to
> the performance of unchannelized DS3. Prototype NAP testing results
> were poor compared to Ameritec and Pacbell results due to the more
> realistic delay bandwidth product. Worse results can be expected with
> a 70 msec delay and may be better indications of actual performance
> when forwarding real traffic. More testing is needed after fixes to
> ADSUs are applied. A sufficient bottleneck can not be created at the
> switch until ADSU problems are addressed.
>
> There was some discussion (at various times during the presentations)
> of what this all means for the NAPs. If I may summarize - On the
> positive side the Ameritec switch has more buffering than the Fore
> used in the Bellcore prototype NAP. On the negative side, Ameritec
> didn't include any delay in their testing. NAP testing results (both
> positive results from Amertic, mixed results from PacBel and negative
> results from ANS) are inconclusive so far.
Right. I really hope that everyone at the NANOG meeting took your results to
heart. It will be very important for the success of the multi-backbone
NSFnet world.
I agree that all three sets of tests should currently be regarded as partial
and, to some extent, inconclusive as you say. I hope that all those involved,
including BellCore, Ameritech, PacBell, and ANS, continue with these tests and
with the sharing of the testing results.
One quibble: pathelogical should be spelled 'pathological'.
The root word is pathos, from which we also get the word sympathy. I guess
that those engaged in testing pathological ATM NAP situations should have more
sympathy for each other.
Cheers,
-- Guy
>> John Scudder then discussed some modeling he and Sue Hares have done
>> on the projected load at the NAPs. The basic conclusions are that the
>> FDDI technology (at Sprint) will be saturated sometime next year...
>MarkFedor/ColeLibby from PSI said there was a "quiet" admission that the old
>methodology was already "approved" for the SPRINT NAP.
This technology works *now* (Sprint NAP is already in production use)
and there is a clear path for upgrading it to practically unlimited
aggregate bandwidths:
1) replace FDDI concentrators with FDDI switch
(this gives us up to 1Gbps of total bandwidth
while limiting access to one full DS-3)
2) replace single FDDI with multiple FDDIs (realistically
with 2 or 3) -- allowing access at OC-3 speeds
3) build point-to-point bypasses between peers with
OC-3s or FDDI to take the traffic out of shared
medium.
We will hit limitations of the present/then-future routing technology
long before we'll exhaust the possibilities to increase the
NAP aggregate and access bandwidth by cheap incremental upgrades.
That's the whole point of Sprint NAP architecture. ATM/SMDS/Flame Delay
do not get even close to what we achieve (and ATM is not useful yet).
Load balancing between NAPs is necessary, but for entirely different
reasons -- first, because of capacity limitation on nation-wide
backbones and, second, to reduce latencies on cross-ISP traffic.
--vadim
Stan,
It is very helpful to have these notes from the meeting. Thanks. --Mark
I am anticipating that your slides will be put up on prdb.merit.edu, so I
don't think I will need to get your slides since they will have them
as well as my notes on their server.
I was going to follow up my notes (which I was trying to keep opinion neutral)
with another note with some of the various opinions expressed about what
was said. You have done part of that for me with your analysis of the
NAP talks.
Thanks for your comments.
--
Stan | Academ Consulting Services |internet: sob(a)academ.com
Olan | For more info on academ, see this |uucp: bcm!academ!sob
Barber | URL- http://www.academ.com/academ |Opinions expressed are only mine.
Please subscribe me to the list nanog
hchen(a)aimnet.com
Hong Chen, Ph.D.
General Manager
Aimnet Information Services
408-257-0900
Here are my notes from the recent NANOG meeting. Please note that any
mistakes are mine. Corrections, providing missing information, or further
exposition of
any of the information here will be gratefully accepted and added to this
document which will be available via anonymous ftp later this month.
----------------------------------------------------------------------------
NANOG
Notes by Stan Barber <sob(a)academ.com>
[Please note that any errors are mine, and I'd appreciate corrections being
forwarded to me.]
Elise Gerich opened the meeting with Merit's current understanding of the
state of the transition. THENET, CERFNET and MICHNET have expressed
specific dates for transition.
John Scudder then discussed some modelling he and Sue Hares have done on
the projected load at the NAPs. The basic conclusions are that the FDDI
technology (at Sprint) will be saturated sometime next year and that
load-balancing strategies among NSPs across the NAPS is imperative for the
long term viability of the new architecture. John also expressed concern
over the lack of expressed policy for the collection of statistical data by
the NAP operators. All of the NAP operator are present and stated that they
will collect data, but that there are serious and open questions concerning
the privacy of that data and how to publish it appropriately. John said
that collecting the data was most important. Without the data, there is no
source information from which publication become possible. He said that
MERIT/NSFNET had already tackled these issues. Maybe the NAP operators can
use this previous work as a model to develop their own policies for
publication.
After the break, Paul Vixie discussed the current status of the DNS and
BIND. Specifically, he discusses DNS security. There are two reasons why
DNS are not secure. There are two papers on this topic and they are both in
the current BIND kit. So the information is freely available.
Consider the case of telnetting across the Internet and getting what
appears to be your machine's login banner. Doing a double check
(host->address, then address->host) will help eliminate this problem.
hosts.equiv and .rhosts are also sources of problems. Polluting the cache
is a real problem. Doing UDP flooding is another problem. CERT says that
doing rlogin is bad, but that does not solve the cache pollution problem.
How to defend?
1. Validate the packets returned in a response to the query. Routers should
drop UDP packets on which the source address doesn't match what it should be.
(e.g. a udp packet comes in on a WAN link that should have come in via an
ethernet interface).
2. There are a number of static validations of packet format that can be
done. Adding some kind of cryptographic information to the DNS would help.
Unfortunately, this moves very slowly because there are a number of strong
conflicting opinions.
What is being done?
The current BETA of BIND has almost everything fixed that can be fixed
without a new protocol. Versions prior to 4.9 are no longer supported.
Paul is funded half-time by the Internet Software Consortium. Rick Adams
funds it via UUNET's non-profit side. Rick did not want to put it under
GNU.
DNS version 2 is being discussed. This is due to the limit in the size of
the udp packet. Paul M. and Paul V. are working to say something about
this at the next IETF.
HP, Sun, DEC and SGI are working with Paul to adopt the 4.9.3 BIND once it
is production-ready.
After this comes out, Paul will start working on other problems. One
problem is the size of BIND in core. This change will include using the
Berkeley db routing to feed this from a disk-based database.
There will also be some effort for helping doing load-balancing better.
What about service issues? Providing name service is a start.
DEC and SGI will be shipping BIND 4.9.3 with their next release.
Paul has talked to Novell, but no one else.... Novell has not been helpful
from the non-Unix side.
RA Project : Merit and ISI with a subcontract with IBM
ISI does the Route Server Development and the RA Futures
Merit does the Routing Registry Databases and Network Management
The Global Routing Registry consists of the RADB, various private routing
registries, RIPE and APNIC. The RADB will be used to generate route server
configurations and potentially router configurations.
1993 -- RIPE 81
1994 -- PRIDE tools
April 1994 -- Merit Routing Registry
September 1994 -- RIPE-181
October 1994 -- RIPE-181 Software implementation
November 1994 -- NSP Policy Registrations/Route Server Configurations
Why use the RADB? Troubleshooting, Connectivity, Stability
The Route Server by ISI with IBM
They facilitate routing information exchange. They don't forward packets.
There are two at each NAP with one AS number. They provide routing
selection and distribution on behalf of clients (NSPs). [Replication of
gated single table use = view] Multiple views to support clients with
dissimilar route selection and/or distribution policies. BGP4 and BGP4 MIB
are supported. RS's AS inserted in AS path, MED is passed unmodified (this
appears controversial).
The Route Servers are up and running on a testbed and have been tested with
up to 8 peers and 5 views. Target ship date to 3 NAPS is October 21. The
fourth will soon follow.
The Network Management aspect of the RA project uses a Hierarchically
Distributed Network Management Model. At the NAP, only local NM Traffic,
externalizes NAP Problems, SNMPv1 and SNMPv2 are supported. OOB Access
provides seamless PPP backup & console port access. Remote debugging
environment is identical to local debugging environment.
The Centralized Network Management System at Merit polls distributed rovers
for problems, consolidates the problems into ROC alert screen. It is
operational on August 1st which is operated by the University of Michigan
Network Systems at the same location as the previous NSFNET NOC. 24/7 human
operator coverage.
Everything should be operational by the end of November.
Routing Futures -- Route Server decoupling packet forwarding from routing
information exchange, scalability and modularity. For example, explicit
routing will be supported (with the development of ERP). IPv6 will be
provided. Doing analysis of RRDB and define a general policy language
(backward compatible with RIPE 181). Routing policy consistency and
aggregation will be developed.
Securing the route servers -- All of the usual standard mechanisms are
being applied. Single-use passwords.... mac-layer bridges .... etc....How
do we keep the routes from getting screwed intentionally? Denial of service
attacks are possible.
A design document on the route server will be available via the
RRDB.MERIT.EDU WWW server.
There is a serious concern about synchronization of the route servers and the
routing registries. No solution has been implemented currently. Merit
believes that will do updates at least once a day.
Conversion from PRDB to RRDB
The PRDB is AS 690 specific, NACRs, twice weekly and AUP constrained.
The RADB has none of these features.
Migration will occur before April of 1995. The PRDB will be temporarily
part of the Global Routing Registry during transition.
Real soon now -- Still send NACR and it will be entered into PRDB and RRDB.
Consistency checking will be more automated. Output for AS 690 will be
compared from both to check consistency. While this is happening, users
will do what they always have. [Check ftp.ra.net for more information.]
There is a lot of concern among the NANOG participants about the correctness
of all the information in the PRDB. Specifically, there appears to be some
inaccuracy (homeas) of the information. ESnet has a special concern about
this.
[dsj(a)merit.edu to fix the missing homeas problem]
Transition Plan:
1. Continue submitting NACRs
2. Start learning RIPE 181
3. Set/Confirm your AS's Maintainer object for future security
4. Switch to using Route Templates (in December)
When it all works --RADB will be source for AS690 configuration, NACRs will
go away, use local registries
RADB to generate AS690 on second week of December.
NACRs to die at the end of that week.
Proxy Aggregation -- CIDR by Yakov Rekhter
Assumptions -- Need to match the volume of routing information with the
available resources, while providing connectivity server -- on a per
provider basis. Need to match the amount of resource with the utility of
routing information -- on a per provider basis.
But what about "MORE THRUST?" It's not a good answer. Drives the costs up,
doesn't help with complexity of operations, eliminates small providers
Proxy aggregation -- A mechanism to allow aggregation of routing
information originated by sites that are BGP-4 incapable.
Proxy aggregation -- problems -- full consensus must exist for it to work.
Local aggregation -- to reconnect the entity that benefits from the
aggregation and the party that creates the aggregation. Bilateral
agreements would control the disposition of doing local aggregation.
Potential Candidates for Local Aggregation -- Longer prefix in presence of
a shorter prefix, Adjacent CIDR Blocks, Aggregation over known holes.
Routing in the presence of Local Aggregation --
AS and router that did the aggregation is identified via BGP
(AGGREGATOR attribute)
Should register in RRDB
Summary -- adding more memory to routers is not an answer
Regionals should aggregate their own CIDR blocks
An NSP may do local aggregation and register it in the RRDB.
Optimal routing and large scale routing are mutually exclusive.
CIDR is the only known technique to provide scalable routing in the Internet.
Large Internet and the ability of every site to control its own routing are
mutually exclusive.
Sprint Network Reengineering
T-3 Network with sites in DC, Atlanta, Ft.Worth and Stockton currently.
Will be expanding to Seattle, Chicago and Sprint NAP in the next several
months. ICM uses this network for transit from one coast to the other. They
expect to create a separate ICM transit network early next year.
Next NANOG will be at NCAR in February.
PacBell NAP Status--Frank Liu
The Switch is a Newbridge 36-150.
NSFNET/ANS connected via Hayward today.
MCINET via Hayward today.
PB Labs via Concord today.
Sprintlink connected via San Jose (not yet).
NETCOM connected via Santa Clara in the next Month.
APEX Global Information Services (based in Chicago) will connect via Santa
Clara, but not yet.
The Packet Clearing House (consortium) for small providers connected via
Frame Relay to PB NAP. They will connect via one router to the NAP. It is
being led by Electric City's Chris Allen.
CIX connections are also in the cloud, but not in the same community yet.
Testing done by Bellcore and PB.
[TTCP was used for testing. The data was put up and removed quickly, so I
did lose some in taking notes.]
One source (TAXI/Sonet) -> One sink
Two Sources (TAXI/Sonet) -> One Sink
Five Sources (ethernet connected) ->One Sink (ethernet connected)
Equipment issues -- DSU HSSI clock mismatch with the data rate. Sink
devices do not have enough processing power to deal with large numbers of
512 byte packets.
One Source-> One Sink
MSS Window Throughput (out of 40Mb/sec)
4470 51000 33.6
4470 25000 22.33
Two Source -> One Sink
4470 18000 33.17 (.05% cell loss, .04%
packet restrans)
1500 51000 15.41 (.69% cell loss, 2.76%
packet restrans)
Conclusions
Maximum throughput is 33.6 Mbps for the 1:1 connection.
Maximum throughput will be higher when the DSU HSSI clock and data-rate
mismatch is corrected.
Cell loss rate is low (.02% -- .69%).
Throughput degraded when the TCP window size is greater than 13000 bytes.
Large switch buffers and router traffic shaping are needed.
[The results appear to show TCP backing-off strategy engaging.]
Future Service Plan of the SF-NAP-- Chin Yuan
Currently, the NAP does best effort with RFC 1490 encapsulation.
March 1995 -- Variable Bit Rate, Sub-Rate Tariff (4,10,16,25,34 and 40Mbps
on 51, 100 and 140Mbps on OC3c). At CPE: Static Traffic Shaping and RFC
1483 and 1577 support [Traffic Shaping to be supported by Cisco later this
year in API card for both OC3c and T3.]
June 1995 -- Support for DS1 ATM (DXI and UNI at 128, 384 kbps and 1.4Mbps)
1996 or later -- Available Bit Rate and SVCs. At CPE: Dynamic Traffic Shaping
Notes on Variable Bit Rate:
Sustainable Cell Rate(SCR) and Maximum Burst Size (MBS)---
* Traffic Policing
* Aggregated SCR is no greater than the line rate
* MBS = 32, 100, 200 cells (Negotiable if > 200 cells)
Peak Cell Rate (possible)
* PCR <=line rate
Traffic shaping will be required for the more advanced services. Available
Bit Rate will require feedback to the router.
ANS on performance --- Curtis Villamizar
There are two problems: aggregation of lower-speed TCP flows, support for
high speed elastic supercomputer application.
RFC 1191 is very important as is RFC-1323 for these problems to be addressed.
The work that was done -- previous work showed that top speed for TCP was 30Mbs.
The new work -- TCP Single Flow, TCP Multiple Flow
Environment -- two different DS3 paths (NY->MICH: 20msec; NY->TEXAS->MICH:
68msec), four different versions of the RS6000 router software and Indy/SCs
Conditions -- Two background conditions (no background traffic, reverse TCP
flow intended to achieve 70-80% utilization)
Differing numbers of TCP flows.
Results are available on-line via http. Temporarily it is located at:
http://tweedledee.ans.net:8001:/
It will be on line rrdb.merit.edu more permanently.
ATM -- What Tim Salo wants from ATM....
[I ran out of alertness, so I apologize to Tim for having extremely sketchy
notes on this talk.]
MAGIC -- Gigabit TestBed
Currently Local Area ATM switches over SONET. Mostly FORE switches.
LAN encapsulation (ATM Forum) versus RFC 1537
Stan Barber sob(a)academ.com
I have added you to the nanog mail list.
Pam Ciesla
------------------------------------------------------------------------------
CIDR Progress Report: 8929 Nets, 99 ASs, 1859 Aggregates. Details below.
------------------------------------------------------------------------------
The following changes have been made to the NSFNET policy-based routing
database and will be installed on the backbone by 08:00 EDT:
                      Total =  As +   Bs +    Cs + Aggs
Registered Networks   41261 =  30   5056   34295   1880
Configured Networks   38763 =  30   5001   31872   1860
Added Networks          187 =   0      7     157     23
Deleted Networks         10 =   0      0       9      1
IP address Net name Country Priority:AS
---------- -------- ------- -----------
133.10/16 TMIT-NET C:JP 1:1240 2:1800
133.168/16 PRUGNET C:JP 1:701(136) 2:701(134)
138.184/16 CISCO-BLOCK30 C:US 1:19 2:568
144.254/16 CISCO-SHONET C:US 1:1800 2:1240 3:1133 4:1674
157.166.161/24 TECHWOOD C:US 1:1322
157.166.204/24 TECHWOOD C:US 1:1322
161.221/16 BARNES-NOBLE C:US 1:2551(229) 2:2551(136) 3:1321
163.159/16 SI-SDK-NET C:SI 1:701(136) 2:701(134) 3:1800
168.160.0/24 SSTC-ISTIC C:CN 1:3429 2:293(144) 3:293(145)
168.200/16 UHCOLORADO C:US 1:209 2:210
192.31.74/24 GSD-PCNET C:US 1:1740
192.41.10/23 ICON-CNETS C:US 1:1321 2:2551(229) 3:2551(136)
192.100.164/24 UQROO C:MX 1:278 2:1328
192.106.148/24 DIAKRON-NET C:IT 1:701(136) 2:701(134) 3:1800
192.107.91/24 ENEA-DISP-IP1 C:IT 1:293(144) 2:293(145) 3:1133 4:1674
192.190.208/21 SW-NET C:AU 1:372 2:297
192.197.172/24 BCTEL.NET C:CA 1:1331
193.65.102/24 INFOSTO1 C:FI 1:701(136) 2:701(134) 3:1800
193.65.103/24 INFOSTO2 C:FI 1:701(136) 2:701(134) 3:1800
193.65.108/24 DECUS1 C:FI 1:701(136) 2:701(134) 3:1800
193.65.109/24 BITFIELD1 C:FI 1:701(136) 2:701(134) 3:1800
193.65.110/24 AMI1 C:FI 1:701(136) 2:701(134) 3:1800
193.65.111/24 AMI2 C:FI 1:701(136) 2:701(134) 3:1800
193.96.238/24 MUS C:DE 1:701(136) 2:701(134) 3:1800
193.101.4/24 MUS-DE C:DE 1:701(136) 2:701(134) 3:1800
193.101.5/24 MUS-DE C:DE 1:701(136) 2:701(134) 3:1800
193.104.162/24 FR-CFM C:FR 1:701(136) 2:701(134) 3:1800
193.104.220/24 FR-RENAULT-DN C:FR 1:701(136) 2:701(134) 3:1800
193.105.146/24 FR-SOC-GEN17 C:FR 1:701(136) 2:701(134) 3:1800
193.185.109/24 AKH-TRE-NET C:FI 1:701(136) 2:701(134) 3:1800
193.202.75/24 INTEROP-EU2 C:FR 1:1800 2:1240 3:1133 4:1674
193.242.94/24 ECSSR C:AE 1:701(136) 2:701(134) 3:1800
194.67.22/24 SUMMIT C:RU 1:1240 2:1800 3:1239
194.98.1/24 IWAY-NET C:FR 1:1800 2:1240 3:1133 4:1674
194.98.2/24 IWAY-NET C:FR 1:1800 2:1240 3:1133 4:1674
194.98.3/24 IWAY-NET C:FR 1:1800 2:1240 3:1133 4:1674
194.98.4/24 IWAY-NET C:FR 1:1800 2:1240 3:1133 4:1674
194.98.5/24 IWAY-NET C:FR 1:1800 2:1240 3:1133 4:1674
194.98.6/24 IWAY-NET C:FR 1:1800 2:1240 3:1133 4:1674
194.98.7/24 IWAY-NET C:FR 1:1800 2:1240 3:1133 4:1674
198.53.180/24 NETBLK-FONOROLA C:CA 1:2493(35) 2:2493(91)
198.53.181/24 NETBLK-FONOROLA C:CA 1:2493(35) 2:2493(91)
198.53.182/24 NETBLK-FONOROLA C:CA 1:2493(35) 2:2493(91)
198.53.183/24 NETBLK-FONOROLA C:CA 1:2493(35) 2:2493(91)
198.77.131/24(U) FITTEST-CAL-MD-US C:US 1:86 2:279
198.77.136/24(U) KEVRIC-MO-MD-US C:US 1:86 2:279
198.79.122/24(U) IAT-NORCROSS-GA-US-2 C:US 1:279 2:86
198.139.123/24 NETBLK-RAYTHEON C:US 1:97
198.139.124/24 NETBLK-RAYTHEON C:US 1:97
198.160.147/24 NETBLK-NETBLK-MCS-C C:US 1:1239 2:1800 3:1240 4:3830
198.180.191/24 CMIWC-CMI C:US 1:2149 2:174
198.180.215/24 CMIINT-CMI C:US 1:2149 2:174
198.184.184/24 PRNEWSWIRE C:US 1:1321 2:2386
198.186.167/24 CMIPB-CMI C:US 1:2149 2:174
198.224.128/24 NETBLK-CDPD-CBLK2 C:US 1:1324(32) 2:1324(35)
198.224.131/24 NETBLK-CDPD-CBLK2 C:US 1:1324(32) 2:1324(35)
199.3.10/23 NETBLK-NETBLK-MCS-C C:US 1:1239 2:1800 3:1240 4:3830
199.3.32/19 NETBLK-NETBLK-MCS-C C:US 1:1239 2:1800 3:1240 4:3830
199.3.160/19 NETBLK-NETBLK-MCS-C C:US 1:1239 2:1800 3:1240 4:3830
199.75.48/24(U) FERC-FED-US C:US 1:86 2:279
199.75.50/24(U) USTA-ORG C:US 1:86 2:279
199.75.51/24(U) SERDP-GOV C:US 1:86 2:279
199.75.96/21(U) OAS-ORG C:US 1:86 2:279
199.75.104/22(U) OAS-ORG C:US 1:86 2:279
199.77.0/19(U) SANTAROSA-K12-FL-US C:US 1:279 2:86
199.78.118/23(U) SRRC-USDA-GOV C:US 1:279 2:86
199.78.124/24(U) RICHLAND-LIB-SC-US C:US 1:279 2:86
199.78.240/21(U) BARRYNET-2 C:US 1:279 2:86
199.84.52/24 SYNAPSE2-DOM C:CA 1:2493(35) 2:2493(91)
199.84.53/24 SYNAPSE2-DOM C:CA 1:2493(35) 2:2493(91)
199.84.54/24 SYNAPSE2-DOM C:CA 1:2493(35) 2:2493(91)
199.92.204/24 TRELLIS C:US 1:560 2:701(136) 3:701(134)
199.94.144/24 VTLAW-1 C:US 1:560 2:701(136) 3:701(134)
199.94.145/24 VTLAW-2 C:US 1:560 2:701(136) 3:701(134)
199.103.136/23 NET-TERRANET C:US 1:1240 2:1800 3:1239
199.103.210/24 NET-TERRANET C:US 1:1239 2:1800 3:1240
199.164.179/24 TSCS-CBLK C:US 1:1800 2:1240 3:1239
199.164.181/24 TSCS-CBLK C:US 1:1800 2:1240 3:1239
199.164.182/24 TSCS-CBLK C:US 1:1800 2:1240 3:1239
199.172.5/24 APO C:US 1:701(136) 2:701(134)
199.172.144/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.145/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.146/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.147/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.148/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.149/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.150/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.151/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.152/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.153/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.154/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.155/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.156/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.157/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.158/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.159/24 CYCLEDATA-NETBLOCK C:US 1:701(136) 2:701(134)
199.172.165/24 YOURNET-NET C:US 1:701(136) 2:701(134)
199.172.176/24 GORACLE-NET C:US 1:701(136) 2:701(134)
199.173.0/24 INTAC64 C:US 1:701(136) 2:701(134)
199.173.13/24 INTAC64 C:US 1:701(136) 2:701(134)
199.173.153/24 NET-GULFNET C:KW 1:701(136) 2:701(134)
199.173.156/24 FRUS-NET C:US 1:701(136) 2:701(134)
199.173.157/24 VOCAL-NET C:US 1:701(136) 2:701(134)
199.173.176/24 IXL-NET C:US 1:701(136) 2:701(134)
199.173.177/24 IXL-NET C:US 1:701(136) 2:701(134)
199.173.224/24 NET-SSA-NET C:US 1:701(136) 2:701(134)
199.173.225/24 NET-SSA-NET C:US 1:701(136) 2:701(134)
199.173.226/24 NET-SSA-NET C:US 1:701(136) 2:701(134)
199.173.227/24 NET-SSA-NET C:US 1:701(136) 2:701(134)
199.173.228/24 NET-SSA-NET C:US 1:701(136) 2:701(134)
199.173.229/24 NET-SSA-NET C:US 1:701(136) 2:701(134)
199.173.230/24 NET-SSA-NET C:US 1:701(136) 2:701(134)
199.173.231/24 NET-SSA-NET C:US 1:701(136) 2:701(134)
199.174.43/24 NET-SMMC C:US 1:1240 2:1800 3:1239
199.174.44/24 NET-SMMC C:US 1:1240 2:1800 3:1239
199.175.103/24 NCP.BC.CA C:CA 1:1331
199.222.80/24 DIALOG-NETS C:US 1:1321
199.222.81/24 DIALOG-NETS C:US 1:1321
199.222.82/24 DIALOG-NETS C:US 1:1321
199.222.83/24 DIALOG-NETS C:US 1:1321
199.222.84/24 DIALOG-NETS C:US 1:1321
199.222.85/24 DIALOG-NETS C:US 1:1321
199.222.86/24 DIALOG-NETS C:US 1:1321
199.222.87/24 DIALOG-NETS C:US 1:1321
199.242.69/24 NET-OSCA C:US 1:1800 2:1240
199.245.73/24 TIMS-NET C:US 1:3830
199.254.144/24 SPL-1 C:US 1:2149 2:174
200.9.177/24 NETBLK-RED-IDESSA C:CL 1:86 2:279
200.9.178/24 NETBLK-RED-IDESSA C:CL 1:86 2:279
200.9.179/24 NETBLK-RED-IDESSA C:CL 1:86 2:279
200.12.130/24 UMAYOR C:CL 1:86 2:279
202.0.75/24 NET-HACK-N-SPLAT1 C:AU 1:372 2:297
202.38.138/24 ITP-CN C:CN 1:3429 2:293(144) 3:293(145)
202.240.65/24 ASIJNET C:JP 1:1240 2:1800
202.240.70/24 INFONET2 C:JP 1:1240 2:1800
202.240.108/24 MWCNET2 C:JP 1:1240 2:1800
202.240.109/24 MWCNET2 C:JP 1:1240 2:1800
202.251.180/22 FUKUIMEDINET1 C:JP 1:1240 2:1800
202.253.108/22 THICNET C:JP 1:1240 2:1800
204.4.107/24 PSINET-C4 C:US 1:2149 2:174
204.4.232/24 PSINET-C4 C:US 1:2149 2:174
204.4.233/24 PSINET-C5 C:US 1:2149 2:174
204.4.234/24 PSINET-C6 C:US 1:2149 2:174
204.4.235/24 PSINET-C7 C:US 1:2149 2:174
204.4.236/24 PSINET-C8 C:US 1:2149 2:174
204.4.237/24 PSINET-C9 C:US 1:2149 2:174
204.4.238/24 PSINET-C10 C:US 1:2149 2:174
204.4.239/24 PSINET-C11 C:US 1:2149 2:174
204.4.240/24 PSINET-C12 C:US 1:2149 2:174
204.4.241/24 PSINET-C13 C:US 1:2149 2:174
204.4.242/24 PSINET-C14 C:US 1:2149 2:174
204.4.243/24 PSINET-C15 C:US 1:2149 2:174
204.4.244/24 PSINET-C16 C:US 1:2149 2:174
204.4.245/24 PSINET-C17 C:US 1:2149 2:174
204.4.246/24 PSINET-C18 C:US 1:2149 2:174
204.4.247/24 PSINET-C19 C:US 1:2149 2:174
204.50.17/24 NET-WORLDTEL C:CA 1:1240 2:1800 3:1239
204.50.18/24 NET-WORLDTEL C:CA 1:1240 2:1800 3:1239
204.69.218/24 TIMS-NET C:US 1:3830
204.69.219/24 TIMS-NET C:US 1:3830
204.69.220/24 TIMS-NET C:US 1:3830
204.77.0/20 NET-UWSI C:US 1:1240 2:1800 3:1239
204.87.0/19 FERC C:US 1:86 2:279
204.95.0/18 NETBLK-NETBLK-MCS-C C:US 1:1239 2:1800 3:1240 4:3830
204.96.170/23 NET-WORLDLINK C:US 1:1240 2:1800 3:1239
204.96.208/24 SPRINT-CC60D3 C:US 1:1800 2:1240 3:1239
204.96.209/24 SPRINT-CC60D3 C:US 1:1800 2:1240 3:1239
204.96.210/23 SPRINT-CC60D3 C:US 1:1800 2:1240 3:1239
204.97.75/24 NET-DAISY C:US 1:1239 2:1800 3:1240
204.97.208/22 NET-MHRCC C:US 1:1239 2:1800 3:1240
204.119.208/20 NET-UNDERGROUND C:US 1:1240 2:1800 3:1239
204.142.1/24 NETBLK-NJNG16 C:US 1:97
204.142.2/24 NETBLK-NJNG16 C:US 1:97
204.142.3/24 NETBLK-NJNG16 C:US 1:97
204.142.4/24 NETBLK-NJNG16 C:US 1:97
204.142.5/24 NETBLK-NJNG16 C:US 1:97
204.142.6/24 NETBLK-NJNG16 C:US 1:97
204.142.7/24 NETBLK-NJNG16 C:US 1:97
204.142.8/24 NETBLK-NJNG16 C:US 1:97
204.142.9/24 NETBLK-NJNG16 C:US 1:97
204.142.10/24 NETBLK-NJNG16 C:US 1:97
204.142.11/24 NETBLK-NJNG16 C:US 1:97
204.142.12/24 NETBLK-NJNG16 C:US 1:97
204.142.13/24 NETBLK-NJNG16 C:US 1:97
204.142.14/24 NETBLK-NJNG16 C:US 1:97
204.142.15/24 NETBLK-NJNG16 C:US 1:97
204.142.16/24 NETBLK-NJNG16 C:US 1:97
Deletions:
--192.57.69/24 NET-AMERITECHNAP C:US 1:2884
--199.172.120/24 HLINE-NET C:US 1:701(136) 2:701(134)
--199.172.121/24 HLINE-NET C:US 1:701(136) 2:701(134)
--199.172.122/24 HLINE-NET C:US 1:701(136) 2:701(134)
--199.172.123/24 HLINE-NET C:US 1:701(136) 2:701(134)
--199.172.124/24 HLINE-NET C:US 1:701(136) 2:701(134)
--199.172.125/24 HLINE-NET C:US 1:701(136) 2:701(134)
--199.172.126/24 HLINE-NET C:US 1:701(136) 2:701(134)
--199.172.127/24 HLINE-NET C:US 1:701(136) 2:701(134)
--199.222.80/21 DIALOG-NETS C:US 1:1321
Expanded listing, sorted by country, then by organization:
==========================================================
Australia
---------
Miff's Hack'n'Splat Workshop, 5/63 Price Avenue, Clapham, SA, 5062,
AUSTRALIA
1:372 Nasa Science Network (FIX-West)
2:297 Nasa Science Network (FIX-East)
-----------
202.0.75/24 NET-HACK-N-SPLAT1 (AU)
Sly & Weigall, Level 25, 385 Bourke Street, Melbourne, Victoria 3000,
AUSTRALIA
1:372 Nasa Science Network (FIX-West)
2:297 Nasa Science Network (FIX-East)
--------------
192.190.208/21 SW-NET (AU)
Canada
------
BCTEL Advanced Communications, Suite 2600-4720 Kingsway Ave., Burnaby,
British Columbia, V5H-4N2, CANADA
1:1331 ANS Seattle - DNSS 91
--------------
192.197.172/24 BCTEL.NET (CA)
Babillard Synapse Inc., 22 Beloeil, Gatineau, QC J8T 7G3, CANADA
1:2493(35) FONOROLA-EAST
2:2493(91) FONOROLA-EAST
------------
199.84.52/24 SYNAPSE2-DOM (CA)
199.84.53/24 SYNAPSE2-DOM (CA)
199.84.54/24 SYNAPSE2-DOM (CA)
Northern Computer Products, 2179 11th Ave., Vernon, British Columbia,
V1T-8V7, CANADA
1:1331 ANS Seattle - DNSS 91
--------------
199.175.103/24 NCP.BC.CA (CA)
World Wide Telephonic, 555 West Hastings Street, Vancouver, British
Columbia, V6B 4N5, CANADA
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
3:1239 SprintLink
------------
204.50.17/24 NET-WORLDTEL (CA)
204.50.18/24 NET-WORLDTEL (CA)
fONOROLA Inc., Suite 205, Ottawa, Ontario, K1P - 6M1, CANADA
1:2493(35) FONOROLA-EAST
2:2493(91) FONOROLA-EAST
-------------
198.53.180/24 NETBLK-FONOROLA (CA)
198.53.181/24 NETBLK-FONOROLA (CA)
198.53.182/24 NETBLK-FONOROLA (CA)
198.53.183/24 NETBLK-FONOROLA (CA)
Chile
-----
IDESSA, Santa Rosa 76 Santiago Chile, Santiago, CHILE
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
------------
200.9.178/24 NETBLK-RED-IDESSA (CL)
200.9.179/24 NETBLK-RED-IDESSA (CL)
IDESSA, Santa Rosa 76, Santiago, CHILE
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
------------
200.9.177/24 NETBLK-RED-IDESSA (CL)
UNIVERSIDAD MAYOR, Americo Vespucio Sur 357, Santiago, CHILE
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
-------------
200.12.130/24 UMAYOR (CL)
China
-----
ISTIC, State Science & Technology Commission, Computer Center, 15,
Fuxinglu, BEIJING, 100038, CHINA
1:3429 ESNET-AS-5
2:293(144) Energy Science Network (ESnet)
3:293(145) Energy Science Network (ESnet)
------------
168.160.0/24 SSTC-ISTIC (CN)
The Institute of Theoretical Physics, P. O. Box 2735, BEIJING, 100080,
CHINA
1:3429 ESNET-AS-5
2:293(144) Energy Science Network (ESnet)
3:293(145) Energy Science Network (ESnet)
-------------
202.38.138/24 ITP-CN (CN)
Finland
-------
Ammattikasvatushallinnon koulutuskeskus, P.O.Box 249, FI-33101 Tampere,
FINLAND
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
--------------
193.185.109/24 AKH-TRE-NET (FI)
Bitfield Oy, Ukonvaaja 2, FI-02130 Espoo, FINLAND
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
-------------
193.65.109/24 BITFIELD1 (FI)
DECUS Finland, Niittymaentie 7, FI-02200 ESPOO, FINLAND
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
-------------
193.65.108/24 DECUS1 (FI)
Infosto Oy, PO Box 66, FI-33211 Tampere, FINLAND
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
-------------
193.65.102/24 INFOSTO1 (FI)
193.65.103/24 INFOSTO2 (FI)
Relatech Ltd, Jyvaskyla, FINLAND
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
-------------
193.65.110/24 AMI1 (FI)
193.65.111/24 AMI2 (FI)
France
------
Capital Futures Management, 109-111 rue Victor Hugo, F-92532 Levallois
Perret, FRANCE
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
--------------
193.104.162/24 FR-CFM (FR)
INTEROP Europe, 10, rue Thierry Le Luron, F-92593- Levallois Perret CEDEX,
FRANCE
1:1800 ICM-Atlantic
2:1240 ICM-Pacific
3:1133 CERN/DANTE
4:1674 CERN/DANTE
-------------
193.202.75/24 INTEROP-EU2 (FR)
Internet and WWW Service Provider, 204 Bd Bineau, 92200 Neuilly, FRANCE
1:1800 ICM-Atlantic
2:1240 ICM-Pacific
3:1133 CERN/DANTE
4:1674 CERN/DANTE
-----------
194.98.1/24 IWAY-NET (FR)
194.98.2/24 IWAY-NET (FR)
194.98.3/24 IWAY-NET (FR)
194.98.4/24 IWAY-NET (FR)
194.98.5/24 IWAY-NET (FR)
194.98.6/24 IWAY-NET (FR)
194.98.7/24 IWAY-NET (FR)
Renault DDI, service 1602, 860 Quai de Stalingrad, F-92109 Boulogne CEDEX,
FRANCE
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
--------------
193.104.220/24 FR-RENAULT-DN (FR)
Societe Generale, 62 rue de la Chaussee d'Antin, F-75009 Paris, FRANCE
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
--------------
193.105.146/24 FR-SOC-GEN17 (FR)
Germany
-------
Connect GmbH, Industriestr. 15a, D-63517 Rodenbach, GERMANY
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
-------------
193.96.238/24 MUS (DE)
m+s elektronik GmbH, Abteilung, Nordring 55-57, 63843 Niedernberg, GERMANY
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
------------
193.101.4/24 MUS-DE (DE)
193.101.5/24 MUS-DE (DE)
Italy
-----
Diakron S.p.A., Viale Isonzo 25, I-20135 Milano, ITALY
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
--------------
192.106.148/24 DIAKRON-NET (IT)
Ente per le Nuove tecnologie, Energia e Ambiente, Via Vitaliano Brancati,
48, Roma, I-00144, ITALY
1:293(144) Energy Science Network (ESnet)
2:293(145) Energy Science Network (ESnet)
3:1133 CERN/DANTE
4:1674 CERN/DANTE
-------------
192.107.91/24 ENEA-DISP-IP1 (IT)
Japan
-----
Fukui Medical School, 23-3, Shimoaizuki, Matsuoka-cho, Yoshida-gun, Fukui,
910-11, JAPAN
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
--------------
202.251.180/22 FUKUIMEDINET1 (JP)
InfoCom Research, Inc., 1-12-31, Minamiaoyama, Minato-ku, Tokyo 107, JAPAN
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
-------------
202.240.70/24 INFONET2 (JP)
Musashino Women's College, 1-1-20, Shin-machi, Hoya-shi, Tokyo 202, JAPAN
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
--------------
202.240.108/24 MWCNET2 (JP)
202.240.109/24 MWCNET2 (JP)
Packet Radio User's Group, 4-5 Yamazakikaizukachou, Noda-shi, Chiba, 278,
JAPAN
1:701(136) Alternet
2:701(134) Alternet
----------
133.168/16 PRUGNET (JP)
Teikyo University, 2-11-21, Kaga, Itabashi-ku, Tokyo 173, JAPAN
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
--------------
202.253.108/22 THICNET (JP)
The American School in Japan, 1-1-1 Nomizu Chofu-shi, Tokyo 182, JAPAN
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
-------------
202.240.65/24 ASIJNET (JP)
Tokyo Metropolitan Institute of Technology, 6-6, Asahigaoka, Hino, Tokyo
191, JAPAN
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
---------
133.10/16 TMIT-NET (JP)
Kuwait
------
Gulfnet Kuwait, Al-Gas Tower, 11th Floor, Ahmed Al-Jaber Street, Sharq
Area, KUWAIT
1:701(136) Alternet
2:701(134) Alternet
--------------
199.173.153/24 NET-GULFNET (KW)
Mexico
------
Universidad de Quintana Roo, Boulevard Bahia con I. Comonfort, s/n. Col. de
l Bosque, Chetumal, Quintana Roo,
77019, MEXICO
1:278 Mexican Networks at NCAR
2:1328 ANS Houston - DNSS 67
--------------
192.100.164/24 UQROO (MX)
Russian Federation
------------------
JV Summit Systems, 10, 7/2, Vorotnikovsky per., Moscow, 103006,
RUSSIAN FEDERATION
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
3:1239 SprintLink
------------
194.67.22/24 SUMMIT (RU)
Slovenia
--------
SDK, Cankarjeva 18, Ljubljana 61000, SLOVENIA
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
----------
163.159/16 SI-SDK-NET (SI)
United Arab Emirates
--------------------
Emirates Center for Strategic Studies & Research, PO Box 4567, Abu Dhabi,
UNITED ARAB EMIRATES
1:701(136) Alternet
2:701(134) Alternet
3:1800 ICM-Atlantic
-------------
193.242.94/24 ECSSR (AE)
United States
-------------
ASQC, 611 E Wisconsin Ave. PO Box 3005, Milwaukee, WI 53201-3005, USA
1:2149 PSINET-2
2:174 NYSERNet Regional Network / PSI
------------
204.4.107/24 PSINET-C4 (US)
American Society of Appraisers, 535 Herndon Pkwy Suite 150, Herndon, VA
22070, USA
1:701(136) Alternet
2:701(134) Alternet
------------
199.172.5/24 APO (US)
Arena Electronics, Inc., 3079 Crossing Park, Norcross, GA 30071, USA
1:279 SURANET Regional Network (Georgia Tech)
2:86 SURANET Regional Network (College Park)
-------------
198.79.122/24(U) IAT-NORCROSS-GA-US-2 (US)
Ballston Metro Center, 901 North Stuart St, Arlington, VA 22203, USA
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
------------
199.75.51/24(U) SERDP-GOV (US)
Barnes & Noble, 120 5th Ave., New York, NY 10011, USA
1:2551(229) NETCOM
2:2551(136) NETCOM
3:1321 ANS San Francisco - DNSS 11
----------
161.221/16 BARNES-NOBLE (US)
Barry University, 11300 North East 2nd Avenue, Miami Shores, FL 33161-8889,
USA
1:279 SURANET Regional Network (Georgia Tech)
2:86 SURANET Regional Network (College Park)
-------------
199.78.240/21(U) BARRYNET-2 (US)
CMI International, Inc., 30333 Southfield Road, Southfield, MI 48076, USA
1:2149 PSINET-2
2:174 NYSERNet Regional Network / PSI
--------------
198.180.215/24 CMIINT-CMI (US)
CMI-Precision Mold, 51650 County Road 133, Bristol, IN 46507, USA
1:2149 PSINET-2
2:174 NYSERNet Regional Network / PSI
--------------
198.186.167/24 CMIPB-CMI (US)
CMI-Wabash Cast, Inc., 3837 West Mill Street Extended, Wabash, IN 46992,
USA
1:2149 PSINET-2
2:174 NYSERNet Regional Network / PSI
--------------
198.180.191/24 CMIWC-CMI (US)
Cisco Systems, FRANCE
1:1800 ICM-Atlantic
2:1240 ICM-Pacific
3:1133 CERN/DANTE
4:1674 CERN/DANTE
----------
144.254/16 CISCO-SHONET (US)
Cycledata Corporation, 6450 Lusk Blvd. Suite E104, San Diego, CA 92121, USA
1:701(136) Alternet
2:701(134) Alternet
--------------
199.172.144/24 CYCLEDATA-NETBLOCK (US)
199.172.145/24 CYCLEDATA-NETBLOCK (US)
199.172.146/24 CYCLEDATA-NETBLOCK (US)
199.172.147/24 CYCLEDATA-NETBLOCK (US)
199.172.148/24 CYCLEDATA-NETBLOCK (US)
199.172.149/24 CYCLEDATA-NETBLOCK (US)
199.172.150/24 CYCLEDATA-NETBLOCK (US)
199.172.151/24 CYCLEDATA-NETBLOCK (US)
199.172.152/24 CYCLEDATA-NETBLOCK (US)
199.172.153/24 CYCLEDATA-NETBLOCK (US)
199.172.154/24 CYCLEDATA-NETBLOCK (US)
199.172.155/24 CYCLEDATA-NETBLOCK (US)
199.172.156/24 CYCLEDATA-NETBLOCK (US)
199.172.157/24 CYCLEDATA-NETBLOCK (US)
199.172.158/24 CYCLEDATA-NETBLOCK (US)
199.172.159/24 CYCLEDATA-NETBLOCK (US)
Data Highway, Inc., 256 Broad Avenue, Palisades Park, NJ 07650, USA
1:701(136) Alternet
2:701(134) Alternet
------------
199.173.0/24 INTAC64 (US)
199.173.13/24 INTAC64 (US)
Dialog, 3460 Hillview Avenue, Palo Alto, CA 94304, USA
1:1321 ANS San Francisco - DNSS 11
-------------
199.222.80/24 DIALOG-NETS (US)
199.222.81/24 DIALOG-NETS (US)
199.222.82/24 DIALOG-NETS (US)
199.222.83/24 DIALOG-NETS (US)
199.222.84/24 DIALOG-NETS (US)
199.222.85/24 DIALOG-NETS (US)
199.222.86/24 DIALOG-NETS (US)
199.222.87/24 DIALOG-NETS (US)
Federal Energy Regulatory Commission, 941 North Capitol Street NE,
Washington, DC 20426, USA
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
------------
199.75.48/24(U) FERC-FED-US (US)
204.87.0/19 FERC (US)
Firewalls R US, PO Box 35567, Monte Sereno, CA 95030, USA
1:701(136) Alternet
2:701(134) Alternet
--------------
199.173.156/24 FRUS-NET (US)
Florida Supreme Court, Supreme Court Bldg., Tallahassee, FL 32399, USA
1:1800 ICM-Atlantic
2:1240 ICM-Pacific
-------------
199.242.69/24 NET-OSCA (US)
Hillsborough Community College, P.O. Box 5096, Tampa, FL 33675-5096, USA
1:1800 ICM-Atlantic
2:1240 ICM-Pacific
3:1239 SprintLink
-------------
204.96.208/24 SPRINT-CC60D3 (US)
Internet Exchange Ltd., 5 Commonwealth Road, Natick, MA 01760, USA
1:701(136) Alternet
2:701(134) Alternet
--------------
199.173.176/24 IXL-NET (US)
199.173.177/24 IXL-NET (US)
Kenneth W.Richardson,Inc., 8320 Stevens Road, Owings, MD 20736, USA
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
-------------
198.77.131/24(U) FITTEST-CAL-MD-US (US)
M.E.A.G. Systems Operations, 1470 Riveredge Pkwy. N.W., Atlanta, GA 30328,
USA
1:2149 PSINET-2
2:174 NYSERNet Regional Network / PSI
------------
204.4.232/24 PSINET-C4 (US)
204.4.233/24 PSINET-C5 (US)
204.4.234/24 PSINET-C6 (US)
204.4.235/24 PSINET-C7 (US)
204.4.236/24 PSINET-C8 (US)
204.4.237/24 PSINET-C9 (US)
204.4.238/24 PSINET-C10 (US)
204.4.239/24 PSINET-C11 (US)
204.4.240/24 PSINET-C12 (US)
204.4.241/24 PSINET-C13 (US)
204.4.242/24 PSINET-C14 (US)
204.4.243/24 PSINET-C15 (US)
204.4.244/24 PSINET-C16 (US)
204.4.245/24 PSINET-C17 (US)
204.4.246/24 PSINET-C18 (US)
204.4.247/24 PSINET-C19 (US)
M/A-COM Telecommunications, 3033 Science Park Road, San Diego, CA 92121,
USA
1:1740 CERFnet
------------
192.31.74/24 GSD-PCNET (US)
MCSNet, 1300 W. Belmont, Chicago, IL 60657, USA
1:1239 SprintLink
2:1800 ICM-Atlantic
3:1240 ICM-Pacific
4:3830 Net99
--------------
198.160.147/24 NETBLK-NETBLK-MCS-C (US)
199.3.10/23 NETBLK-NETBLK-MCS-C (US)
199.3.32/19 NETBLK-NETBLK-MCS-C (US)
199.3.160/19 NETBLK-NETBLK-MCS-C (US)
204.95.0/18 NETBLK-NETBLK-MCS-C (US)
NCTC, NAVTASC, BG 31. Redlind Drive, Cheltenham, MD 20397, USA
1:19 Milnet (FIX-East)
2:568 Milnet (FIX-West)
----------
138.184/16 CISCO-BLOCK30 (US)
NYNEX Mobile Communications, 2000 Corporate Drive, Orangeburg, NY 10862,
USA
1:1324(32) ANS New York City - DNSS 35
2:1324(35) ANS New York City - DNSS 35
--------------
198.224.128/24 NETBLK-CDPD-CBLK2 (US)
198.224.131/24 NETBLK-CDPD-CBLK2 (US)
New Jersey Natural Gas, 1415 Wyckoff Road, Wall, NJ 07719, USA
1:97 JvNCnet Regional Network
------------
204.142.1/24 NETBLK-NJNG16 (US)
204.142.2/24 NETBLK-NJNG16 (US)
204.142.3/24 NETBLK-NJNG16 (US)
204.142.4/24 NETBLK-NJNG16 (US)
204.142.5/24 NETBLK-NJNG16 (US)
204.142.6/24 NETBLK-NJNG16 (US)
204.142.7/24 NETBLK-NJNG16 (US)
204.142.8/24 NETBLK-NJNG16 (US)
204.142.9/24 NETBLK-NJNG16 (US)
204.142.10/24 NETBLK-NJNG16 (US)
204.142.11/24 NETBLK-NJNG16 (US)
204.142.12/24 NETBLK-NJNG16 (US)
204.142.13/24 NETBLK-NJNG16 (US)
204.142.14/24 NETBLK-NJNG16 (US)
204.142.15/24 NETBLK-NJNG16 (US)
204.142.16/24 NETBLK-NJNG16 (US)
OpenNet Technologies, Inc., 2262 Glenmoor Road North, Clearwater, FL 34624,
USA
1:1800 ICM-Atlantic
2:1240 ICM-Pacific
3:1239 SprintLink
-------------
204.96.209/24 SPRINT-CC60D3 (US)
Oracle Corporation, 3 Bethesda Metro Center, Suite 1400, Bethesda, MD
20814, USA
1:701(136) Alternet
2:701(134) Alternet
--------------
199.172.176/24 GORACLE-NET (US)
Organization of American States, 1889 F Street, Washington, DC 20006, USA
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
------------
199.75.96/21(U) OAS-ORG (US)
199.75.104/22(U) OAS-ORG (US)
PR NEWSWIRE, 806 HARBORSIDE FINACIAL CNTER., JERSEY CITY, NJ 07311, USA
1:1321 ANS San Francisco - DNSS 11
2:2386 INS-AS
--------------
198.184.184/24 PRNEWSWIRE (US)
PacketWorks, Inc., 1100 Cleveland Street, Clearwater, FL 34615, USA
1:1800 ICM-Atlantic
2:1240 ICM-Pacific
3:1239 SprintLink
-------------
204.96.210/23 SPRINT-CC60D3 (US)
Raytheon, Building 8-A, B-Site, Forrestall Campus, Princeton Univ.,
Princeton, NJ 08540, USA
1:97 JvNCnet Regional Network
--------------
198.139.123/24 NETBLK-RAYTHEON (US)
198.139.124/24 NETBLK-RAYTHEON (US)
Richland County Public Library, 1431 Assembly Street, Columbia, SC
29201-3101, USA
1:279 SURANET Regional Network (Georgia Tech)
2:86 SURANET Regional Network (College Park)
-------------
199.78.124/24(U) RICHLAND-LIB-SC-US (US)
ST. MARY MEDICAL CENTER, 401 WEST POPLAR, P.O. BOX 1477, WALLA WALLA, WA
99362, USA
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
3:1239 SprintLink
-------------
199.174.43/24 NET-SMMC (US)
199.174.44/24 NET-SMMC (US)
Santa Rosa County School Board, 603 Canal Street, Annex Building, Milton,
FL 32570, USA
1:279 SURANET Regional Network (Georgia Tech)
2:86 SURANET Regional Network (College Park)
-----------
199.77.0/19(U) SANTAROSA-K12-FL-US (US)
Sanyo ICON, 10489 Edinburgh Drive, Highland, UT 84003, USA
1:1321 ANS San Francisco - DNSS 11
2:2551(229) NETCOM
3:2551(136) NETCOM
------------
192.41.10/23 ICON-CNETS (US)
Sharon Public Library, 960 Turnpike Street, Canton, MA 02021, USA
1:2149 PSINET-2
2:174 NYSERNet Regional Network / PSI
--------------
199.254.144/24 SPL-1 (US)
Sikorsky Aircraft, 1201 South Avenue, Bridgeport, CT 06604, USA
1:1239 SprintLink
2:1800 ICM-Atlantic
3:1240 ICM-Pacific
------------
204.97.75/24 NET-DAISY (US)
Social Security Administration, 6401 Security Blvd., Rm 297, Baltimore, MD
21235, USA
1:701(136) Alternet
2:701(134) Alternet
--------------
199.173.224/24 NET-SSA-NET (US)
199.173.225/24 NET-SSA-NET (US)
199.173.226/24 NET-SSA-NET (US)
199.173.227/24 NET-SSA-NET (US)
199.173.228/24 NET-SSA-NET (US)
199.173.229/24 NET-SSA-NET (US)
199.173.230/24 NET-SSA-NET (US)
199.173.231/24 NET-SSA-NET (US)
Southern Regional Research Center, 1100 Robert E. Lee Blvd., New Orleans,
LA 70179, USA
1:279 SURANET Regional Network (Georgia Tech)
2:86 SURANET Regional Network (College Park)
-------------
199.78.118/23(U) SRRC-USDA-GOV (US)
TerraNet, Inc., 729 Boylston Street, Floor 5, Boston, MA 02116, USA
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
3:1239 SprintLink
--------------
199.103.136/23 NET-TERRANET (US)
TerraNet, Inc., 729 Boylston Street, Floor 5, Boston, MA 02116, USA
1:1239 SprintLink
2:1800 ICM-Atlantic
3:1240 ICM-Pacific
--------------
199.103.210/24 NET-TERRANET (US)
The Internet Mainstreet, 27466 Sunrise Road, Los Altos Hills, CA 94022, USA
1:3830 Net99
-------------
199.245.73/24 TIMS-NET (US)
204.69.218/24 TIMS-NET (US)
204.69.219/24 TIMS-NET (US)
204.69.220/24 TIMS-NET (US)
The Kevric Company Inc., 8401 Colesville Road, Silver Spring, MD 20910, USA
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
-------------
198.77.136/24(U) KEVRIC-MO-MD-US (US)
Total Support Computer Systems, 4421 North Church Avenue, Tampa, FL
33614-7015, USA
1:1800 ICM-Atlantic
2:1240 ICM-Pacific
3:1239 SprintLink
--------------
199.164.179/24 TSCS-CBLK (US)
199.164.181/24 TSCS-CBLK (US)
199.164.182/24 TSCS-CBLK (US)
Trellis, 225 Turnpike Road, Southboro, MA 01772, USA
1:560 NEARnet Regional Network
2:701(136) Alternet
3:701(134) Alternet
-------------
199.92.204/24 TRELLIS (US)
Turner Broadcasting System Inc., 1050 Techwood Drive, Atlanta, GA 30318,
USA
1:1322 ANS Los Angeles - DNSS 19
--------------
157.166.161/24 TECHWOOD (US)
157.166.204/24 TECHWOOD (US)
Underground Network, 6409 Primrose Avenue, Suite #1, Los Angeles, CA 90068,
USA
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
3:1239 SprintLink
--------------
204.119.208/20 NET-UNDERGROUND (US)
United States Telephone Association, 1401 H Street #600, Washington, DC
20005, USA
1:86 SURANET Regional Network (College Park)
2:279 SURANET Regional Network (Georgia Tech)
------------
199.75.50/24(U) USTA-ORG (US)
United Wisconsin Services, Inc, 401 W. Michigan St., Milwaukee, WI 53203,
USA
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
3:1239 SprintLink
-----------
204.77.0/20 NET-UWSI (US)
University of Colorado Hospital Authority, 4200 East Ninth Avenue, Denver,
CO 80262, USA
1:209 Westnet Regional Network (Colorado Attachment) - ENSS 141
2:210 Westnet Regional Network (Utah Attachment) - ENSS 142
----------
168.200/16 UHCOLORADO (US)
Utica College of Syracuse U., 1600 Burrstone Road, Utica, NY 13502, USA
1:1239 SprintLink
2:1800 ICM-Atlantic
3:1240 ICM-Pacific
-------------
204.97.208/22 NET-MHRCC (US)
Vermont Law School, P.O. Box 96 Chelsea Street, South Royalton, VT 05068,
USA
1:560 NEARnet Regional Network
2:701(136) Alternet
3:701(134) Alternet
-------------
199.94.144/24 VTLAW-1 (US)
199.94.145/24 VTLAW-2 (US)
Vocal Technologies Limited, 1576 Sweet Hume Road, Buffalo, NY 14228, USA
1:701(136) Alternet
2:701(134) Alternet
--------------
199.173.157/24 VOCAL-NET (US)
Worldlink Communication Services, 10405 Scottsdale Road, Suite 4,
Scottsdale, AZ 85253, USA
1:1240 ICM-Pacific
2:1800 ICM-Atlantic
3:1239 SprintLink
-------------
204.96.170/23 NET-WORLDLINK (US)
Yournet, 18 Langford, Ward, AR 72176, USA
1:701(136) Alternet
2:701(134) Alternet
--------------
199.172.165/24 YOURNET-NET (US)
==========================================================
The following Midlevel/Regional peering sessions have also been added:
AS 1240 - ICM-Pacific (US) - ENSS 144
Peer: 192.203.230.16 - Sprint, Government Systems Division, 13221 Woodland
Park Road, Herndon, VA 22071, USA - FIX-W.ICM.NET
AS 2886 - RS-TEST-AS - ENSS 218
Peer: 198.32.0.10 - Merit Network, Inc., 1071 Beal Ave, Ann Arbor, MI
48109, USA - merlin.avalon.rs.net
==========================================================
AS690 CIDR Squeezings Report: 8929 Nets, 99 ASs, 1859 Aggregates
-----------------------------------------------------------------
8929 (85%) of the ever-announced more-specific routes within aggregates have
been withdrawn. 180 of those were withdrawn within the last week.
98 the week before that.
129 the week before that.
99 ASs have registered aggregates in the PRDB.
94 of those are announcing aggregates.
76 have withdrawn at least one more specific route.
1859 Aggregates are configured.
1555 of these were Top-Level Aggregates (not nested in another aggregate).
1226 of these are being announced to AS690.
966 of those have at least one subnet configured (the other 260 may be saving
the Internet from future subnet announcements).
880 have stopped announcing at least one configured more specific route.
863 have stopped announcing half of their configured more specific routes.
805 have stopped announcing most (80%) of their more specific routes.
See merit.edu:pub/nsfnet/cidr/cidr_savings for more detail.
-----------------------------------------------------------
==========================================================
The configuration reports which reflect today's update will be
available for anonymous ftp on nic.merit.edu by 08:00 EDT:
configuration reports --
nic.merit.edu:nsfnet/announced.networks:
as-as.now as-gw.now ans_core.now country.now net-comp.now
nets.doc nets.non-classful nets.tag.now nets.unl.now
NSS routing software configuration files --
nic.merit.edu:nsfnet/backbone.configuration:
gated.nss<NSS number>.t3p
Information is also available through the PRDB whois server. Type
"whois -h prdb.merit.edu help" for details.
--------------------------------------------------------------------------
REPORT CHANGES: (Updated May 31, 1994)
Anyone considering configuring an aggregate into the PRDB (and you all
should be!) is encouraged to pre-check that aggregate by typing the command:
whois -h prdb.merit.edu 'aggchk <agg>'
(where "<agg>" is the aggregate description). This command will list
all of the other entries in the PRDB that are more specific routes of <agg>,
as well as any aggregates already configured that contain <agg>. The
output includes the AUP and announcement lists of each of the nets printed,
with discrepancies flagged. This is the same program that we use for
sanity-checking the NACRs that you submit.
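For a purely local sanity check before submitting, something like the
following sketch approximates part of what aggchk reports: which of a set
of configured prefixes are more specifics of a candidate aggregate, and
which configured aggregates contain it. It uses Python's ipaddress module
(a modern stand-in, not the PRDB tooling) and example prefixes taken from
the listing above.

# Hedged local stand-in for the "aggchk" pre-check (no PRDB access).
import ipaddress

configured = [
    "199.222.80.0/24", "199.222.81.0/24", "199.222.82.0/24",
    "199.222.83.0/24", "199.3.32.0/19", "204.95.0.0/18",
]

def aggchk(candidate, prefixes):
    agg = ipaddress.ip_network(candidate)
    nets = [ipaddress.ip_network(p) for p in prefixes]
    more_specific = [n for n in nets if n != agg and n.subnet_of(agg)]
    containing = [n for n in nets if n != agg and agg.subnet_of(n)]
    return more_specific, containing

specifics, parents = aggchk("199.222.80.0/21", configured)
print("more specifics:", [str(n) for n in specifics])
print("containing aggregates:", [str(n) for n in parents])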
The archived discussion list "db-disc(a)merit.edu" exists for discussion of
PRDB issues. Send a message to "db-disc-request(a)merit.edu" to subscribe.
--Dale Johnson (dsj(a)merit.edu)
--------------------------------------------------------------------------
Please send all requests for configuration changes to nsfnet-admin(a)merit.edu
using the NSFNET configuration forms. The forms are available on-line
from the nic.merit.edu machine. Use ftp and the anonymous login to get on the
machine. Do a "cd nsfnet/announced.networks" and get the files template.net,
template.net.README, template.gate, and template.as.
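If you prefer to script the retrieval, here is a minimal sketch using
anonymous FTP (Python's ftplib; host and paths exactly as above, and no
guarantee the host is still reachable).

# Hedged sketch: fetch the NACR templates by anonymous FTP as described above.
from ftplib import FTP

files = ["template.net", "template.net.README", "template.gate", "template.as"]

ftp = FTP("nic.merit.edu")
ftp.login()                               # anonymous login
ftp.cwd("nsfnet/announced.networks")
for name in files:
    with open(name, "wb") as out:
        ftp.retrbinary("RETR " + name, out.write)
ftp.quit()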
*** Note: As of March 1, 1994, NSFNET AUP NACRs must use the template.net
*** (NACR) version 7.1, or the NACR will be returned unprocessed.
*******************************
--Steve Widmayer Merit/NSFNET skw(a)merit.edu
--Enke Chen Merit/NSFNET enke(a)merit.edu
--Steven J. Richardson Merit/NSFNET sjr(a)merit.edu
Today's update is done.
--Steve W.