MAE West congested again?
It looks to me as if MAE West may be congesting again. While things look fine at the moment, earlier I was seeing congestion when transiting between CERFNET (134.24.9.115) and ESNET (198.32.136.41).

My understanding of the topology of MAE West is that there is a Gigaswitch at NASA Ames and a Gigaswitch at MFS (in San Jose), connected by a pair of OC-3 ATM circuits. My belief is that CERFNET and ESNET have ports on different Gigaswitches. MFS has changed the web page listing MAE West connections such that I can no longer easily tell who is on which switch.

Looking back at stats for the OC-3 link prior to the addition of the second circuit, I note the circuit maxes out around 70 Mbits/sec. In discussing this around here, we concluded that data is probably clocked out of the Gigaswitch at 100 Mbits/sec and that ATM overhead accounts for the remaining loss of available bandwidth. Can anyone confirm this for me?

The graph for the OC-3 pair from yesterday indicates that it's time to add more bandwidth between the switches. I'm really interested in knowing what plans MFS has up its sleeve for alleviating congestion this time around. Provisioning an OC-3 and only burning half the available bandwidth doesn't strike me as a scalable solution.

mb
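A rough back-of-the-envelope check of that ATM-overhead argument, in Python. The LLC/SNAP-plus-AAL5 encapsulation and the packet-size mix below are assumptions on my part, not measurements from the exchange point:

import math

LLC_SNAP = 8       # RFC 1483 LLC/SNAP header per packet (bytes)
AAL5_TRAILER = 8   # AAL5 trailer per packet (bytes)
CELL_PAYLOAD = 48  # payload bytes in each 53-byte ATM cell
CELL_SIZE = 53

def cells_for(ip_bytes):
    # ATM cells needed to carry one IP packet over AAL5
    return math.ceil((ip_bytes + LLC_SNAP + AAL5_TRAILER) / CELL_PAYLOAD)

def efficiency(mix):
    # IP payload bytes divided by bytes on the ATM link, for a packet mix
    # given as {ip_packet_size_in_bytes: fraction_of_packets}
    payload = sum(size * frac for size, frac in mix.items())
    wire = sum(cells_for(size) * CELL_SIZE * frac for size, frac in mix.items())
    return payload / wire

# Guessed traffic mix: half 40-byte ACKs, the rest 552- and 1500-byte packets
mix = {40: 0.5, 552: 0.3, 1500: 0.2}
eff = efficiency(mix)

clocked_out = 100.0  # Mbits/sec the Gigaswitch ATM card is thought to emit
print(f"AAL5/ATM efficiency for this mix: {eff:.0%}")
print(f"IP payload carried in {clocked_out:.0f} Mbits/sec of cells: "
      f"{clocked_out * eff:.0f} Mbits/sec")

With this guessed mix the efficiency works out to roughly 80-85%, so 100 Mbits/sec of cells carries on the order of 80 Mbits/sec of packets; a mix heavier in small packets, or per-packet limits in the switch's ATM line card, would be needed to pull the ceiling all the way down to the observed 70 Mbits/sec.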
Mark Boolootian writes:
> It looks to me as if MAE West may be congesting again. While things look fine at the moment, earlier I was seeing congestion when transiting between CERFNET (134.24.9.115) and ESNET (198.32.136.41).
>
> My understanding of the topology of MAE West is that there is a Gigaswitch at NASA Ames and a Gigaswitch at MFS (in San Jose), connected by a pair of OC-3 ATM circuits. My belief is that CERFNET and ESNET have ports on different Gigaswitches. MFS has changed the web page listing MAE West connections such that I can no longer easily tell who is on which switch.
>
> Looking back at stats for the OC-3 link prior to the addition of the second circuit, I note the circuit maxes out around 70 Mbits/sec. In discussing this around here, we concluded that data is probably clocked out of the Gigaswitch at 100 Mbits/sec and that ATM overhead accounts for the remaining loss of available bandwidth. Can anyone confirm this for me?
>
> The graph for the OC-3 pair from yesterday indicates that it's time to add more bandwidth between the switches. I'm really interested in knowing what plans MFS has up its sleeve for alleviating congestion this time around. Provisioning an OC-3 and only burning half the available bandwidth doesn't strike me as a scalable solution.
>
> mb
At http://ext2.mfsdatanet.com/MAE/west.map.html you will find a connection map dated Sept 18.

I should add that CERFnet has TWO routers on Gigaswitches: one on each side of the MAE (NASA and MFS-San Jose). ESNET seems to have a router on the MFS side. The CERFnet router you mention is on the MFS-San Jose side as well. The IP address (134.24.9.115) you refer to is the SMDS interface to the CIX on that router. The MAE-FDDI side address of that router is 198.32.136.55.

What you report seems to suggest congestion in the switch on the MFS side... If you can share more information about the end-to-end path you were testing, I could shed more light on it.

--pushpendra

Pushpendra Mohta    pushp@cerf.net    +1 619 455 3908
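For what it's worth, one way to capture that end-to-end path information is a traceroute from each end, checking for hops on the MAE West FDDI (the 198.32.136.x addresses above). A minimal sketch, assuming a Unix host with the standard traceroute utility on the PATH:

import re
import subprocess
import sys

MAE_WEST_FDDI = "198.32.136."  # MAE West FDDI LAN prefix

def mae_west_hops(destination):
    # Run traceroute numerically and pull out any hops on the MAE West FDDI
    out = subprocess.run(["traceroute", "-n", destination],
                         capture_output=True, text=True).stdout
    return [hop for hop in re.findall(r"\d+\.\d+\.\d+\.\d+", out)
            if hop.startswith(MAE_WEST_FDDI)]

if __name__ == "__main__":
    # Defaults to the ESNET port mentioned above; pass another target as argv[1]
    dest = sys.argv[1] if len(sys.argv) > 1 else "198.32.136.41"
    hops = mae_west_hops(dest)
    print(f"MAE West FDDI hops toward {dest}: {hops if hops else 'none seen'}")

Comparing whichever FDDI hop shows up against the connection map would at least identify which router the traffic hands off to at the exchange.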
Pushpendra,
> At http://ext2.mfsdatanet.com/MAE/west.map.html you will find a connection map dated Sept 18.
This was the map I looked at. What's changed is that it no longer differentiates connections to the NASA Ames and San Jose switches, which I'm pretty sure it used to. I think the map's first column used to read something like "MFS-Giga" and "Ames-Giga", which made it immediately obvious which switch someone was connected to. It isn't so obvious any longer...
> If you can share more information about the end-to-end path you were testing, I could shed more light on it.
Steve Feldman's message explained the source of the congestion, and my question was really motivated by my (apparently erroneous) belief that the pair of OC-3 circuits was congesting. I do gather from Steve's message that data is clocked out of the Gigaswitch ATM interface at 100 Mbits/sec. Thanks for the response, Pushpendra.

best regards,
mb