Come on, Vadim, this is a great idea. They should build this cheaper, faster, better network to be used exclusively for meritorious traffic, because it's in the national interests of the USA.

Moreover, the members of the Internet II consortium (hi, University MIS types and management!) and any governmental funding body involved should mandate that any meritorious traffic use IPv6 *only* and set an end-to-end IPv6 option to indicate which traffic should go across Internet II and which should go across whatever IPv6 Internet exists as provided by Evil Commercial Interests, at the ridiculously high prices which are being charged universities and research labs.

Moreover, they should also mandate that Internet II be built on ATM, use ABR, and also use RSVP to guarantee the kinds of qualities of service that the R&E community requires. This would be a boon to network researchers throughout the USA.

You can bet that I (not a U.S. national, and an Evil Greedy Commercial Bastard) will also be singing the praises of the folks from PSU, Stanford and Chicago, of their EDUCOM ally Mike Roberts, and of their representative George Strawn, the NSF's chief proponent of Internet II-like initiatives, because it's a fantastic idea, and the implication that it is either pork or a bad or unworkable idea is one that is beneath you, Vadim, even if the proposers leave the other requirements out of the final solicitation, award, or both.

I mean, you can't possibly believe that the industry will solve, or even wants to solve, the issues most of the folks at university campuses have been complaining about, just as you shouldn't believe that it can't use a push to deploy ATM, ABR, RSVP and IPv6. A group of some thirty-four or so high-power customers is exactly the sort of push that will finally just get this stuff done.

I can't wait until other countries jump on the bandwagon.
Maybe the EU will resurrect its similar ideas, or the G7 will keep going with their talks, or maybe this could end up right in the lap of the U.N. That would be way cool.

Sean.
On Tue, 8 Oct 1996, Sean Doran wrote:
Come on, Vadim, this is a great idea. They should build this cheaper, faster, better network to be used exclusively for meritorious traffic, because it's in the national interests of the USA.
Moreover, the members of the Internet II consortium (hi, University MIS types and management!) and any governmental funding body involved should mandate that any meritorious traffic use IPv6 *only* and set an end-to-end IPv6 option to indicate which traffic should go across Internet II and which should go across whatever IPv6 Internet exists as provided by Evil Commercial Interests, at the ridiculously high prices which are being charged universities and research labs.
Moreover, they should also mandate that Internet II be built on ATM, use ABR, and also use RSVP to guarantee the kinds of qualities of service that the R&E community requires.
Sean, isn't this excessive cruelty to those who have to deal with this in the end?

This sort of proposal, i.e. building a Higher Ed private network for research, is in and of itself not such a bad thing. The growth of the Internet since the NSFNet shut down has put serious strains on the infrastructure that researchy folks used to use to do (and still do) their various work on. Given that the exponential growth of the net is projected to continue, it's not completely baseless to think that the problems we've seen over the last 12 months or so will continue. So if you follow that train of thought, building a private net for "important/meritorious" traffic makes some amount of sense.

Now, it must be pointed out that a large part of the problem is in the badly overloaded access pipes many of these universities have to various ISPs, which places a fair amount of culpability on the universities themselves. It should also be pointed out that while the basic idea might have some merit, it's highly debatable whether this private network will be worth the investment once this idea goes through the normal academic politics (way too many cooks), ATM-mania, bureaucracy, delays, the normal academic shoe-string budget, etc.

Hey, at the very least, a shoe-string-budget network strung together with bubble gum and built-in cumbersome bureaucratic rules and progress decelerators should make it a very interesting thing in a researchy academic sort of a way.

-dorian, speaking strictly for himself.
"Dorian R. Kim" writes:
This sort of proposal, i.e. building a Higher Ed private network for research, is in and of itself not such a bad thing.
The growth of the Internet since the NSFNet shut down has put serious strains on the infrastructure that researchy folks used to use to do (and still do) their various work on.
You know, maybe I'm crazy, but I rarely see the troubles that people mention so often.

When I'm going between my site and another site on the net, if both ends are unloaded, I typically get bandwidth equal to the smaller of the two pipes into the net. It's very rare that I don't get transfer times near the maximum expected, even when one of the pipes is attached to a mediocre provider. (Really bad providers are another story, but luckily I can usually convince my clients not to use them.) Of course, if I'm communicating with a site that seven thousand other people are talking to I'll get low performance, but between my clients on new unloaded connections I rarely see any trouble. Every once in a while someone's one router to somewhere critical melts down and suddenly there is a massive bandwidth shortage over some provider's backbone, but usually (call it at least 23 hours a day) everything is just fine.

Now, I'm not satisfied with the overall reliability -- one hour a day of not being able to get to an arbitrarily selected site on the net doesn't seem good at all, especially compared with the phone system, and especially if that one hour happens to be the hour that the new economic statistics were just posted to the Commerce Department's FTP site and people end up staying late waiting for the thing to come on line so they can start the overnight batch calculations. However, overall, things are pretty good -- I don't see these massive shortages of bandwidth that people are talking about.

Seems to me that if the university researchers are sick of competing with the undergrads, either the university could get a fatter pipe, or they could priority-queue the traffic from the researchers, and either way they would probably win. Even with all the well-publicized growing pains at the providers, I think the trouble is most likely at the end points, and not in the providers.

Am I crazy? Are other people seeing massive bandwidth shortages that I just haven't noticed?
(There are some of these occasionally for a week or two on some provider, but they rarely seem to last long.)

Perry
There is a "perception" of lack of bandwidth. MAE-EAST is running at 30% of the capacity of a 100Mb switched FDDI. Figure there are about 30 providers, each with a 45Mb pipe -- a bandwidth shortage does not add up. There are many other metrics that affect performance besides bandwidth. How does Internet II solve the other sources of performance problems?

---
Stephen Balbach, VP, ClarkNet -- "Driving the Internet To Work" -- info@clark.net
Due to the high volume of mail I receive, please quote the full original message in your reply.
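For what it's worth, the arithmetic behind that post can be laid out explicitly. This is just a sketch using the figures quoted there (30 providers, DS3 access pipes, a 100Mb switched FDDI fabric at 30% load); it says nothing about per-port I/O limits or the other metrics mentioned.

```python
# Figures taken from the post above; nothing here is measured.
PROVIDERS = 30
DS3_MBPS = 45.0      # each provider's access pipe
FABRIC_MBPS = 100.0  # switched FDDI exchange fabric
UTILIZATION = 0.30   # reported load on the fabric

aggregate_access = PROVIDERS * DS3_MBPS    # capacity feeding the exchange
observed_load = FABRIC_MBPS * UTILIZATION  # traffic actually crossing it

print(aggregate_access)  # 1350.0 Mbps of access capacity
print(observed_load)     # 30.0 Mbps observed on the switch
```

The gap between the two numbers is the crux of the thread: whether the fabric is lightly loaded because demand is low, or because traffic has moved to private interconnects, as later posts argue.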
There is a "perception" of lack of bandwidth. MAE-EAST is running at 30% of the capacity of a 100Mb switched FDDI. Figure there are about 30 providers, each with a 45Mb pipe -- a bandwidth shortage does not add up. There are many other metrics that affect performance besides bandwidth. How does Internet II solve the other sources of performance problems?

---
Stephen Balbach, VP, ClarkNet -- "Driving the Internet To Work" -- info@clark.net
Due to the high volume of mail I receive, please quote the full original message in your reply.
The bandwidth issue IS a problem. If you look closely at the stats you can see that traffic curves tend to flatten out far below the theoretical limit. A number of the factors that cause this are related to I/O limits at major peers at the meet points.

--
Joseph T. Klein, NAP.NET, LLC -- phone +1 414 747-8747 -- http://www.nap.net
"Keep Cool, but Don't Freeze" - Hellman's Mayonnaise
I suspect that they will be going completely switched and have very few total points on the whole network (depending on what configuration they go to). A network of OC12s or OC48s in a redundant star will have significant performance benefits because:

1) no routing, or very symmetric routing;
2) very low latency -- <8ms coast to coast, I'd suspect;
3) priority queues, quality of service, reserved bandwidth, etc.

They could also use the BFR from Cisco whenever that comes out, which is supposed to do OC48 speeds :)

I haven't read the Inet-2 information, so I don't know what kind of trunk lines they are using, but I am pretty sure they want interconnect speeds far faster than even multiple DS3 connects to each other, and they will tolerate whatever Internet-1 performance they can get.

I am not even touching the MAE-East at 30% fantasy. All I know is that the UUNet <--> Sprint OC3 private connect at Tysons Corner is at better than 24Mbits average and mostly limited to router CPU problems.

-Deepak.

On Wed, 9 Oct 1996, Stephen Balbach wrote:
There is a "perception" of lack of bandwidth. MAE-EAST is running at 30% of the capacity of a 100Mb switched FDDI. Figure there are about 30 providers, each with a 45Mb pipe -- a bandwidth shortage does not add up. There are many other metrics that affect performance besides bandwidth. How does Internet II solve the other sources of performance problems?

---
Stephen Balbach, VP, ClarkNet -- "Driving the Internet To Work" -- info@clark.net
Due to the high volume of mail I receive, please quote the full original message in your reply.
On Wed, 9 Oct 1996, Deepak Jain wrote:
I am not even touching the MAE-East at 30% fantasy. All I know is that the UUNet <--> Sprint OC3 private connect at Tysons Corner is at better than 24Mbits average and mostly limited to router CPU problems.
Isn't this the reason that MAE-East is at 30%, i.e. there are now many private interconnects between tier-1 NSPs to offload traffic from major exchanges like MAE-East? Not to mention that MAE-East is no longer the only major interconnect, a fact that seems to be taking some time to work its way into net.mythology.

Michael Dillon                 - ISP & Internet Consulting
Memra Software Inc.            - Fax: +1-604-546-3049
http://www.memra.com           - E-mail: michael@memra.com
On Wed, 9 Oct 1996, Deepak Jain wrote: Deepak,
...
I am not even touching the Mae-East at 30% fantasy. All I know is that the UUNet <--> Sprint OC3 private connect at Tysons Corner is at better than ^^^ 24Mbits average and mostly limited to router CPU problems.
It's a DS3, at 29 Mbps (as of a few seconds ago). FWIW, our DS3s to MAE-E are at about the same load. Clearly, private peerings have diverted a lot of load away from MAE-E and other public exchange points.

Jim
On Wed, 9 Oct 1996, Perry E. Metzger wrote:
You know, maybe I'm crazy but I rarely see the troubles that people mention so often.
Am I crazy? Are other people seeing massive bandwidth shortages that I just haven't noticed? (There are some of these occasionally for a week or two on some provider, but they rarely seem to last long.)
That's Perry's view from New York City; now my view from Vernon, British Columbia, Canada. This is a town of 40,000 population in the mountains, 6 hours' drive from Vancouver, BC and 7 hours' drive from Calgary, Alberta, which are the nearest two major cities.

My ISP has a 10Mbps fibre ATM circuit to BCNet in Vancouver that was installed in April 1995. From there it connects to CA*Net, which has a T3 into MCI Seattle as well as links to the East where more T3's head south to MCI. The T3's were T1's until Sept 1995, at which point there was a small improvement in speed at times.

But I have to agree 100% with Perry. Speeds are great most of the time. Congestion is generally something caused "out there", often at WWW server sites. In my case the ISP has ample bandwidth to their provider, and the next two upstream links in the chain do a good job. IMHO this is the reason I have such good service.

That's why I think most problems that people complain about are due to the ISP's network. In the case of a university, they themselves are the problem. This assumes a model of NSP connected to some sort of regional or large ISP connected to your provider. The NSP's are good, the regionals and large ISP's are usually good, and the provider is usually the source of the problems with bandwidth and congestion. But these problems *ARE* fixable.

Michael Dillon                 - ISP & Internet Consulting
Memra Software Inc.            - Fax: +1-604-546-3049
http://www.memra.com           - E-mail: michael@memra.com
On Wed, 9 Oct 1996 12:13:03 -0700 (PDT) Michael Dillon wrote:
My ISP has a 10Mbps fibre ATM circuit to BCNet in Vancouver that was installed in April 1995. From there it connects to CA*Net, which has a T3 into MCI Seattle as well as links to the East where more T3's head south to MCI. The T3's were T1's until Sept 1995, at which point there was a small improvement in speed at times.
So a 10Mbps fiber ATM circuit translates, after the cell tax and packet shredding, to 6Mbps on a real link. Is this not slow enough to be a bottleneck?

regards,
fletcher
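The "cell tax and packet shredding" figure can be sketched roughly. This back-of-the-envelope assumes RFC 1483 LLC/SNAP encapsulation over AAL5 (my assumption, not stated in the thread); a realistic mix of packet sizes lands between the two extremes printed below, which is how a 10 Mbps circuit can behave like roughly 6 Mbps.

```python
# Generic ATM framing constants; the 10 Mbps figure is the BCNet
# circuit mentioned above, everything else is standard ATM/AAL5.
import math

CELL_BYTES = 53      # ATM cell size on the wire
PAYLOAD_BYTES = 48   # payload per cell (the other 5 bytes are the "cell tax")
LLC_SNAP = 8         # RFC 1483 LLC/SNAP header per packet (assumed)
AAL5_TRAILER = 8     # AAL5 trailer per packet, plus padding to a cell boundary

def atm_goodput(link_mbps, packet_bytes):
    """IP goodput after cell headers, encapsulation, and last-cell padding."""
    cells = math.ceil((packet_bytes + LLC_SNAP + AAL5_TRAILER) / PAYLOAD_BYTES)
    wire_bytes = cells * CELL_BYTES
    return link_mbps * packet_bytes / wire_bytes

# Large packets lose little more than the ~10% cell tax:
print(round(atm_goodput(10.0, 1500), 2))  # 8.84 Mbps
# Small packets (40-byte TCP ACKs) occupy two cells and fare far worse:
print(round(atm_goodput(10.0, 40), 2))    # 3.77 Mbps
```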
On Wed, 9 Oct 1996, Fletcher E Kittredge wrote:
On Wed, 9 Oct 1996 12:13:03 -0700 (PDT) Michael Dillon wrote:
My ISP has a 10Mbps fibre ATM circuit to BCNet in Vancouver that was installed in April 1995. From there it connects to CA*Net, which has a T3 into MCI Seattle as well as links to the East where more T3's head south to MCI. The T3's were T1's until Sept 1995, at which point there was a small improvement in speed at times.
So a 10Mbps fiber ATM circuit translates, after the cell tax and packet shredding, to 6Mbps on a real link. Is this not slow enough to be a bottleneck?
It might be for video conferencing, but not for ftp, http, smtp, nntp and the various other normal protocols. But then, I don't do video conferencing on the net. The closest I get to that is running some RealAudio stuff occasionally, and it works just fine.

Michael Dillon                 - ISP & Internet Consulting
Memra Software Inc.            - Fax: +1-604-546-3049
http://www.memra.com           - E-mail: michael@memra.com
participants (10)
- David R. Conrad
- Deepak Jain
- Dorian R. Kim
- Fletcher E Kittredge
- Jim J. Steinhardt
- Joseph T. Klein
- Michael Dillon
- Perry E. Metzger
- Sean Doran
- Stephen Balbach