Re: SONET Interconnect (was RE: MCI)
At 12:49 AM 3/29/96 -0500, Shikhar Bajaj wrote:
Doesn't the fact that they recover the investment mean that enough people wanted the product and were willing to pay for it?
Not that I want to beat a dead horse here, but please explain to me how, if you consistently manage to achieve only 70%-80% effective throughput on available bandwidth, you can recoup your investment from subscribers who are expecting more? Somehow, this sounds quirky to me. ;-) - paul
Gentlemen, I was reading this thread with interest and perhaps I have missed something...... but here is what comes to mind.

As pointed out by Zhang et al. in the late 80's, real-time applications over a store-and-forward datagram service are very problematic, because IP is a best-effort service and there are no bandwidth/delay guarantees in the IP paradigm. ST-II built another unreliable service (with a smaller IPv5 header) and created an oddball ST/SCMP paradigm with no tool set, one that remains unreliable without additional transport services. Hence, the goal of real-time QoS for datagrams over a routed topology was not reached.

RSVP, building on lessons learned from ST-II and the added IP feature of interdomain multicasting, has a better chance to survive and to become accepted on a wider scale because it is built on IPv4 with existing IPv4 tools and toolsets. But nevertheless, RSVP cannot guarantee packet delivery and completely reliable real-time service. Resource reservation does *not* guarantee that a time-critical packet arrives from source to sink; that still requires a transport protocol (like TCP or some other flow control). Therefore, for applications where packet loss is acceptable, RSVP works; but in a real-time distributed internetwork where data MUST be delivered within a specific QoS, RSVP holds little real promise. On the other hand, for many applications of datagram voice and video, packet losses are acceptable, so RSVP/IP is acceptable as well.

ATM, by contrast (based on the original ATM charter), was envisioned to offer a guaranteed QoS. As pointed out by numerous people in this thread and before, IP/ATM is not efficient; it is for this reason and others that many IP protagonists love to bash ATM, and with good reason from an IP perspective. But with all the crossfire of the ATM vs. IP debate raging, the fact remains: there is no current architecture to *guarantee* close to zero packet loss in the IP world.
Do we need a highly reliable real-time protocol in the world? The answer, of course, is YES. Will a datagram service ever provide the .99999++++ delivery service required by distributed, network-based real-time systems? Maybe. But IP-based solutions alone will more than likely not solve the problem.

Finally, as pointed out, TCP/IP over ATM is not efficient. On the other hand, transmitting packets, losing them in the network and retransmitting them, or just dropping packets from the stream, is not the most efficient method of delivery either. A highly efficient, guaranteed packet delivery service with close to zero packet loss is an oxymoron: it does not exist, and it will not exist with RSVP (RSVP will help, but not to the .99+++ required by real-time network services). On the other hand, it is possible that a transport technology that can deliver a guaranteed QoS under IP will work, perhaps inefficiently, but it can deliver real-time data.

I am not necessarily a protagonist for ATM; my personal opinion is that ATM is too complex and tries to be all things to everyone, and that rarely if ever works. But there are real-time applications where unreliable datagrams with heavy transport protocol overheads similar to TCP/IP are undesirable as well; that paradigm is not so efficient either.

Best Regards, Tim

postscript: The reader is invited to read and comment on a draft paper http://www.silkroad.com/working/stii-rsvp.ps related to this subject (but not addressing the ATM issue).

+------------------------------------------------------------------------+
| Tim Bass                           |                                   |
| Principal Network Systems Engineer | "... the fates of men are bonded  |
| The Silk Road Group, Ltd.          | one to the other by the cement    |
|                                    | of wisdom."                       |
| http://www.silkroad.com/           |            Milan Kundera          |
|                                    |                                   |
+------------------------------------------------------------------------+
On Fri, 29 Mar 1996 10:04:56 -0500 (EST) Tim Bass wrote:
Do we need a highly reliable real-time protocol in the world? The answer, of course, is YES. Will a datagram service ever provide the .99999++++ delivery service required by distributed, network-based real-time systems? Maybe. But IP-based solutions alone will more than likely not solve the problem.
I see no "of course." Are there some applications which require this level of service in a WAN? Probably. Are there many? Probably not. Is end-to-end reliability and performance more important for the *vast* majority of applications? Yes!

I worked on IP over ATM back in 1992-1993 and checked out of the discussion at that time to move on to greener pastures. The preliminary facts said that ATM would be a poor transport for IP. After all these years, ATM still stinks. It was still designed for a set of applications of decreasing significance (i.e., voice traffic).

Back then people said: "widespread ATM deployment is 4-5 years in the future." Well, we are 4-5 years in the future now, and they still say the same thing. Every day the Internet grows, the likelihood that ATM as a WAN architecture will be successful fades. The only way it was ever going to succeed is if the powerful RBOCs forced it down people's throats. Their ability to do that is weakening as we speak.

fletcher
Fletcher replies:
I see no "of course." Are there some applications which require this level of service in a WAN? Probably. Are there many? Probably not. Is end-to-end reliability and performance more important for the *vast* majority of applications? Yes!
I'll concede that the number of 'surf the net' applications far exceeds the number of real-time systems in the Internet. But I am working on a WAN project that requires real-time data delivery every few seconds across the US to numerous sites. Even though the numbers are few *today*, such systems do exist and are growing in number and complexity. In fact, there are numerous applications and system designs just waiting for the network to support real-time services.

Just because real-time services are in the minority of datagram services does not translate to 'the world should not support real-time services.' If that is the logic used to make decisions, then let's stop funding libraries because the vast majority get their information from television!

Real-time WAN services with concrete .99999+ availability of QoS are one of the growth areas of the next decade, BTW, and are a much different service than providing access so 'Joe&Judy surf-the-net' can pull down yet another file. There are numerous applications for real-time datagram delivery systems. ATM may not be the underlying transport, as mentioned; but there is an emerging market for .99999+ datagram services. The average IP provider may never see this market, but believe me, it exists.

High regards, Tim
Not that I want to beat a dead horse here, but please explain to me how, if you consistently manage to only achieve 70%-80% effective throughput on available bandwidth, you can recoup your investment from subscribers who are expecting more?
I was talking with David Tennenhouse yesterday about why much of the Internet community hates ATM, and he said something that seems very accurate: "Everyone worries about overhead in someone else's layer." It's true: just think of how much TCP/IP overhead we put up with that could be compressed if it were really important. (Not to mention HTTP overhead...) Perhaps more importantly, stepping up fiber bandwidth is a lot easier than improving router speeds. So why does a 15% loss due to ATM really matter?

Tennenhouse, btw, is proof that ATM did not come out of the telecom community alone, as people like to believe. He was part of the U. of Cambridge ring project (started in the late 70s) that used lightweight VCs and fixed-length packets. A number of people familiar with this project ended up at Bellcore, working on optimizing switch design. It's no surprise that they went with something like ATM that's good for switch efficiency but bad for transmission line efficiency. Fortunately, that was probably the right tradeoff to make.

s.
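[The "cell tax" figures thrown around in this thread (15% loss, 70%-80% effective throughput) can be checked with back-of-the-envelope arithmetic. The sketch below assumes plain AAL5 segmentation: an 8-byte trailer, padding to a whole number of 48-byte cell payloads, and a 5-byte header on each 53-byte cell; LLC/SNAP encapsulation, which would add another 8 bytes per packet, is ignored. The function name is illustrative, not from any of the posts.]

```python
import math

def atm_efficiency(ip_bytes: int) -> float:
    """Fraction of ATM line bandwidth that carries IP payload,
    assuming AAL5: 8-byte trailer, padding to 48-byte cell
    payloads, 5-byte header per 53-byte cell."""
    cells = math.ceil((ip_bytes + 8) / 48)  # AAL5 trailer + pad
    return ip_bytes / (cells * 53)          # 53 bytes per cell on the wire

for size in (40, 576, 1500):
    print(f"{size:5d}-byte packet: {atm_efficiency(size):.1%} efficient")
```

A 40-byte TCP ACK fits in one cell (75.5% efficient), a 1500-byte packet needs 32 cells (88.4% efficient), so a mixed traffic stream lands in roughly the 75%-88% range the thread is arguing about.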
participants (4)
- Fletcher Kittredge
- Paul Ferguson
- Steve Steinberg
- Tim Bass