Not that I want to beat a dead horse here, but please explain to me how, if you consistently manage to only achieve 70%-80% effective throughput on available bandwidth, you can recoup your investment from subscribers who are expecting more?
I was talking with David Tennenhouse yesterday about why much of the Internet community hates ATM, and he said something that seems very accurate: "Everyone worries about overhead in someone else's layer." It's true: just think of how much TCP/IP overhead we put up with that could be compressed if it were really important. (Not to mention HTTP overhead...) Perhaps more importantly, stepping up fiber bandwidth is a lot easier than improving router speeds. So why does a 15% loss due to ATM really matter? (Rough arithmetic below.)

Tennenhouse, btw, is proof that ATM did not come out of the telecom community alone, as people like to believe. He was part of the U. of Cambridge Ring project (started in the late 70s) that used lightweight VCs and fixed-length packets. A number of people familiar with this project ended up at Bellcore, working on optimizing switch design. It's no surprise that they went with something like ATM that's good for switch efficiency but bad for transmission-line efficiency. Fortunately, that was probably the right tradeoff to make.

s.
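For what it's worth, here is a rough back-of-the-envelope sketch (mine, not from the post above) of where figures like a 15% cell tax or 70-80% effective throughput can come from, assuming IP carried over AAL5 on standard 53-byte cells: a 5-byte header and 48-byte payload per cell, an 8-byte AAL5 trailer, and padding out to a whole number of cells.

    import math

    def atm_efficiency(pdu_bytes):
        """Fraction of raw link bandwidth carrying user data for one AAL5 PDU."""
        payload = pdu_bytes + 8              # AAL5 trailer
        cells = math.ceil(payload / 48)      # pad out to whole 48-byte cell payloads
        return pdu_bytes / (cells * 53)      # each cell costs 53 bytes on the wire

    for size in (40, 576, 1500):             # common IP datagram sizes
        print(f"{size:>5}-byte packet: {atm_efficiency(size):.1%} efficient")

Under these assumptions, large packets come out around 85-90% efficient, while small ones like 40-byte TCP ACKs drop to roughly 75%, which is why quoted ATM overhead numbers vary so widely with the traffic mix.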