Re: Transaction Based Settlements Encourage Waste (was Re:
vadim wrote:
In any case, nobody was able to collect backbone flow traces for any significant amount of time. A typical Internet load on OC-3 would produce about 7000 flows/sec, or 500 million flows per day. Now, crunching that data to generate bills is going to be fun.
But you're not going to do accounting on backbone links to bill customers; you'd do that at their ingress. Now, if you have customers with OC-3's, then for the time being, I agree, you probably can't bill based on flows... But for customers from dial-up speeds up to DS-3's, the technology is there to do this today. Question is: is it cost feasible to do so?

Sean
___________________________________
Sean Butler, CCIE #3897
IBM Global Services -- OpenNet Support
Phone: 8-631-9809, 813-523-7353
Fax: 8-427-5475 813-878-5475
Internet email: sebutler@us.ibm.com
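[As a back-of-the-envelope illustration of the ingress accounting Sean describes, here is a minimal sketch of flow-based aggregation: packets are keyed by 5-tuple and their bytes totaled per flow, roughly what a NetFlow-style collector does at a customer's ingress. The packet data and record layout are hypothetical, not from the thread.]

```python
from collections import defaultdict

# Hypothetical ingress accounting: aggregate packets into flows by
# 5-tuple and total the bytes per flow for billing.
# (Sketch only -- real collectors consume NetFlow/IPFIX exports.)

def bill_flows(packets):
    """packets: iterable of (src, dst, sport, dport, proto, nbytes)."""
    flows = defaultdict(int)
    for src, dst, sport, dport, proto, nbytes in packets:
        flows[(src, dst, sport, dport, proto)] += nbytes
    return flows

pkts = [
    ("10.0.0.1", "192.0.2.9", 1025, 80, "tcp", 1500),
    ("10.0.0.1", "192.0.2.9", 1025, 80, "tcp", 520),
    ("10.0.0.2", "198.51.100.4", 2000, 25, "tcp", 900),
]
flows = bill_flows(pkts)
print(len(flows))   # 2 distinct flows
print(flows[("10.0.0.1", "192.0.2.9", 1025, 80, "tcp")])  # 2020 bytes
```

[Note the key point of the dispute: this table grows with the number of concurrent flows, which is manageable at a single DS-3 ingress but not across a backbone.]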
Sean Butler wrote:
vadim wrote:
In any case, nobody was able to collect backbone flow traces for any significant amount of time. A typical Internet load on OC-3 would produce about 7000 flows/sec, or 500 million flows per day. Now, crunching that data to generate bills is going to be fun.
But you're not going to do accounting on backbone links to bill customers; you'd do that at their ingress. Now, if you have customers with OC-3's, then for the time being, I agree, you probably can't bill based on flows... But for customers from dial-up speeds up to DS-3's, the technology is there to do this today. Question is: is it cost feasible to do so?
To assess path costs you have to do that in the backbone. The end points (aka ingress points) simply do not have routing information sufficient to recreate the paths from sources and destinations. Actually, even backbones themselves don't have a complete view of network topology - so the per-flow path costs have to be calculated in a distributed fashion. And no, distributing the topology information to ingress points is not going to work, either - that would amount to routing with no aggregation.

Once you realize that the size of the network, and the need to accommodate exponential growth, place constraints on what kinds of computations you can do, a lot of things become pretty obvious. I may sound opinionated on the issue - but what I do is simply apply the Internet's equivalent of the second law of thermodynamics, which effectively rules out perpetual motion without going into specifics on why it is impossible.

The per-flow computation complexity in the Internet is at least the same as per-packet, but it also necessitates gateways keeping inter-packet state. That is an O(N) kind of memory (where N is the number of end-points), and therefore cannot scale (note that current gateways only need O(log N) memory for RIBs, assuming consistent use of aggregation).

--vadim

PS Note that in the last paragraph the implicit assumption is that the number of inter-backbone paths is limited (which makes a limited number of exchange points carry most traffic flows). The reason for that is the flap replication property of exchanges.
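[Vadim's flow-volume figure can be checked directly: at about 7000 flows/sec, an OC-3 produces on the order of 600 million flow records a day, consistent with the "500 mil" cited. A rough sketch of the arithmetic; the 64-byte record size is an assumed figure, not from the thread.]

```python
# Back-of-the-envelope check on the flow-volume claim for an OC-3.
FLOWS_PER_SEC = 7_000          # figure quoted in the thread
SECONDS_PER_DAY = 86_400
flows_per_day = FLOWS_PER_SEC * SECONDS_PER_DAY
print(flows_per_day)           # 604,800,000 -- roughly the "500 mil" cited

# Hypothetical storage cost, assuming 64 bytes per exported flow record.
RECORD_BYTES = 64              # assumption for illustration only
raw_gib_per_day = flows_per_day * RECORD_BYTES / 2**30
print(round(raw_gib_per_day, 1))   # ~36 GiB of raw records per day, per link
```

[That is the daily volume for a single OC-3; billing from backbone traces means crunching this per link, every day, which is the cost Vadim is pointing at.]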
participants (2)
- Sean Butler
- Vadim Antonov