On 3/8/11 2:32 PM, Valdis.Kletnieks@vt.edu wrote:
> On Tue, 08 Mar 2011 07:37:27 EST, Steven Bellovin said:
>> No. It was rejected because routers tended to melt down into quivering puddles of silicon from seeing many packets with IP options set -- a fast trip to the slow path. It also requires just as many changes to applications and DNS content, and about as large an addressing plan change as v6. There were more reasons, but they escape me at the moment.
>
> Steve, you of all people should remember the other big reason why: pathalias tended to do Very Bad Things like violating the Principle of Least Surprise if there were two distinct nodes both called 'turtlevax' or whatever. That, and if you think BGP convergence sucks, imagine trying to run pathalias for a net the size of the current Internet. :)
No, no. That was Mel Pleasant and me, the RABID REROUTERs. And people weren't all THAT surprised.

But beyond that, I've actually done some analysis on doing nearly just that. If you think about it, there are about 300,000 entries, and that is not beyond the capacity of an O(n log n) algorithm like, for instance, Dijkstra in a modern world. And before you say, “Ew! SPF for interdomain”, we had precisely the same debate for IGPs back in 1990 or so. The only big difference is that exposing policy in SPF isn't that desirable.

And quite frankly the idea has gone around a few times; the one that remains in my head is TRIAD, which was work done by Gritter and Cheriton.

Eliot
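For a sense of scale, here is a minimal sketch (not Eliot's actual analysis) of the kind of computation being described: Dijkstra's algorithm with a binary heap runs in O((V + E) log V), so a graph on the order of 300,000 entries with a sparse edge set is quite tractable on ordinary hardware. The graph representation, node names, and weights below are illustrative assumptions, not anything taken from the thread.

    import heapq

    def dijkstra(graph, source):
        """Shortest-path distances from `source` over a dict-of-dicts graph,
        e.g. graph = {'A': {'B': 1.0}, 'B': {'C': 2.0}, 'C': {}}."""
        dist = {source: 0.0}
        heap = [(0.0, source)]            # (distance so far, node)
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                  # stale heap entry; u already settled via a shorter path
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Toy usage: three "domains" with weighted links.
    if __name__ == "__main__":
        g = {'A': {'B': 1.0, 'C': 4.0}, 'B': {'C': 1.5}, 'C': {}}
        print(dijkstra(g, 'A'))           # {'A': 0.0, 'B': 1.0, 'C': 2.5}

Whether the link weights encode anything policy-like is exactly the sticking point Eliot raises: SPF computes over whatever metric you expose, which is why "exposing policy in SPF" is the uncomfortable part, not the raw computation.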