The persistent furor over route filtering does make a depressing point: in general, IP cannot be used to provide reliability or load distribution over the public Internet. It seems that every significant ISP enforces route filtering that goes beyond what is necessary for correctness. The cumulative effect is that the reliability features inherent in IP and dynamic routing protocols cannot be enjoyed by the Routing Underprivileged -- those organizations small enough (or frugal enough) to live with a small IP space. And there is no real, fundamental reason for this. The practice is so common that it rarely gets its fair share of critical analysis. (Just when *will* routers be capable?)
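To make the complaint concrete, here is a minimal sketch of the kind of filter in question. It assumes the common (but by no means universal) policy of dropping any IPv4 announcement longer than /24; the threshold and function names are illustrative, not any particular ISP's configuration:

```python
import ipaddress

# Assumed threshold: many ISPs discard announcements more specific
# than /24, regardless of whether the route itself is valid.
MAX_ACCEPTED_PREFIXLEN = 24

def accepted_by_filter(route: str) -> bool:
    """Return True if an announcement survives a prefix-length filter."""
    return ipaddress.ip_network(route).prefixlen <= MAX_ACCEPTED_PREFIXLEN

# A /16 from a large address holder passes; a small multihomed
# organization announcing its /26 is silently filtered out.
print(accepted_by_filter("10.0.0.0/16"))  # accepted
print(accepted_by_filter("10.0.4.0/26"))  # dropped
```

The point is that the filter keys only on prefix length, not on whether the small organization's announcement is legitimate -- which is exactly why the Routing Underprivileged cannot multihome.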
From my perspective as a developer, widespread long-prefix filtering undermines much of the original work done to develop the Internet. Problems already solved at the IP layer must be solved again at higher layers; we're likely to find ourselves working in a soup of protocols in which a problem is solved well at one layer, or kludged poorly at several others -- and in which we're forced to kludge poorly because of antiquated ideas of propriety or efficiency.
I welcome comments.

---
Mark R. Lindsey, mark@datasys.net
DSS Online Development Group
Voice: 912.241.0607, Fax: 912.241.0190 (US)