Re: Verio Decides what parts of the internet to drop
James Smith <jsmith@dxstorm.com> wrote:
Based on past experience, I would say that the big backbone providers shouldn't do any filtering at all. Then the lower tiers can do all the filtering they want and still rely on default routing to send packets to the backbone. It may not be the prettiest way to route traffic, but it would allow smaller ISPs to filter if they cannot afford to buy bigger equipment to hold all the routes. Since the tier-1 guys are the glue of the Internet, they should be required to take everyone's routes.
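A minimal sketch of the fallback James describes, in Python using only the standard ipaddress module; the route table entries and next-hop names are hypothetical, purely to show why a filtered-down table plus a default route still forwards everything:

```python
# Sketch of longest-prefix-match with a default route: a small ISP's
# heavily filtered table still delivers all traffic, because
# 0.0.0.0/0 points at the backbone upstream.
import ipaddress

# Hypothetical filtered table: a few specifics plus a default route.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "local-peer",
    ipaddress.ip_network("0.0.0.0/0"): "backbone-upstream",
}

def next_hop(dst: str) -> str:
    """Pick the longest matching prefix; the default always matches."""
    addr = ipaddress.ip_address(dst)
    best = max((n for n in ROUTES if addr in n), key=lambda n: n.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))      # local-peer (matches the /16)
print(next_hop("198.51.100.7"))  # backbone-upstream (default route)
```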
There are numerous instances where that sort of policy would have blown up large chunks of the net. It's already happened. Part of the problem is that the Tier-1 guys can't buy bigger equipment to hold all the routes, either.

When Sprint started this sort of filtering in 206.*, I yelled and screamed, thinking it was foolish. History has proven me wrong. Without it, we'd be at route announcement levels that would blow up the available backbone hardware. Plus, without that sort of selective filtering, accidents can kill things right and left.

I am not sure whether the danger in opening up the B space for /17 blocks is particularly bad, but lacking a single consistent policy body with sufficient clue about both the Tier-1 backbone issues and the address allocation issues, it's hard to fault any given ISP for insisting on /16s in B space.

-george william herbert
gherbert@crl.com
Disclaimer: I am a CRL end user, not employee, and speak for myself only.
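A minimal sketch of the kind of prefix-length filter under discussion, assuming Python's standard ipaddress module; the function name, cutoff parameter, and sample announcements are hypothetical, not any particular provider's actual policy:

```python
# Sketch of a Sprint-style prefix-length filter for classful B space:
# accept announcements there only at /16 or shorter.
import ipaddress

# Classful B space: 128.0.0.0 through 191.255.255.255 == 128.0.0.0/2
CLASS_B = ipaddress.ip_network("128.0.0.0/2")

def accept_announcement(prefix: str, b_space_cutoff: int = 16) -> bool:
    """Return True if a BGP announcement would pass the filter.

    Prefixes inside classful B space pass only at /16 or shorter;
    everything outside B space is left alone in this sketch.
    """
    net = ipaddress.ip_network(prefix)
    if net.subnet_of(CLASS_B):
        return net.prefixlen <= b_space_cutoff
    return True  # other address space: no filtering here

for p in ["130.1.0.0/16", "130.1.0.0/17", "192.0.2.0/24"]:
    print(p, "accepted" if accept_announcement(p) else "filtered")
```

Raising the cutoff to 17 is exactly the "opening up the B space for /17 blocks" question: one parameter change, with the table-growth consequences debated below.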
George Herbert <gherbert@crl.com> wrote:
I am not sure whether the danger in opening up the B space for /17 blocks is particularly bad, but lacking a single consistent policy body with sufficient clue about both the Tier-1 backbone issues and the address allocation issues, it's hard to fault any given ISP for insisting on /16s in B space.
Sounds good, but what exactly does that mean? Does any end network capable of justifying a /24 then get a routable chunk, thus blowing up the tables? What if you could do it based upon traffic generation? That would be difficult to verify, and the definition of a 'large' amount of traffic is ever changing. And if we say that a /20 is a sufficiently large amount of space to get a routable chunk, then they would be able to get it from ARIN anyway, and we're back to square one.

In the far term, as space becomes scarce, we will need to find a solution to wasted B space, but that is several years out. Perhaps by that time routers will have so much memory and CPU as to make an extra ~4 million possible routes negligible.

Austin
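As a quick check of Austin's back-of-the-envelope figure, a sketch of the arithmetic in Python, assuming the classful B space of 128.0.0.0/2 and a worst case of one route per /24:

```python
# Where the "~4 million possible routes" figure comes from:
# worst case of one announcement per /24 across classful B space.
b_space_addresses = 2 ** (32 - 2)   # 128.0.0.0/2 spans 2^30 addresses
addresses_per_24 = 2 ** (32 - 24)   # a /24 holds 2^8 = 256 addresses
possible_24_routes = b_space_addresses // addresses_per_24
print(possible_24_routes)           # 4194304, i.e. roughly 4 million
```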