Sean Donelan is rumoured to have written:
* Are engineers keeping their managers in the dark? Does management
* not know there is a potential solution to the problem? Or does
* their management really think it's OK their customers are at risk
* of losing service at any time due to unfiltered routes? When you
* speak with your Cisco sales rep, do you tell them one of the
* requirements is being able to filter the entire route table with
* multiple peers?

and Jared Mauch is rumoured to have written:
* I think that some of the problem is that not all of the managers
* are aware of all the risks related to this, because they have not
* seen or heard of any problems related to not using a routing
* registry.
*
* I suspect in cases where the engineers don't have the ability to
* create policies such as their own registries, or configure off of
* an existing IRR, or don't have the time to deal with supporting or
* configuring all the routers off of the tools (i mean both internal
* support and external support).

ok. where to start?

1. last i checked, the biggest problems in BGP for vendors were dealing
with flapping, the explosion of multiple paths, and the ever-increasing
number of peers per box. filtering prefixes, technology-wise, wasn't an
issue. [i guess there was a problem with REALLY huge configs a few
years ago, but that wasn't a BGP prefix filtering problem, per se]

2. most large isps aren't going to be happy about config'ing their
routers off of another provider's registry. so, every provider would in
theory need to run their own registry. problems:

   a. the independent registries generally store a local copy of the
   other registries for config use. this was fine when there were a few
   [like ripe, mci, ans], but not so fine when N grows unbounded.
   co-ordination could very quickly become a nightmare.

   b. who "owns" the local registry/config generator in the ISP? [noc,
   install IS, neteng, security?]
   when it's interfacing with customer databases, responsible for
   config'ing the entire network, affects peering, and is a publicly
   accessible server, everyone is gonna want a piece of it. bureaucracy
   is great for ensuring nothing gets done. that's assuming the
   resources [money, people, time] are available to create & maintain
   this database server anyway.

   c. uh, who is responsible for "bad" data? remember the 0.0.0.0/0
   object? [followup: why was it put in?] how is this any different
   from just announcing bad data? is someone going to verify by hand
   all of the data registered? since there is no authoritative
   one-to-one mapping between ownership and routing, how can it truly
   be verified?

3. given variances in systems, there are going to be variances in
propagation. remember when ans only updated their router filters twice
a week, but mci was updating once a day? will your customers hold _you_
responsible if joebob isp decides to only update once a week?

----------

it's much easier to attack this problem at the edge, as was pointed out
while i was composing this. verifying and changing filters per customer
is really much better than verifying and changing filters *per peer*.
[per week, day, hour, what?] build in safeguards in the peering
agreements, if you must.

at some point, i believe it is cheaper [in terms of resources,
politics, and network flexibility] to trust peers to the extent one can
[in either preventing or resolving issues]. and if you can't trust 'em
at all? maybe one shouldn't peer with them. oh, wait. that's another
can of worms. but hey, the net changes so fast, maybe they'll be
bought, sold, or vanish next week. ymmv.

_k
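[p.s. to make the "filter at the edge, per customer" point concrete,
here's a minimal sketch of what a per-customer filter generator might
look like. the customer names, route data, and IOS-style prefix-list
syntax are all illustrative assumptions, not anyone's actual tooling;
the point is just that the input is a short list of routes the customer
registered with *you*, not the whole route table of every peer.]

```python
# Hypothetical sketch: build an inbound prefix filter for one customer
# from the routes that customer registered with us. All names and the
# IOS-style "ip prefix-list" syntax here are illustrative assumptions.

def prefix_list_for(customer: str, routes: list[str]) -> list[str]:
    """Emit config lines permitting only the customer's registered
    routes, ending with an explicit deny for everything else."""
    name = f"cust-{customer}-in"
    lines = [f"! inbound filter for {customer}"]
    seq = 0
    for prefix in routes:
        seq += 5          # leave gaps so entries can be added later
        lines.append(f"ip prefix-list {name} seq {seq} permit {prefix}")
    seq += 5
    lines.append(f"ip prefix-list {name} seq {seq} deny 0.0.0.0/0 le 32")
    return lines

# example run against a made-up customer's registered routes
for line in prefix_list_for("joebob-isp",
                            ["192.0.2.0/24", "198.51.100.0/24"]):
    print(line)
```

verifying and regenerating *that* when joebob adds a /24 is a small,
local operation; verifying a full per-peer filter against N registries
of unknown freshness is not.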