Re: using IRR tools for BGP route filtering
I believe every major backbone has suffered a multi-hour service disruption due to another provider announcing blackhole routes. The most recent one was Sprint a couple of weeks ago, when another major provider re-announced part of their network in Chicago. It's not just a risk from "small" providers like 7007. Most of the widely distributed bogus announcements pass through large providers like Sprint and UUNET. Most bogus announcements only affect a single network customer, like the FCC web site, so some people just assume it's the usual Internet flakiness when they can't reach a network.

It's strange to see carriers whose management wouldn't think of ignoring the LERG believe it's OK to risk extended service disruptions by announcing and listening to unfiltered, unauthenticated routing information. Are engineers keeping their managers in the dark? Does management not know there is a potential solution to the problem? Or does their management really think it's OK that their customers are at risk of losing service at any time due to unfiltered routes? When you speak with your Cisco sales rep, do you tell them one of the requirements is being able to filter the entire route table with multiple peers?
On Tue, Jun 20, 2000 at 01:29:04AM -0700, Sean Donelan wrote:
Are engineers keeping their managers in the dark? Does management not know there is a potential solution to the problem? Or does their management really think it's OK that their customers are at risk of losing service at any time due to unfiltered routes? When you speak with your Cisco sales rep, do you tell them one of the requirements is being able to filter the entire route table with multiple peers?
I think that part of the problem is that not all of the managers are aware of all the risks here, because they have not seen or heard of any problems related to not using a routing registry. Imagine the meeting that gte.net had after their domain name was transferred to some other DNS servers (or so I heard). I suspect a number of them are either unaware of the amount of trust that folks place in one another, or are just totally oblivious to the fact that there are so many ways that things can go wrong. I also suspect there are cases where the engineers don't have the ability to create policies, such as their own registries, or to configure off of an existing IRR, or don't have the time to deal with supporting and configuring all of the routers off of the tools (I mean both internal support and external support).

- Jared

--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
END OF LINE |
It's strange to see carriers whose management wouldn't think of ignoring the LERG believe it's OK to risk extended service disruptions by announcing and listening to unfiltered, unauthenticated routing information.
Are engineers keeping their managers in the dark? Does management not know there is a potential solution to the problem?
enlighten us. what solution? randy
Sean Donelan is rumoured to have written:
* Are engineers keeping their managers in the dark? Does management
* not know there is a potential solution to the problem? Or does
* their management really think it's OK that their customers are at risk
* of losing service at any time due to unfiltered routes? When you
* speak with your Cisco sales rep, do you tell them one of the requirements
* is being able to filter the entire route table with multiple peers?

and Jared Mauch is rumoured to have written:
* I think that some of the problem is that not all of the managers
* are aware of all the risks related to this, because they have not seen
* or heard of any problems related to not using a routing registry.
*
* I suspect in cases where the engineers don't have the ability to
* create policies such as their own registries, or configure off of an
* existing IRR, or don't have the time to deal with supporting, or
* configuration of all the routers off of the tools. (i mean both
* internal support and external support).

ok. where to start?

1. last i checked, the biggest problems in BGP for vendors were dealing with flapping, the explosion of multiple paths, and the ever-increasing number of peers per box. filtering prefixes, technology-wise, wasn't an issue. [i guess there was a problem with REALLY huge configs a few years ago, but that wasn't a BGP prefix filtering problem, per se]

2. most large isps aren't going to be happy about configuring their routers off of another provider's registry. so, every provider would in theory need to run their own registry. problems:

a. the independent registries generally store a local copy of the other registries for config use. this was fine when there were a few [like ripe, mci, ans] but not so fine when N grows unbounded. coordination could very quickly become a nightmare.

b. who "owns" the local registry/config generator in the ISP? [noc, install IS, neteng, security?] when it's interfacing with customer databases, responsible for configuring the entire network, affects peering, and is a publicly accessible server, everyone is gonna want a piece of it. bureaucracy is great for ensuring nothing gets done. that's assuming the resources [money, people, time] are available to create & maintain this database server anyway.

c. uh, who is responsible for "bad" data? remember the 0.0.0.0/0 object? [followup: why was it put in?] how is this any different than just announcing bad data? is someone going to verify by hand all of the data registered? since there is no authoritative 1-to-1 mapping between ownership and routing, how can it truly be verified?

3. given variances in systems, there are going to be variances in propagation. remember when ans only updated their router filters twice a week, but mci was updating once a day? will your customers hold _you_ responsible if joebob isp decides to only update once a week?

----------

it's much easier to attack this problem at the edge, as was pointed out while i was composing this. verifying and changing filters per customer is really much better than verifying and changing filters *per peer*. [per week, day, hour, what?]

build in safeguards in the peering agreements, if you must. at some point, i believe it is cheaper [in terms of resources, politics, and network flexibility] to trust peers to the extent one can [in either preventing or resolving issues]. and if you can't trust 'em at all? maybe one shouldn't peer with them. oh, wait. that's another can of worms. but hey, the net changes so fast, maybe they'll be bought, sold, or vanish next week.

ymmv.

_k
the problem is that routers will not run acls of the size needed to filter large peers were they to register. so why should i whine at them to register. there used to be a provider which configured their routers to statically route based on the registry. that provider is gone. randy
Randy Bush is rumoured to have written:
* the problem is that routers will not run acls of the size needed to filter
* large peers were they to register. so why should i whine at them to
* register.

whine at who? the large peers? they should register so that when one notices something odd, one can query to see what it _should_ be. they should register, and maintain their registrations, so that in an ideal world, their announcements would match what they have registered, and be nicely aggregated, too!

i emphatically DO NOT think that large providers should filter other peers. i think the large providers should filter their own announcements, by carefully verifying what a downstream wishes to announce before accepting it, filtering the customer announcements, and aggregating their announcements to peers. i think it's silly to try and regulate the world from one's own corner. regulate your corner, and encourage others to do the same. i don't care if said encouragement is by tacit agreement, or bound up in legalese in peering agreements.

* there used to be a provider which configured their routers to statically
* route based on the registry. that provider is gone.

a lot of things are gone. but then, was it really their routing policies that killed them, in the end?

_k
On Wed, Jun 21, 2000 at 03:29:19PM +0100, Randy Bush wrote:
the problem is that routers will not run acls of the size needed to filter large peers were they to register. so why should i whine at them to register.
AS Path filtering would be much smaller than prefix based filtering. Perhaps small enough to work on existing routers. These filters could also be built using registry data, from the ASes that make up providers' as-macros. Not a silver bullet, but probably significantly better than doing nothing. Austin
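A rough sketch of what Austin describes, generating the filter in Python rather than on a router: given the member ASes of a provider's as-macro, emit Cisco-style `ip as-path access-list` lines permitting only routes originated by those ASes. The ASNs and membership below are invented stand-ins; a real deployment would expand the as-macro from live IRR data with a tool such as RtConfig.

```python
# Sketch: build a Cisco-style as-path access-list from the member ASes of a
# provider's as-macro. The membership set here is a hard-coded stand-in for
# an IRR as-macro expansion.

def as_path_filter(name, member_ases):
    """Emit as-path access-list lines permitting paths that originate in member_ases."""
    lines = []
    for asn in sorted(member_ases):
        # _<asn>$ matches routes whose AS path ends in (i.e. originates from) asn
        lines.append(f"ip as-path access-list {name} permit _{asn}$")
    lines.append(f"ip as-path access-list {name} deny .*")
    return lines

# hypothetical membership of an as-macro such as AS-EXAMPLE
members = {64496, 64497, 64511}
for line in as_path_filter(100, members):
    print(line)
```

The point of the sketch is scale: the filter grows with the number of member ASes, not the number of prefixes they originate, which is why it might fit on routers of the era where full prefix filters would not.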
2. most large isps aren't going to be happy about configuring their routers off of another provider's registry. so, every provider would in theory need to run their own registry.
A fine idea. If I may be permitted to flog a dead horse, the "right" way is to encode RPSL data in the in-addr.arpa or ip6.int tree.
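For concreteness, a small sketch of the mapping this proposal relies on: each octet-aligned IPv4 prefix has a natural home in the in-addr.arpa tree, under which RPSL data could hypothetically be published. The record type and encoding are left unspecified in the thread; this only computes the delegation name.

```python
# Sketch of the in-addr.arpa mapping behind bmanning's proposal. Where and
# how the RPSL data itself would be stored (e.g. in TXT records) is an
# assumption the thread never pins down; this computes only the name.
import ipaddress

def in_addr_name(prefix):
    """Return the in-addr.arpa name covering an octet-aligned IPv4 prefix."""
    net = ipaddress.ip_network(prefix)
    if net.prefixlen % 8 != 0:
        raise ValueError("only octet-aligned prefixes map cleanly to in-addr.arpa")
    octets = str(net.network_address).split(".")[: net.prefixlen // 8]
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(in_addr_name("192.0.2.0/24"))   # 2.0.192.in-addr.arpa
```

This also illustrates bmanning's delegation argument below: whoever holds the reverse delegation for a prefix is the party who manages the routing data registered under it.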
problems:
a. the independent registries generally store a local copy of the other registries for config use. this was fine when there were a few [like ripe, mci, ans] but not so fine when N grows unbounded. coordination could very quickly become a nightmare.
Not a real problem. You manage the data in the prefixes that are delegated to you. Coordination is managed as the DNS is managed.
b. who "owns" the local registry/config generator in the ISP? [noc, install IS, neteng, security?] when it's interfacing with customer databases, responsible for configuring the entire network, affects peering, and is a publicly accessible server, everyone is gonna want a piece of it. bureaucracy is great for ensuring nothing gets done. that's assuming the resources [money, people, time] are available to create & maintain this database server anyway.
The folks doing the DNS. Coordinating w/ the other interested parties in the company.
c. uh, who is responsible for "bad" data? remember the 0.0.0.0/0 object? [followup: why was it put in?] how is this any different than just announcing bad data? is someone going to verify by hand all of the data registered? since there is no authoritative 1 to 1 mapping between ownership and routing, how can it truly be verified?
Bad data should only occur if you intentionally spoof the DNS. "Interesting" data in the existing IRR came from a small handful of engineers who wanted to point out weaknesses in the RA policies. If you place the RPSL constructs in the delegation tree, it's much easier to track the mapping of delegation & announcement. (There is no ownership here.)
3. given variances in systems, there are going to be variances in propagation. remember when ans only updated their router filters twice a week, but mci was updating once a day? will your customers hold _you_ responsible if joebob isp decides to only update once a week?
----------
it's much easier to attack this problem at the edge, as was pointed out while i was composing this. verifying and changing filters per customer is really much better than verifying and changing filters *per peer*. [per week, day, hour, what?]
build in safeguards in the peering agreements, if you must. at some point, i believe it is cheaper [in terms of resources, politics, and network flexibility] to trust peers to the extent one can [in either preventing or resolving issues].
and if you can't trust 'em at all? maybe one shouldn't peer with them. oh, wait. that's another can of worms. but hey, the net changes so fast, maybe they'll be bought, sold, or vanish next week.
ymmv.
_k
bmanning@vacation.karoshi.com is rumoured to have written:
* A fine idea. If I may be permitted to flog a dead horse,
* the "right" way is to encode RPSL data in the in-addr.arpa or
* ip6.int tree.

flog away. it isn't entirely apropos, tho- sean wrote:

* Are engineers keeping their managers in the dark? Does management
* not know there is a potential solution to the problem?

imho, a dead horse doesn't really fall into the category of potential solutions to the immediate problem. i was addressing the problems i saw with the resources currently at hand- those resources presumably being what sean was considering when he asked if engineers were keeping their managers in the dark regarding potential solutions.

of course, it could all be a conspiracy. but i haven't received my conspirators' newsletter yet, if so.

_k
jhsu@rathe.mur.com (jhsu@rathe.mur.com) on June 21:
problems:
a. the independent registries generally store a local copy of the other registries for config use. this was fine when there were a few [like ripe, mci, ans] but not so fine when N grows unbounded. coordination could very quickly become a nightmare.
This problem is actually solved. Please see RFC 2769. There is one implementation of this already, in ISI's BIRD server. The IRRd folks, the de facto routing registry server, are also working to implement this. I heard rumors that RIPE will also implement this spec. The protocol lets you auto-discover new registries, and of course you can choose what to do with the newly discovered registries. For example, you can establish a registry exchange peering with radb and set auto-discover, and each time radb (or, recursively, someone they exchange with) starts peering with a new repository, you can start receiving their data as well.

Cengiz

--
Cengiz Alaettinoglu Information Sciences Institute
http://www.isi.edu/~cengiz University of Southern California
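The transitive discovery Cengiz describes amounts to a walk over the graph of registry-exchange peerings: you configure one peer, and everything it (recursively) exchanges with becomes reachable. A minimal sketch with an invented peering map; RFC 2769 specifies the actual protocol.

```python
# Sketch of transitive registry discovery: breadth-first walk over a graph
# of repository peerings. The peering relationships below are invented for
# illustration only.
from collections import deque

def discover(start, peers_of):
    """Return every repository reachable through registry-exchange peerings."""
    seen, queue = {start}, deque([start])
    while queue:
        repo = queue.popleft()
        for peer in peers_of.get(repo, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen

# hypothetical peering relationships
peers = {
    "radb": ["ripe", "verio"],
    "ripe": ["arin"],
}
print(sorted(discover("radb", peers)))  # ['arin', 'radb', 'ripe', 'verio']
```

This is also the shape of Austin's objection below: with auto-discovery on, the reachable set grows with the whole exchange graph, not just with the peers you chose.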
On Wed, Jun 21, 2000 at 10:25:27AM -0700, cengiz@isi.edu wrote:
jhsu@rathe.mur.com (jhsu@rathe.mur.com) on June 21:
problems:
a. the independent registries generally store a local copy of the other registries for config use. this was fine when there were a few [like ripe, mci, ans] but not so fine when N grows unbounded. coordination could very quickly become a nightmare.
This problem is actually solved. Please see RFC 2769. There is one implementation of this already, in ISI's BIRD server. The IRRd folks, the de facto routing registry server, are also working to implement this. I heard rumors that RIPE will also implement this spec.
But they are reinventing the wheel. Why not use the preexisting functionality built into the rwhois or dns protocols? The querying mechanism for IRRd is not documented in any RFC and returns a bare 'F' when errors occur. The source code is by and large sparsely documented, and in the case of the RAToolSet, it won't compile cleanly on most platforms.
The protocol lets you auto-discover new registries, and of course you can choose what to do with the newly discovered registries. For example, you can establish a registry exchange peering with radb and set auto-discover, and each time radb (or, recursively, someone they exchange with) starts peering with a new repository, you can start receiving their data as well.
But I don't _want_ everyone's data crammed in my database. I want a referral from a central database that points me to one of several locations for authoritative data. Do I want to cache that data? Maybe. Maybe not. It would not be (as) difficult to define an rwhois schema that has the desired functionality plus a protocol extension or two to account for any extended behavior, such as caching. Please don't take this as bashing on the authors of the aforementioned tools. I realize that they have been working hard and diligently. Austin
On Wed, Jun 21, 2000 at 10:25:27AM -0700, cengiz@isi.edu wrote:
jhsu@rathe.mur.com (jhsu@rathe.mur.com) on June 21:
problems:
a. the independent registries generally store a local copy of the other registries for config use. this was fine when there were a few [like ripe, mci, ans] but not so fine when N grows unbounded. coordination could very quickly become a nightmare.
This problem is actually solved. Please see RFC 2769. There is one implementation of this already, in ISI's BIRD server. The IRRd folks, the de facto routing registry server, are also working to implement this. I heard rumors that RIPE will also implement this spec.
But they are reinventing the wheel. Why not use the preexisting functionality built into the rwhois or dns protocols?
Those protocols could certainly have been utilized, and the community did not ignore them. In fact, the pros and cons were vigorously debated at NANOG and the IETF. In the end the community decided the best way to go was RFC 2769.
The querying mechanism for IRRd is not documented in any RFC and returns a bare 'F' when errors occur. The source code is by and large sparsely documented, and in the case of the RAToolSet, it won't compile cleanly on most platforms.
Just in case you were not aware, we do have an on-line IRRd users manual at www.irrd.net. We do our best to keep the manual updated, and it provides detailed information for those who wish to use it. IRRd started out as a machine interface, and so were born the 'F', 'D', 'C', etc. return codes. You will find a section in the manual which explains the return codes. The ripewhois interface (designed for humans, mostly) does return English sentences. I agree with you that the internal source code documentation is sparse. To address this issue (and others) we are currently conducting a full code review and clean-up, which should help with this problem.
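To make the return codes concrete, here is a small, unofficial sketch of classifying IRRd machine-interface replies as the thread describes them: 'A<len>' announces that data follows, 'C' signals success, 'D' means key not found, and 'F' signals an error. The canned response stands in for a live whois session; the IRRd manual at www.irrd.net is the authoritative reference.

```python
# Unofficial sketch of parsing IRRd machine-interface reply codes, per the
# discussion above. A canned response stands in for a live server session.

def parse_reply(lines):
    """Classify a raw IRRd response; return a (status, payload) pair."""
    first = lines[0]
    if first.startswith("A"):          # 'A<len>': data follows, closed by a 'C' line
        return ("ok", "\n".join(lines[1:-1]))
    if first == "C":
        return ("ok", "")              # success, no data
    if first == "D":
        return ("not found", "")
    if first.startswith("F"):
        return ("error", first[1:].strip())
    return ("unknown", first)

# canned example of a successful query response
canned = ["A13", "192.0.2.0/24", "C"]
print(parse_reply(canned))   # ('ok', '192.0.2.0/24')
```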
The protocol lets you auto-discover new registries, and of course you can choose what to do with the newly discovered registries. For example, you can establish a registry exchange peering with radb and set auto-discover, and each time radb (or, recursively, someone they exchange with) starts peering with a new repository, you can start receiving their data as well.
But I don't _want_ everyone's data crammed in my database. I want a referral from a central database that points me to one of several locations for authoritative data. Do I want to cache that data? Maybe. Maybe not.
Merit is already taking steps to help coordinate the distributed DB effort. We have a listing of remote registries with associated information at http://www.radb.net/docs/list.html. And when we implement RFC 2769 we will have 'repository' objects corresponding to each remote DB that we mirror. This information will give the community a DB source to discover new DBs and to utilize them (or not) as they wish. We view our effort as a first step in assisting the community going forward. It remains to be seen what will end up being the best method to provide this information.
It would not be (as) difficult to define an rwhois schema that has the desired functionality plus a protocol extension or two to account for any extended behavior, such as caching. Please don't take this as bashing on the authors of the aforementioned tools. I realize that they have been working hard and diligently.
Thank you for the kind remark. --jerry
Austin
participants (9)
- Austin Schutz
- bmanning@vacation.karoshi.com
- cengiz@isi.edu
- gerald@merit.edu
- Jared Mauch
- jhsu@mur.com
- jhsu@rathe.mur.com
- Randy Bush
- Sean Donelan