On Thu, Aug 13, 2009 at 12:03:24PM -0400, Joe Provo wrote:
> Experience proves otherwise. L3's filtergen is a great counter-example, where the customer-specific import policy dictates sources to believe regardless of what other stuff is in their local mirror. It happily drops prefixes not matching, so it does "make a real difference" WRT customer filtering.
No, Level3's filtergen follows the exact "search path" rules I described previously, which have no real impact for any reasonably sized ISP. For example, say I describe my as-set and aut-num and routes in ALTDB. You can have Level3 restrict the scope of the initial query to ALTDB::AS-MYASSET, and you can have Level3 restrict the search path to ALTDB, but what happens when I have a customer in my as-set who registered their prefixes in RADB? Now you have to open up the scope there, and to RIPE, and ARIN, and Level3, and NTT, and Savvis, etc.

Now say someone comes along and slips an unauthorized route object with my ASN in the origin: attribute into one of these databases. You have no way to prevent that from happening; when you run the query on my as-set/aut-num you're going to get back the superset of my legit routes plus any bogus ones. And this is a good thing, because it's a lot less destructive to have bogus routes in the system than it is to give someone the ability to override legitimate routes with a bogus entry in a "more trusted" db.
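To make that trade-off concrete, here's a toy sketch (all sources, prefixes, and ASNs made up) contrasting the two resolution strategies: union semantics, where a forged object merely adds a bogus entry, versus first-match-by-source-priority, where a forged object in a "more trusted" database silently replaces the legitimate registration:

    # Toy model of the two resolution strategies; all data is made up.
    # Each tuple is (source, prefix, origin) from a route object.
    objects = [
        ("RADB",  "192.0.2.0/24", "AS64496"),  # legitimate registration
        ("ALTDB", "192.0.2.0/24", "AS64511"),  # forged duplicate, wrong origin
    ]

    def union(objs):
        # Believe everything: the forgery adds a bogus entry, but the
        # legitimate registration still survives.
        return {(pfx, origin) for _src, pfx, origin in objs}

    def first_match(objs, priority):
        # Believe only the highest-priority source per prefix: a forgery
        # in a "more trusted" db overrides the real registration outright.
        best = {}
        for src, pfx, origin in objs:
            if pfx not in best or priority.index(src) < priority.index(best[pfx][0]):
                best[pfx] = (src, origin)
        return {(pfx, origin) for pfx, (_src, origin) in best.items()}

    print(union(objects))                           # both entries survive
    print(first_match(objects, ["ALTDB", "RADB"]))  # only the forgery remains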
> I'm not familiar with DTAG's tools, but would be shocked if they were less featureful. For querying other databases, see IRRd's !s syntax, which specifies sources in order. Also see Net::IRR's $whois->sources() method. For tools based on these, I would presume it is up to your implementation or policy expression how you decide the handling of multiple matches. When mentioned, usually the first source which matches is specified as 'the one', which is why search order matters. What other purpose does specifying a search order serve?
This is the server-side search path I talked about; it has nothing to do with any specific client implementation, nor is doing this client-side practical. See page 34 of: http://www.irrd.net/irrd-user.pdf

Again, you can restrict a global query, but this provides very little practical benefit. You could dynamically restrict sources per query when you go to do the !i or !g expansion, but there is no information telling you what to restrict them to, so again no practical benefit. The only thing Level3 adds that isn't part of the stock query syntax is the top-level scope I mentioned above, ALTDB::AS-MYASSET. To support this recursively you would have to run multiple full queries for the full records without server-side expansion, which is not practical for anyone with more than a few hundred routes.
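For reference, the whole flow boils down to a handful of commands from that manual. Here's a minimal sketch (in Python; the mirror host and as-set name are just examples, not a recommendation) of restricting the server-side search path with !s and then doing the !i/!g expansion:

    # Minimal sketch of the server-side query flow, using the !s/!i/!g
    # commands documented in the IRRd manual.  Host and as-set name are
    # examples only.
    import socket

    HOST, PORT = "whois.radb.net", 43    # example IRRd mirror

    def query(f, cmd):
        # Send one command; return its payload, or "" if there was none.
        f.write(cmd + "\n")
        f.flush()
        status = f.readline().strip()
        if status.startswith("A"):       # "A<len>": <len> bytes of data follow
            data = f.read(int(status[1:]))
            f.readline()                 # trailing "C" (success) line
            return data.strip()
        return ""                        # "C" = ok/no data, "D" = not found

    sock = socket.create_connection((HOST, PORT))
    f = sock.makefile("rw")
    f.write("!!\n"); f.flush()           # keep the connection open

    # Restrict the server-side search path: only these sources, in order.
    query(f, "!sALTDB,RADB,RIPE")

    # Recursively expand the as-set (the ",1" flag), then pull the route
    # objects per origin ASN.  The result is the union across all the
    # sources above; nothing tells you which source to believe per member.
    for asn in query(f, "!iAS-MYASSET,1").split():
        prefixes = query(f, "!g" + asn).split()
        print(asn, len(prefixes), "prefixes")

    f.write("!q\n"); f.flush()
    sock.close()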
> If I am running a tool to generate filter lists for my customers, I want to believe my RR, the local RIR, some other RIR that is well run, and then maybe my upstream. Specify that search order and believe the first match. Job done. If you have highly clued downstreams, go the filtergen route and tune source-believability based on customer, or cook up another avenue. There is nothing inherent in the system to prevent this. ...
> Yes, reduced queries and the ability to ignore Bob's Bait and Tackle DB if I know it is part of the "piles of junk databases" you posit will exist. See above.
This doesn't work if your customers have customers. I'm also not aware of anyone running any "bad" databases, or for that matter any databases of lesser security/quality than the "big boys". Short of what RIPE implements because they are the RIR, there is no real security on registrations here, so it doesn't much matter whether the database is Level3's or Bob's Bait and Tackle's. And even given what I consider to be an excessively large list of IRR databases today, from the standpoint of keeping good records I'd be hard pressed to name one on the list whose data I should trust any less than, say, Level3's.
> Centralized scales better than distributed? Quick, call the 80s - we need HOSTS.TXT back.
A silly argument. In this case, hosts.txt is equivalent to an ISP having a human manually process an e-mail from a customer, add it to a prefix-list on a router, and then manually e-mail their upstream or peer ISPs to have them update their prefix-lists, etc. In many cases centralized (or at least, restricted to some reasonably sized set; obviously nobody is proposing running the entire Internet on a single server run by a single entity) has much better security properties.

As far as scale goes, you're talking about a pretty simple database of pretty simple objects here. There is probably more overhead that goes into maintaining the distributed nature of the db than there is actual work generating prefix-lists. :)
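For a sense of just how simple that end product is, here's a toy generator for the per-customer output (the Cisco-style prefix-list syntax and all names are purely illustrative):

    # Toy generator for the "actual work" at the end of the pipeline:
    # turning a customer's resolved prefixes into router config.  The
    # Cisco-style syntax and the names here are made up for illustration.
    def prefix_list(name, prefixes):
        lines = ["ip prefix-list %s seq %d permit %s" % (name, i * 5, p)
                 for i, p in enumerate(sorted(prefixes), start=1)]
        lines.append("ip prefix-list %s seq %d deny 0.0.0.0/0 le 32"
                     % (name, (len(prefixes) + 1) * 5))
        return "\n".join(lines)

    print(prefix_list("CUST-AS64496-IN", ["192.0.2.0/24", "198.51.100.0/24"]))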
> > There is no reason that this process needs to be politicized, or cost anyone any money to use.
> Anytime you go down the road of advocating authority centralization, you'll start getting people politicizing the process [cf. ICANN, alternate roots, all the random frothing-at-the-mouth-until-they-fall-over types]. I rather think that can be avoided by properly embracing the distributed databases that do indeed function. Some of it can be side-stepped with RIR-based IRRs, and decently distributed down to LIRs, but we all know ARIN is still playing catchup here so it doesn't help our sphere in the near term.
A reasonable amount of authority centralization in this case is at the RIR level, particularly if it adds security mechanisms that provide some level of authorization over who is registering which prefixes. There is no reason that I should need to run my own database if the system were designed properly.
> > Again, we've made a horrible system here.
> I think you've misspelled 'front end'. The system certainly seems to function, and the entire intent was that SPs would build their own customer-facing tools as well as internal tools. Seems we've fallen down in that regard, but irrpt [even if in PHP :-P] and the revival of IRRtoolset are indications that folks are still interested in building the internal widgets. In general I think you'd agree that the 'back end' of most all service providers did not keep pace with the growth and commoditization of service.
The databases are full of garbage data, a large portion of the networks who do use them have as-set objects which expand to be completely worthless (either blocking all their customers or allowing the entire Internet), and there is a significant percentage of the BGP-speaking Internet who can't figure it out at all (including a lot of otherwise theoretically competent tier 1s). Even among the people who use it, a LOT of it only works because of widespread and often unauthorized proxy registration, which IMHO is even more evil than not having it at all.

I'm by no means advocating the hosts.txt approach; clearly we NEED a scalable automated system for managing authorized prefixes. But by every measurable standard I can come up with, the end result is a festering pile of crap. I really don't think you can completely dismiss the back end (both implementation and design) as part of those problems.

-- 
Richard A Steenbergen <ras@e-gerbil.net>       http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)