(from earlier randy)
you just assumed that the transitive closure of everybody's cones implement and propagate count. ain't gonna happen.
Well, I was thinking that you can survey your customers to learn their approximate inbound prefix count, and implement an inbound max-prefix limit on them from that (ideally you're already doing that). You can figure out your outbound count in a similar fashion. In either case you're not setting a limit that's 1% larger than the actual number; you're padding the number by at least 20-40% for operational overhead reasons. Even a large ISP today is sending less than 100k prefixes when the peer isn't asking for 'full routes'. So, I'd imagine you bucket your customers as:

  default only           - limit 10
  customer prefixes only - limit +30% of your customer route set
  full transit           - limit +20% of the current full table

(yes, you may have more buckets than me, meh) and those are good starting points. If you keep these bucketed you can just ratchet up the limits as time requires.

The prefix-limits (in or out) aren't there to stop jim-isp from sending 2 of jane-isp's routes, they're there to keep jim-isp from making a bad situation very bad. You (ideally!) have prefix-lists to limit jim from sending jane's routes.

On Sat, Sep 2, 2017 at 4:16 AM, Job Snijders <job@instituut.net> wrote:
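For what it's worth, the bucketing scheme above boils down to a tiny bit of arithmetic. A minimal sketch, assuming the bucket names and headroom percentages are just the rough numbers from this thread (not any vendor's knobs):

```python
# Illustrative sketch of the bucketed max-prefix scheme described above.
# Bucket names and headroom percentages are assumptions from this
# thread, not taken from any router vendor's implementation.

HEADROOM = {
    "customer-routes": 0.30,  # +30% over the observed customer route set
    "full-transit": 0.20,     # +20% over the current full table
}

def max_prefix_limit(bucket, observed_count):
    """Return a max-prefix limit with operational headroom padded in."""
    if bucket == "default-only":
        return 10  # a default-only customer should send almost nothing
    return int(observed_count * (1 + HEADROOM[bucket]))

print(max_prefix_limit("default-only", 1))        # 10
print(max_prefix_limit("customer-routes", 1000))  # 1300
print(max_prefix_limit("full-transit", 700000))   # 840000
```

Ratcheting the limits up later is then just a change to the observed counts (or the headroom), not a per-peer renegotiation.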
On Sat, Sep 02, 2017 at 04:27:03PM +0900, Randy Bush wrote:
I am not sure what the issue here is. If I can tell my peering partner a recommended maximum prefix value for them to set on their side, surely I can configure that same value on my side as the upper outbound limit.
which is why i do not tell peers a max count.
I think you'll find that some of your peers will make an educated guess and set an inbound limit anyway. Actively requesting that no limit be applied may put one in a fringe minority.
So this is a quick survey of your peers, plus setting the buckets from above at 'sane' limits, right?
Most networks publish a baseline number via a rendezvous point like PeeringDB, which makes it easy to signal to larger groups what the recommended values are.
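That published baseline is machine-readable: PeeringDB's public API exposes a network's self-reported IPv4 prefix count in the `info_prefixes4` field of the `net` object. A hedged sketch of turning that into an inbound limit; the 20% headroom is an illustrative choice from this thread, not a PeeringDB recommendation:

```python
# Sketch: pad the prefix count a network publishes in PeeringDB into
# an inbound max-prefix limit. The 20% headroom is an illustrative
# assumption, not anything PeeringDB itself recommends.
import json
import urllib.request

def limit_from_peeringdb_record(net, headroom=0.20):
    """Pad the IPv4 prefix count from a PeeringDB 'net' record."""
    return int(net["info_prefixes4"] * (1 + headroom))

def fetch_net(asn):
    """Fetch the PeeringDB 'net' object for an ASN (live network call)."""
    url = "https://www.peeringdb.com/api/net?asn=%d" % asn
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"][0]

# Example with a canned record (documentation ASN), so no network
# access is needed:
record = {"asn": 64496, "info_prefixes4": 5000}
print(limit_from_peeringdb_record(record))  # 6000
```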
this stuff works for small isps, in the lab, ... but not at scale; especially when you have isps as customers. i wish it did.
In this context "small ISPs" may account for the majority of the target audience. It appears there are about 50,000 "origin only" ASNs [1], for the majority of those it'll be straightforward to decide on a sensible max-out value. BGP speaking CDN caching nodes are also low hanging fruit. But even for a network like NTT I can see benefits of a max-out limit in a number of scenarios.
bgp at scale is rather dynamic. i suspect your $dayjob's irr filters being exact help a bit.
Yes, BGP is dynamic, but these days a lot of the topology at the wholesale level has been firmly pinned down through mechanisms like 'peerlock' [2].
Speaking as an ISP for ISPs: NTT/2914 applies an inbound maximum-prefix limit on each and every EBGP session.
You can answer this if you want, or not... but I'm curious: is this tuned per-peer, or via some bucket form as I proposed above? I expect NTT could do it per-peer, since I think almost all config is auto-generated, but I'd still be curious whether you decided at the bucket level instead, because it's saner to think about it that way (for me anyway), or just went 'current +N%' for each peer?
Kind regards,
Job
[1]: http://bgp.potaroo.net/cgi-bin/plota?file=%2fvar%2fdata%2fbgp%2fas2%2e0%2fbgp-as-term%2etxt&descr=Origin%20only%20ASes&ylabel=Origin%20only%20ASes&with=step
[2]: http://instituut.net/~job/peerlock_manual.pdf