On Mon, May 07, 2001 at 10:45:18AM -0400, Thomas Gainer wrote:
> Does anyone know of any good papers that explain memory resource
> utilization during BGP table construction?
>
> Post Note: Yes, I have read the RFC. I am looking for additional resources.
For traditional implementations of BGP, as described by the RFC (btw the RFC is pretty loaded with mistakes; draft-ietf-idr-bgp4-12 is a bit better, though not perfect itself), I think the word you're looking for is "lots". Memory requirements will theoretically differ quite a bit between implementations, and depending on the routes you receive. For example, every half-decent implementation I'm aware of uses an attribute hash to reduce memory requirements where multiple prefixes share common attributes. But if you receive routes with all-unique attributes (for example by randomizing the MED value or somesuch), you'd burn through memory a LOT quicker than normal (sh ip bgp sum and see where your memory is going :P).

Traditional RIBs using patricia trees are also quite bad for BGP; in addition to burning memory, they are extremely inefficient for the actual BGP usage pattern of insertions and deletions with a known prefix length. I'd say that 95% of the blame for slow-converging, memory-sucking BGP can rest squarely on the shoulders of bad implementations.

If there are any papers on BGP memory usage and the performance of the actual algorithms used, I'd love to see one. I did a quick check on citeseer and didn't turn up much of interest. The only notable things were the following, still not very useful:

http://citeseer.nj.nec.com/422681.html
http://citeseer.nj.nec.com/333036.html

-- 
Richard A Steenbergen <ras@e-gerbil.net>       http://www.e-gerbil.net/ras
PGP Key ID: 0x138EA177 (67 29 D7 BC E8 18 3E DA B2 46 B3 D8 14 36 FE B6)
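
P.S. If a toy example helps make the attribute-hash point concrete, here's a rough Python sketch. It is not anyone's actual router code; the class names, attribute fields, and numbers are purely illustrative. The point is just that prefixes carrying identical path attributes can all reference one interned attribute object, and that randomizing something like the MED defeats the sharing entirely.

import random

class AttrSet:
    """One set of BGP path attributes (fields here are illustrative only)."""
    __slots__ = ("origin", "as_path", "next_hop", "med")

    def __init__(self, origin, as_path, next_hop, med):
        self.origin = origin
        self.as_path = as_path      # tuple of ASNs so the key is hashable
        self.next_hop = next_hop
        self.med = med

    def key(self):
        return (self.origin, self.as_path, self.next_hop, self.med)

class AttrTable:
    """Interning table: one shared AttrSet per distinct attribute key."""
    def __init__(self):
        self._table = {}

    def intern(self, attrs):
        # Return the existing shared object if we've seen these attributes
        # before, otherwise store and return this one.
        return self._table.setdefault(attrs.key(), attrs)

    def __len__(self):
        return len(self._table)

def load_routes(n_prefixes, randomize_med=False):
    table = AttrTable()
    rib = {}
    for i in range(n_prefixes):
        prefix = "10.%d.%d.0/24" % ((i >> 8) & 0xff, i & 0xff)
        med = random.randint(0, 2**32 - 1) if randomize_med else 100
        attrs = AttrSet("IGP", (65001, 65002), "192.0.2.1", med)
        rib[prefix] = table.intern(attrs)   # many prefixes -> one shared object
    return rib, table

if __name__ == "__main__":
    _, shared = load_routes(50000)
    _, unique = load_routes(50000, randomize_med=True)
    print("identical attrs, interned objects:", len(shared))   # 1
    print("randomized MEDs, interned objects:", len(unique))   # ~50000

Run it and the first case ends up with a single shared attribute object for all 50k prefixes, while the randomized-MED case stores roughly one per prefix, which is the "burn through memory a LOT quicker" scenario above.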