On Jul 22, 2016, at 1:37 PM, Grzegorz Janoszka <Grzegorz@Janoszka.pl> wrote:

> What I noticed a few years ago was that BGP convergence time was faster with a higher MTU. A full BGP table load took half as long at MTU 9192 as at 1500. Of course, BGP has to be allowed to use the higher MTU.
>
> Has anyone else observed something similar?
I have read about others experiencing this, and I did some testing a few months back. My experience was that on low-latency links there was a measurable but not huge difference. On high-latency links, with Juniper anyway, the difference was negligible, because the TCP window size is hard-coded at something small (16384?), so that cap ends up being the limit rather than the TCP slow-start behavior that a larger MTU helps with (a rough calculation is sketched below).

With that said, we run MTU at >9000 on all of our transit links and all of our internal links, with no problems. When testing, send pings with do-not-fragment set at the maximum size configured, and without do-not-fragment at just slightly larger than the maximum size configured, to make sure there are no configuration mismatches due to vendor differences (a test sketch follows below).

Best Regards,
-Phil Rosenthal
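To put a number on the window-size point: with a fixed receive window, TCP throughput on an otherwise clean path tops out at roughly window / RTT, independent of MTU. A minimal back-of-the-envelope sketch in Python, assuming the 16 KiB figure guessed at above and some purely illustrative RTTs:

# Back-of-the-envelope: with a fixed receive window, TCP throughput is
# bounded by roughly window / RTT, regardless of MTU.
# The 16 KiB window is the hard-coded figure guessed at above; the RTTs
# are just illustrative, not measurements.

WINDOW_BYTES = 16 * 1024

for rtt_ms in (1, 10, 50, 150):
    ceiling_mbps = WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1_000_000
    print(f"RTT {rtt_ms:3d} ms -> at most ~{ceiling_mbps:6.2f} Mbit/s")

At 1 ms the ceiling is above 100 Mbit/s, so slow start and per-packet overhead are what you actually notice; at 50 ms and up the ceiling is only a couple of Mbit/s, which is why the MTU barely matters there.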
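And a rough sketch of the ping test described above, assuming Linux iputils ping (-M do sets the don't-fragment bit; -s is the ICMP payload size, so 28 bytes of IPv4 + ICMP headers are subtracted from the interface MTU). Flags differ on other platforms and on router CLIs, and the host and size arguments are placeholders:

#!/usr/bin/env python3
# Sketch of the MTU sanity check described above (Linux iputils ping assumed).
import subprocess
import sys

def ping(host, payload, df):
    cmd = ["ping", "-c", "3", "-s", str(payload)]
    if df:
        cmd += ["-M", "do"]                  # set the don't-fragment bit
    cmd.append(host)
    return subprocess.run(cmd, capture_output=True).returncode == 0

def check_mtu(host, mtu):
    payload = mtu - 28                       # IPv4 (20) + ICMP (8) headers
    # 1) DF at the full configured MTU must get through unfragmented.
    at_max = ping(host, payload, df=True)
    # 2) One byte over, without DF, should still get through (fragmented).
    over_max = ping(host, payload + 1, df=False)
    print(f"{host}: DF @ {mtu}: {'ok' if at_max else 'FAIL'}; "
          f"no-DF @ {mtu + 1}: {'ok' if over_max else 'FAIL'}")

if __name__ == "__main__":
    # Usage: ./mtu_check.py <neighbor-ip> <configured-mtu>
    check_mtu(sys.argv[1], int(sys.argv[2]))

Run it toward each neighbor from both ends; if either probe fails, the two sides probably don't agree on the configured size.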