
On Thu, Apr 12, 2007 at 11:34:43AM -0400, Keegan.Holley@sungard.com wrote:
I think it's a great idea operationally: less work for the routers and more efficient use of bandwidth. It would also be useful to devise some way to at least partially reassemble fragmented packets at links capable of large MTUs.
I think you underestimate the memory and CPU an intermediate router on a large link would need to buffer the data for reassembly.
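To put rough numbers on that, here's a back-of-envelope sketch. The link rate, the fraction of traffic arriving fragmented, and the timeout are illustrative assumptions, not measurements:

# Worst-case reassembly buffer for an intermediate router: every
# fragmented datagram must be held until its last fragment arrives
# or the reassembly timer expires.
def reassembly_buffer_bytes(link_bps, fragmented_fraction, timeout_s):
    return link_bps / 8 * fragmented_fraction * timeout_s

# A 10 Gb/s link, 10% of traffic fragmented, and the 30-second
# reassembly timeout Linux uses by default (net.ipv4.ipfrag_time):
gib = reassembly_buffer_bytes(10e9, 0.10, 30) / 2**30
print(f"{gib:.1f} GiB of buffer")   # ~3.5 GiB

And that's just buffer memory, before counting the per-fragment state lookups and CPU needed to match fragments at line rate.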
Since most PCs are on a subnet with an MTU of 1500 (or 1518 counting the Ethernet header), packets would still be limited to 1500 bytes or fragmented before they reach the higher-speed links. The problem with bringing this to fruition on the internet is going to be cost and effort. The AT&Ts and Verizons of the world are going to see this as a major upgrade without much benefit or profit. The Ciscos and Junipers are going to say the same thing when they have to write this into their code and ensure interoperability with other vendors' implementations of it.
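For illustration, here's how an endpoint can observe that 1500-byte ceiling today. This is a Linux-only Python sketch; the destination and port are placeholders, and the numeric constants come from <linux/in.h> since Python's socket module doesn't always export them:

import socket

IP_MTU_DISCOVER = 10   # from <linux/in.h>
IP_PMTUDISC_DO = 2     # always set DF, never fragment locally
IP_MTU = 14            # read the kernel's cached path MTU

def path_mtu(host, port=33434):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect((host, port))
    # Sending one datagram forces a routing decision; any ICMP
    # "fragmentation needed" replies update the kernel's cache.
    s.send(b"x")
    return s.getsockopt(socket.IPPROTO_IP, IP_MTU)

print(path_mtu("192.0.2.1"))   # typically prints 1500 today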
I don't think any of the above will raise any particular objection. I think your problem is figuring out a way to implement this globally without breaking the stuff that relies so heavily on 1500 bytes, much of which does not even allow for the possibility that another MTU exists.

Steve
From: Iljitsch van Beijnum <iljitsch@muada.com>
Sent by: owner-nanog@merit.edu
Date: 04/12/2007 05:20 AM
To: NANOG list <nanog@merit.edu>
Subject: Thoughts on increasing MTUs on the internet
Dear NANOGers,

It irks me that today, the effective MTU of the internet is 1500 bytes, while more and more equipment can handle bigger packets.

What do you guys think about a mechanism that allows hosts and routers on a subnet to automatically discover the MTU they can use towards other systems on the same subnet, so that:

1. It's no longer necessary to limit the subnet MTU to that of the least capable system
2. It's no longer necessary to manage larger-than-1500-byte MTUs manually

Any additional issues that such a mechanism would have to address?
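For concreteness, here is a minimal sketch of what the probing half of such a mechanism could look like. Everything here is an assumption for illustration: the echo responder on an arbitrary UDP port, the Linux socket constants, the timeout, and UDP probes standing in for what would more plausibly be ND/ARP extensions:

import socket

IP_MTU_DISCOVER = 10   # from <linux/in.h>
IP_PMTUDISC_DO = 2     # set DF so the local stack drops oversized
                       # probes instead of fragmenting them, which
                       # would defeat the measurement
PROBE_PORT = 9999      # hypothetical echo responder on the neighbor
HEADERS = 28           # IPv4 (20) + UDP (8)

def discover_neighbor_mtu(neighbor_ip, lo=1500, hi=9000, tries=3):
    """Binary-search the largest IP packet a specific on-link
    neighbor echoes back; assumes the standard 1500 floor works."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(0.05)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    best = lo
    while lo <= hi:
        size = (lo + hi) // 2
        ok = False
        for _ in range(tries):   # retry: random loss is not "too big"
            try:
                s.sendto(b"\x00" * (size - HEADERS),
                         (neighbor_ip, PROBE_PORT))
                data, _ = s.recvfrom(hi)
                ok = len(data) == size - HEADERS
            except OSError:      # timed out, or EMSGSIZE locally
                continue
            if ok:
                break
        if ok:
            best, lo = size, size + 1   # neighbor handled it; go bigger
        else:
            hi = size - 1               # dropped; go smaller
    return best

Each host would run this (plus an echo responder) per neighbor and cache the result alongside the ARP/ND entry, so the subnet MTU no longer has to be the lowest common denominator.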