Steven M. Bellovin wrote:
On Thu, 12 Apr 2007 11:20:18 +0200 Iljitsch van Beijnum <iljitsch@muada.com> wrote:
Dear NANOGers,
It irks me that today, the effective MTU of the internet is 1500 bytes, while more and more equipment can handle bigger packets.
What do you guys think about a mechanism that allows hosts and routers on a subnet to automatically discover the MTU they can use towards other systems on the same subnet, so that:
1. It's no longer necessary to limit the subnet MTU to that of the least capable system
2. It's no longer necessary to manage 1500 byte+ MTUs manually
Any additional issues that such a mechanism would have to address?
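One way to picture the proposed mechanism: each host probes its neighbors for the largest frame they acknowledge, rather than assuming the subnet-wide minimum. A minimal sketch (not from the thread; `probe` is a hypothetical stand-in for a real on-link probe such as a padded, don't-fragment frame the neighbor must echo):

```python
def discover_mtu(probe, lo=1500, hi=9000):
    """Return the largest size in [lo, hi] for which probe(size) succeeds,
    assuming monotonicity: if a size works, all smaller sizes work too."""
    if not probe(lo):
        return None              # neighbor can't even handle the baseline MTU
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so the loop terminates
        if probe(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Fake neighbor that handles frames up to 8192 bytes.
print(discover_mtu(lambda size: size <= 8192))  # -> 8192
```

A real implementation would cache the result per neighbor and re-probe periodically, since the answer can change when switches or hosts are swapped out.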
Last I heard, the IEEE won't go along, and they're the ones who standardize 802.3.
A few years ago, the IETF was considering various jumbogram options. As best I recall, that was the official response from the relevant IEEE folks: "no". They're concerned with backward compatibility.
Perhaps that has changed (and I certainly don't remember who sent that note).
No, I doubt it will change. The error-detection strength of Ethernet's 32-bit CRC is already strained at 1500-byte-plus payloads; extending 802.3 to larger frames would run a significant risk of errors slipping past the CRC undetected. On the practical side, developing, qualifying, and selling new chipsets to handle jumbo packets would jack up the cost of inside equipment. What is the payback? How much money do you save going to jumbo packets? Show me the numbers.
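The length-dependence behind the CRC concern comes from linearity: CRC-32 is linear over GF(2), so an error pattern goes undetected exactly when its polynomial is divisible by the generator, and the number of such patterns that fit inside a frame grows with frame length. The detection guarantees quoted for 1500-byte frames therefore don't automatically carry over to jumbos. A small illustration of that linearity (Python, not from the thread):

```python
import zlib

# For equal-length messages, CRC-32 satisfies
#   crc(a) ^ crc(b) ^ crc(c) == crc(a ^ b ^ c)
# (the init/final-XOR constants cancel for an odd number of terms).

def xor_bytes(*msgs):
    """Bytewise XOR of three equal-length byte strings."""
    return bytes(x ^ y ^ z for x, y, z in zip(*msgs))

a = b"jumbo frames payload A..."
b = b"jumbo frames payload B..."
c = b"jumbo frames payload C..."

lhs = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
rhs = zlib.crc32(xor_bytes(a, b, c))
print(lhs == rhs)  # -> True
```

Because the check is linear, XOR-ing an undetectable pattern into a valid frame yields another frame with a valid CRC; longer frames simply have room for more such patterns.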