Crist Clark wrote:
Joe Maimon wrote:
Tony Rall wrote:
On Wednesday, 2003-12-03 at 09:38 PST, David Sinn <dsinn@dsinn.com> wrote:
<snipped>
(And note that frag 1 often is not the first fragment to arrive at downstream nodes. In my example in (1), frequently frag 2 will reach places before frag 1 does (if any router along the path reorders its transmit queue based on packet size).)
I agree with all I have snipped. I was wondering: would it not be wiser for fraggers to split the packet in half instead of just lopping off the overflow?
For instance, suppose a router has to fragment a 1500-byte packet to go over a 1476-byte GRE tunnel. Instead of producing a big packet and a little fragment, why not just divide it in half? The two fragments would get more equal buffer treatment, and an even bigger potential win is avoiding a possible second fragmentation (maybe ipsec?) further down the pipe.
Once you are going to do it, do it right. It is not as if you're decreasing header overhead by producing one small fragment. And I am assuming the whole packet is already in the buffer when it comes time to fragment it.
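To put rough numbers on that (assuming a 20-octet header and no IP options): the 1500-byte packet carries 1480 data octets. Fragmenting only the overflow gives a 1476-byte fragment (1456 data octets) followed by a 44-byte runt (24 data octets). Splitting near the middle on an 8-octet boundary gives fragments of 764 and 756 bytes, either of which would still fit through a later tunnel that shaves another 50-odd bytes off the path MTU, while the 1476-byte fragment would get fragmented a second time.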
Programmers are lazy.
Exercise for the reader:
Devise an algorithm that will take an arbitrarily sized packet (20-65535 octets) and an arbitrarily sized MTU (> 576 octets) and split the packet into the minimum number "n" of fragments such that (1) each fragment is no larger than the MTU, (2) no two fragments differ by more than 8 octets, and the fragments obey the IP fragmentation rules: (3) the data payload ends on an 8-octet boundary for every fragment but the last, and (4) each fragment carries an exact copy of the original header except for the fragmentation fields and checksum.
Compare that to the algorithm of cutting the data into "m" (mtu - ip_hl)-octet chunks and putting the leftovers into the final fragment.
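Not taken from any real stack, but a minimal sketch of the even-split answer might look like this in C (taking len as the data length behind an ihl-octet header; the names are mine, not from any implementation):

    #include <stdio.h>

    static void split_evenly(unsigned len, unsigned mtu, unsigned ihl)
    {
        unsigned max_data = ((mtu - ihl) / 8) * 8;           /* data limit per fragment */
        unsigned n        = (len + max_data - 1) / max_data; /* minimum fragment count  */
        unsigned blocks   = (len + 7) / 8;                   /* payload in 8-octet blocks */
        unsigned small    = blocks / n;
        unsigned extra    = blocks % n;  /* this many fragments carry one extra block */
        unsigned sent = 0, i;

        for (i = 0; i < n; i++) {
            /* Hand the extra blocks to the trailing fragments so the odd tail
             * (len not a multiple of 8) lands on a "big" fragment and no two
             * fragments differ by more than 8 octets.                        */
            unsigned blk  = small + (i >= n - extra ? 1 : 0);
            unsigned data = (i == n - 1) ? len - sent : blk * 8;
            printf("frag %u: %u data octets, %u on the wire\n",
                   i + 1, data, data + ihl);
            sent += data;
        }
    }

    int main(void)
    {
        /* the case from the thread: 1480 data octets behind a 20-byte
         * header, forced through a 1476-byte GRE tunnel MTU            */
        split_evenly(1480, 1476, 20);
        return 0;
    }

Fed the 1480/1476/20 case from above it emits 736- and 744-octet data payloads (756 and 764 octets on the wire), where the overflow method would emit 1456 and 24.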
I've got to jump in and display my considerable ignorance here. Are there not machines in service now that start blatting bits out (when able) before the whole packet has been received? Given that to be correct, it would seem to be Really Hard to whack up a packet into equal-sized chunks (given that it is otherwise a Good Thing To Do) on-the-fly. Lazy programmers (who I have long taught are the Best Kind) will blat out bits until the buffer is full, start a new buffer, rinse, lather, repeat until the input buffer is exhausted. Where did I go into the ditch?