On Sun, 16 Dec 2018 at 17:59, Stephen Satchell <list@satchell.net> wrote:
A standard ping packet, with no IP options or additional payload, is 64 bytes or 512 bits. If an application wants to make an accurate round-trip-delay measurement, it can insert the output of a microsecond clock and compare that value against the clock reading when the answer packet comes back. Add at least 32 bits, perhaps 64.
Unsure about "standard", but Linux iputils ping does this:

╰─ ping -4 ftp.funet.fi
PING ftp.funet.fi (193.166.3.2) 56(84) bytes of data.
64 bytes from ftp.funet.fi (193.166.3.2): icmp_seq=1 ttl=243 time=47.8 ms

This means:

20B IPv4 Header
08B ICMP Header
56B ICMP data
------------------
84B IPv4 packet

Add to that Ethernet II encapsulation, and you have 122 B on the wire, i.e. 976 bits per ping, or 976 kbps for 1k hosts (at one ping per second each).

The vast majority of that ICMP data is unnecessary trash; if you use the minimum-size Ethernet II payload you get 18 bytes of ICMP data _free of charge_, which is plenty to add timestamping and what have you, without increasing link utilisation. That is, 672 kbps for 1k hosts will allow you to send 18 B of arbitrary data, and there is no way to use less bandwidth than that.
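To make that arithmetic concrete, here is a minimal Python sketch (illustrative only; the helper names are mine, the constants are the figures above) that reproduces the per-ping bit counts and packs a microsecond send timestamp into the 18-byte payload budget:

import struct, time

# Per-ping sizes matching the breakdown above (bytes).
IPV4_HEADER      = 20
ICMP_HEADER      = 8
ETH2_OVERHEAD    = 38   # 14 header + 4 FCS + 8 preamble/SFD + 12 inter-frame gap
MIN_ETH2_PAYLOAD = 46   # shorter frames are padded up to this

def on_wire_bits(icmp_data_len):
    """Bits one echo request occupies on an Ethernet II link."""
    ip_packet = IPV4_HEADER + ICMP_HEADER + icmp_data_len
    return (max(ip_packet, MIN_ETH2_PAYLOAD) + ETH2_OVERHEAD) * 8

def make_payload():
    """8-byte microsecond timestamp, padded to the 18 'free' bytes."""
    usec = time.time_ns() // 1000
    return struct.pack("!Q", usec).ljust(18, b"\x00")

print(on_wire_bits(56))                   # default iputils ping data: 976 bits
print(on_wire_bits(len(make_payload())))  # minimum-size frame:        672 bits

On the receive side you unpack the same field from the echo reply and subtract it from the current clock to get the round-trip delay; using 32 rather than 64 bits for the timestamp only changes how much of the 18-byte budget is left over for other data.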
I can see a network operator with a complex mesh network wanting to turn on Record Route (RFC 791), which adds 24+(hops*32) bits to both ping and ping-response packets (at most 312 bits, since the 40-byte IP options limit allows only nine recorded addresses).
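For reference, RFC 791 defines Record Route as a 3-byte option header (type, length, pointer) followed by one 4-byte slot per recorded address, and the 40-byte IP options budget caps it at nine slots; iputils ping can request it with ping -R, subject to routers actually honouring the option. A rough sketch of the size arithmetic (function name is mine):

# RFC 791 Record Route: 1B type + 1B length + 1B pointer, then 4B per
# recorded address; the 40-byte IP options limit allows at most nine
# slots (3 + 9*4 = 39 bytes, padded to 40).
MAX_RR_SLOTS = 9

def record_route_bits(hops):
    slots = min(hops, MAX_RR_SLOTS)
    return (3 + 4 * slots) * 8   # 24 + hops*32 bits, at most 312 before padding

print(record_route_bits(5))    # 184 bits added to each request and reply
print(record_route_bits(30))   # capped at 312 bits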
Be careful about what you intend to measure. I would try to measure customer experience as much as possible: packets with IP options are punted to software processing, may be forwarded differently, will see several orders of magnitude more jitter, and will experience higher packet loss.

--
 ++ytti