On 12/16/18 12:07 AM, Saku Ytti wrote:
> On Sun, 16 Dec 2018 at 00:48, Stephen Satchell <list@satchell.net> wrote:
>> The 1500 bits are for each ping. So 1000 hosts would be 1,500,000 bits
> Why? Why did you choose 1500b(it) ping, instead of minimum size or 1500B(ytes) IP packets?
>
> Minimum: 672kbps
> 1500B:   12.16Mbps
I was going from memory, and my memory is by no means perfect. But... a standard ping packet, with no IP options or additional payload, is 64 bytes, or 512 bits.

If an application wants to make an accurate round-trip-delay measurement, it can insert the output of a microsecond clock into the payload, then compare that value against the clock when the answer packet comes back. Add at least 32 bits of payload for that, perhaps 64. (There is a sketch of the idea at the end of this message.)

Even with this sensible amount of extra ping payload, there is still plenty of "bandwidth allocation" left over to account for encapsulations: IPIP, VPN, MPLS, Ethernet framing, ATM framing, &c. I can also see a network operator with a complex mesh network wanting to turn on Record Route (RFC 791), which adds 24 bits, plus 32 bits per recorded hop (at most nine hops, so up to 312 bits), to both the ping and ping-response packets.

So my 1500 bits per ping was not bad Tennessee windage for the application described by the original poster, plus the comments added by others. If anything, it overestimates the bandwidth required, but not by much. (The arithmetic is spelled out below.)

As for how much the use of ping would affect the CPU loading of the device, that depends a great deal on the implementation of the TCP/IP stack in the CPE. When I wrote _Linux IP Stacks Commentary_, the code implementing ping amounted to packet receipt, a very small block of code to build the reply packet, and a packet send of the above-mentioned 64 bytes. (The last sketch below shows why that reply path is so small.)

Consider another service: Network Time Protocol. Unlike ping, there is quite a bit of CPU load involved in running the time information through the smoothing filters. (Counter-argument: a properly implemented NTP client sends time requests at separations of 64-1024 seconds.)
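
First, the bandwidth arithmetic, as a minimal Python sketch. The one-ping-per-second rate is my assumption; the 1500-bit and 512-bit per-packet figures are the ones discussed above:

    def ping_bandwidth_bps(hosts, bits_per_ping=1500, pings_per_second=1):
        # Aggregate one-way bandwidth consumed by the echo requests.
        return hosts * bits_per_ping * pings_per_second

    print(ping_bandwidth_bps(1000))       # 1500000 -> the 1.5 Mbps figure above
    print(ping_bandwidth_bps(1000, 512))  # 512000  -> bare 64-byte pings, no padding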
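
Second, the timestamp-in-the-payload idea. This is illustrative only, not what any particular ping utility does; the single 64-bit big-endian field is my choice of layout:

    import struct
    import time

    def make_echo_payload():
        # Pack the current microsecond clock into the echo-request payload.
        return struct.pack('!Q', time.monotonic_ns() // 1000)

    def rtt_microseconds(echoed_payload):
        # The reply echoes the payload back unchanged; subtract then from now.
        (sent_us,) = struct.unpack('!Q', echoed_payload[:8])
        return time.monotonic_ns() // 1000 - sent_us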
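
Third, why the reply path is so cheap. This is my sketch, not the actual Linux code: the responder reuses the request bytes as-is, flips the ICMP type, and recomputes the checksum:

    import struct

    def icmp_checksum(data):
        # Standard Internet checksum: one's-complement sum of 16-bit words.
        if len(data) % 2:
            data += b'\x00'
        total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
        total = (total >> 16) + (total & 0xffff)
        total += total >> 16
        return ~total & 0xffff

    def build_echo_reply(icmp_request):
        reply = bytearray(icmp_request)
        reply[0] = 0                  # type 8 (echo request) -> 0 (echo reply)
        reply[2:4] = b'\x00\x00'      # zero the checksum field...
        reply[2:4] = struct.pack('!H', icmp_checksum(bytes(reply)))  # ...recompute
        return bytes(reply)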