Thus spake Christian Meutes (christian@errxtx.net) on Fri, Dec 21, 2018 at 02:41:23PM +0100:
Depending on your requirements and scale - but I read that you want history - the demand is probably less on CPU or network resources and more on IOPS.
If you cache all results before writing to disk, it's not much of a problem, but if you just go "let's use RRD/MRTG for this", IOPS could quickly become your first bottleneck. So you might look into a proper timeseries backend, or put a caching daemon in front of RRD.
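To make the caching idea concrete, here is a minimal Python sketch of buffering per-target results in memory and flushing them in batches, so thousands of tiny random writes become a few bulk writes per flush interval. The write_batch() call is a hypothetical placeholder for whatever the real backend is (a bulk insert into a timeseries database, or an rrdtool update routed through rrdcached).

    import time
    from collections import defaultdict

    FLUSH_INTERVAL = 60  # seconds of results to buffer before touching disk

    class ResultBuffer:
        """Buffer per-target ping results in RAM and flush them in batches."""

        def __init__(self, flush_interval=FLUSH_INTERVAL):
            self.flush_interval = flush_interval
            self.last_flush = time.monotonic()
            self.samples = defaultdict(list)  # target -> [(timestamp, rtt_ms or None), ...]

        def add(self, target, timestamp, rtt_ms):
            self.samples[target].append((timestamp, rtt_ms))
            if time.monotonic() - self.last_flush >= self.flush_interval:
                self.flush()

        def flush(self):
            for target, points in self.samples.items():
                write_batch(target, points)  # one bulk write per target per interval
            self.samples.clear()
            self.last_flush = time.monotonic()

    def write_batch(target, points):
        # Hypothetical backend call; replace with your timeseries/RRD write path.
        pass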
Having once written a caching daemon for mrtg/rrdtool, I'd say the advent of SSD arrays has made IOPS largely irrelevant. (I had ~1.2M targets in mrtg on that machine.)
Dale
On Sat, Dec 15, 2018 at 4:48 PM Colton Conor <colton.conor@gmail.com> wrote:
How much compute and network resources does it take for an NMS to:
1. ICMP ping a device every second.
2. Record these results.
3. Report an alarm after so many seconds of missed pings.
We are looking for a system to monitor, in near real time, whether an end customer's router is up or down. I assume SNMP would be too resource-intensive, so ICMP pings seem like the only logical solution.
The question is: is once-a-second pinging too much polling for an NMS and a consumer-grade router? Does it take much network bandwidth and CPU on both the NMS and CPE side?
Let's say this is for a 1,000-customer ISP.
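For rough scale: 1,000 request/reply pairs per second at roughly 100 bytes per packet is on the order of 200 KB/s (well under 2 Mbps) in aggregate at the NMS, and a single ping per second is negligible for any CPE. Below is a minimal sketch of the poll-and-alarm logic, assuming the Linux ping binary and a hypothetical five-miss threshold; a production poller at this scale would use raw ICMP sockets or a tool like fping rather than one subprocess per probe.

    import asyncio
    import time

    PING_INTERVAL = 1.0   # seconds between probes per target
    MISS_THRESHOLD = 5    # consecutive missed pings before alarming

    async def ping_once(host):
        """Send a single ICMP echo via the system ping binary (Linux flags)."""
        proc = await asyncio.create_subprocess_exec(
            "ping", "-c", "1", "-W", "1", host,
            stdout=asyncio.subprocess.DEVNULL,
            stderr=asyncio.subprocess.DEVNULL,
        )
        return await proc.wait() == 0

    async def monitor(host):
        misses = 0
        while True:
            started = time.monotonic()
            if await ping_once(host):
                misses = 0
            else:
                misses += 1
                if misses == MISS_THRESHOLD:
                    print(f"ALARM: {host} missed {misses} pings in a row")
            # keep roughly a one-second cadence regardless of probe runtime
            await asyncio.sleep(max(0.0, PING_INTERVAL - (time.monotonic() - started)))

    async def main(hosts):
        await asyncio.gather(*(monitor(h) for h in hosts))

    if __name__ == "__main__":
        # hypothetical target list; a real deployment would load ~1,000 CPE addresses
        asyncio.run(main(["192.0.2.1", "192.0.2.2"]))

Staggering the start of each monitor loop (e.g. a small random initial sleep per target) avoids sending all probes in the same millisecond of every second.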
--
Christian Meutes
e-mail/xmpp: christian@errxtx.net
mobile: +49 176 32370305
PGP Fingerprint: B458 E4D6 7173 A8C4 9C75 315B 709C 295B FA53 2318
Toulouser Allee 21, 40211 Duesseldorf, Germany