I don't think RRD is that bad if you are only gonna check every 5 minutes... Again, perhaps I'm just missing something, but let's say you measure 30 seconds late and it thinks it's on time -- that one sample will be higher. Then the next one will be on time, so 30 seconds early relative to the late one -- it will be lower. On the whole it will be accurate enough -- no? Besides, I think RRD has a bunch of things built in to deal with precisely this problem.

I'm not saying a hardware solution can't be better -- but it is likely overkill compared to a few cheap Intel boxes running RRD -- assuming your snmpd can deal with the load...

--Phil

-----Original Message-----
From: Doug Clements [mailto:dsclements@linkline.com]
Sent: Tuesday, July 23, 2002 1:50 AM
To: pr@isprime.com
Cc: nanog@merit.edu
Subject: Re: PSINet/Cogent Latency

----- Original Message -----
From: "Phil Rosenthal" <pr@isprime.com>
Subject: RE: PSINet/Cogent Latency
Call me crazy -- but what's wrong with setting up RRDtool with a heartbeat time of 30 seconds, and putting this in cron (a fuller sketch is below)?

* * * * * rrdscript.sh ; sleep 30s ; rrdscript.sh
Wouldn't that work just as well?
I haven't tried it -- so perhaps this is too taxing (probably you would only run this on a few interfaces anyway)...
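A minimal sketch of the setup Phil describes, assuming a single interface -- the filenames, hostname, SNMP community, ifIndex, and the rrdscript.sh contents are all made up for illustration, not a tested config. (In RRD terms the 30-second sample rate is the --step; the DS heartbeat just needs to be somewhat larger so a slightly late update isn't stored as unknown.)

  # One-time creation: 30-second step, 64-bit in/out octet counters,
  # 90-second heartbeat, one day of 30-second AVERAGE and MAX samples.
  rrdtool create eth0.rrd --step 30 \
      DS:in:COUNTER:90:0:U DS:out:COUNTER:90:0:U \
      RRA:AVERAGE:0.5:1:2880 RRA:MAX:0.5:1:2880

  # rrdscript.sh (hypothetical): read the counters and hand them to rrdtool.
  IN=$(snmpget -v2c -c public -Oqv router1 IF-MIB::ifHCInOctets.4)
  OUT=$(snmpget -v2c -c public -Oqv router1 IF-MIB::ifHCOutOctets.4)
  rrdtool update eth0.rrd N:$IN:$OUT

  # crontab entry: run on the minute and again 30 seconds later.
  * * * * * /usr/local/bin/rrdscript.sh ; sleep 30 ; /usr/local/bin/rrdscript.sh

Since rrdtool timestamps each update itself and divides the counter delta by the actual elapsed time, a few seconds of cron or script jitter doesn't distort the stored rates.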
Redback's implementation overcame the limitation of monitoring, say, 20,000 user circuits. You don't want to poll 20,000 interfaces for maybe 4 counters each, every 5 minutes.

I think the problem with using rrdtool for billing purposes as described is that data can (and does) get lost. If your poller is a few cycles late, the measured burstable bandwidth goes up when the poller catches up to the interface counters. More bursting is bad for the %ile (or good if you're selling it), and the customer won't like the fact that they're getting charged for artificially high measurements. Bulkstats lets the measurement happen independent of the reporting.

--Doug
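To put rough numbers on Doug's point (these are made up for illustration): a circuit moving a steady 10 Mbit/s accumulates about 375 MB of counter growth per 300-second poll. If one poll runs 60 seconds late and the billing step still divides the counter delta by the nominal 300 seconds, the catch-up sample shows 450 MB, i.e. 12 Mbit/s -- a 20% phantom burst that can raise the 95th percentile even though the traffic never changed. rrdtool itself divides by the actual elapsed time between updates when it stores a rate, which avoids that particular error, but any later step that assumes fixed intervals, or any sample that is lost outright, reintroduces it; bulkstats sidesteps the problem by letting the router record its counters on its own schedule, independent of when the collector picks them up.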