Hi all,

I'm reaching the point where adding in a new piece of infrastructure hardware, connecting up a new cable, and/or assigning address space to a client is nearly 50% documentation and 50% technical.

One thing that would take a major load off would be if my MRTG system could simply update its config/index files for itself, instead of me having to do it on each and every port change.

Can anyone offer up ideas on how you manage any automation in this regard for your infrastructure gear traffic graphs? (Commercial options welcome, off-list, but we're as small as our budget is).

Unless something else is out there that I've missed, I'm seriously considering writing up a module in Perl to put up on the CPAN that can scan my RANCID logs (and perhaps the devices directly for someone who doesn't use RANCID), send an aggregate 'are these changes authorized' email to an engineer, and then proceed to execute the proper commands within the proper MRTG directories if the engineer approves.

I use a mix of Cisco/FreeBSD&Quagga for routers, and Cisco/HP for switches, so it is not as simple as throwing a single command at all configs.

All feedback welcome, especially if you are in the same boat. My IP address documentation/DNS is far more important than my traffic stats, but it really hurts when you've forgotten about a port three months ago that you need to know about now.

Steve
I'm reaching the point where adding in a new piece of infrastructure hardware, connecting up a new cable, and/or assigning address space to a client is nearly 50% documentation and 50% technical.
A common problem :)
One thing that would take a major load off would be if my MRTG system could simply update its config/index files for itself, instead of me having to do it on each and every port change.
Can anyone offer up ideas on how you manage any automation in this regard for your infrastructure gear traffic graphs? (Commercial options welcome, off-list, but we're as small as our budget is).
Not sure how you're doing your graphs currently, but have you considered Cacti?
I use a mix of Cisco/FreeBSD&Quagga for routers, and Cisco/HP for switches, so it is not as simple as throwing a single command at all configs.
All feedback welcome, especially if you are in the same boat. My IP address documentation/DNS is far more important than my traffic stats, but it really hurts when you've forgotten about a port three months ago that you need to know about now.
First, I'll throw out a bit of what we do; it might give you some ideas, though not necessarily good ones. We use Linux/Quagga routers, in-house-modified Linux-based LNSs, and HP switches. Some of our configuration and change management is done via cfengine, backed by Subversion. The latter yields the added benefit of revision control and all the other good stuff you get from svn in such a scenario. Unfortunately, this is only part of the config/graphs/docs/DNS/IPs/OSS equation, and we don't have everything fully integrated yet (nor is there a business case for it at the moment).

Some of our OSS is based on a heavily in-house-modified version of Freeside, as well as our own app/layer that sits on top. This is essentially our base system, which allows us to push data and provision services to other internal and external systems - e.g. DNS, IP assignment, vendors' portals/APIs, RADIUS, etc. (basically almost every piece of hardware and software we have) - and which interfaces with our self-service customer portal (aka the almighty call-avoidance "solution"). We also use IPPlan for managing IP assignment, but are moving away from it.

In a perfect world, everything would be integrated with everything else, searching by every data element would be possible, every business process would be automated, all of your docs would be in one place, all linked to the network element / customer / ticket / order / whatever, and so on. For most organizations, this is neither feasible nor required. Each system tends to do one or two things well, and you end up with much unavoidable data duplication and data moving back and forth. Usually the goal is to minimize manual data entry - ideally each piece of data is entered only once - and to push this aspect out towards the customer as much as possible. The extent of that will depend on your specific environment - everyone basically does the same thing, so often there's no need to re-invent the wheel (i.e. "let's develop everything from scratch in-house" is a very bad move - you're not in the OSS business).

OSS/BSS is a huge and complex topic, so I'm only touching the tip of the iceberg here and speaking mostly in general terms. It's definitely something that will be of greater and greater importance as your network grows, so early planning is key, but don't get carried away trying to automate the hell out of everything, because you'll lose focus on what you need to do in the short term. There is often a naive pursuit of perfection in OSS/BSS by those who haven't been doing it for long enough - don't fall into that trap.

I'd start by defining your requirements/scope more solidly and then considering whether it makes sense to automate or enhance a particular process. It may help to break things down step by step, perhaps based on dependencies or some other logical order, then think about how you would eliminate what you perceive to be manual/error-prone/inefficient/slow/whatever. From a costing perspective, you might find yourself in the (unfortunately frequently encountered) situation of "I just spent 50 hours writing a program to automate a task that would have taken me 2 hours to do manually", or "we just spent $50k buying a product which we won't use to any reasonable level of capacity for the next five years".

-- Erik
*** Remove the _list part in my e-mail address to reply. ***
On Wed, Jan 20, 2010 at 10:52:39PM -0500, Erik L wrote:
One thing that would take a major load off would be if my MRTG system could simply update its config/index files for itself, instead of me having to do it on each and every port change.
Can anyone offer up ideas on how you manage any automation in this regard for your infrastructure gear traffic graphs? (Commercial options welcome, off-list, but we're as small as our budget is).
Not sure how you're doing your graphs currently, but have you considered Cacti?
If automating MRTG config is hard, automating Cacti config is about as close to "impossible" as one can get without popping around to the Augean stables. - Matt
On 1/21/2010 4:29 PM, Matthew Palmer wrote:
On Wed, Jan 20, 2010 at 10:52:39PM -0500, Erik L wrote:
One thing that would take a major load off would be if my MRTG system could simply update its config/index files for itself, instead of me having to do it on each and every port change.
Can anyone offer up ideas on how you manage any automation in this regard for your infrastructure gear traffic graphs? (Commercial options welcome, off-list, but we're as small as our budget is).
Not sure how you're doing your graphs currently, but have you considered Cacti?
If automating MRTG config is hard, automating Cacti config is about as close to "impossible" as one can get without popping around to the Augean stables.
It has been a while, and I have not been following this thread, but once upon a time I had Korn shell scripts that scanned for SNMP devices and generated MRTG configs for what they found. Sure enough, I had to go into each one and clean up names and such, and occasionally discard some of what the automaton had created. If I can do that, it seems like anybody can.

-- Larry Sheldon
Once upon a time, Steve Bertrand <steve@ibctech.ca> said:
One thing that would take a major load off would be if my MRTG system could simply update its config/index files for itself, instead of me having to do it on each and every port change.
Is MRTG a requirement, or just some type of statistical monitoring? There are other packages that can do (or be made to do) what you want.

I switched from MRTG to Cricket many years ago, and a big improvement there is that you configure interface names (and Cricket handles tracking the index). There are add-ons like genDevConfig (replaces genRtrConfig) that can auto-generate configs for you. The only downside to Cricket is that development has stagnated (I think it is a case of "it works for me" for most everybody using it). There's also Cacti, which is newer and more current.

--
Chris Adams <cmadams@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
This should help with part of what you're doing - snmpstat and Cisco config repository:

http://snmpstat.sourceforge.net/

On Thu, Jan 21, 2010 at 8:24 AM, Steve Bertrand <steve@ibctech.ca> wrote:
One thing that would take a major load off would be if my MRTG system could simply update its config/index files for itself, instead of me having to do it on each and every port change.
-- Suresh Ramasubramanian (ops.lists@gmail.com)
On Wed, Jan 20, 2010 at 09:54:50PM -0500, Steve Bertrand wrote:
Hi all,
I'm reaching the point where adding in a new piece of infrastructure hardware, connecting up a new cable, and/or assigning address space to a client is nearly 50% documentation and 50% technical.
One thing that would take a major load off would be if my MRTG system could simply update its config/index files for itself, instead of me having to do it on each and every port change.
It is really quite trivial to auto-discover ifIndex->ifDescr mappings on every poll cycle and then track your interfaces by their names; pretty much every modern poller system can manage this. MRTG is absurdly old, slow, and generally nasty, and should not be used by anyone in this day and age.

--
Richard A Steenbergen <ras@e-gerbil.net>  http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
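A minimal sketch of that discovery step, assuming the Net::SNMP module from CPAN; the hostname and community string below are placeholders:

#!/usr/bin/perl
# Sketch only: walk IF-MIB::ifDescr and build an ifIndex -> name map,
# so interfaces can be tracked by name rather than by raw index.
use strict;
use warnings;
use Net::SNMP;

my ($session, $error) = Net::SNMP->session(
    -hostname  => 'router.example.net',   # placeholder
    -community => 'public',               # placeholder
    -version   => 'snmpv2c',
);
die "session error: $error" unless defined $session;

my $ifDescr = '1.3.6.1.2.1.2.2.1.2';      # IF-MIB::ifDescr
my $table = $session->get_table(-baseoid => $ifDescr)
    or die 'walk failed: ' . $session->error;

my %name_by_index;
for my $oid (keys %$table) {
    my ($index) = $oid =~ /\.(\d+)$/;     # last sub-identifier is the ifIndex
    $name_by_index{$index} = $table->{$oid};
}
$session->close;

printf "%-6d %s\n", $_, $name_by_index{$_}
    for sort { $a <=> $b } keys %name_by_index;

Run that on consecutive days and diff the output, and you also have a crude port-change detector.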
On 20/01/10 21:54 -0500, Steve Bertrand wrote:
Can anyone offer up ideas on how you manage any automation in this regard for your infrastructure gear traffic graphs? (Commercial options welcome, off-list, but we're as small as our budget is).
Unless something else is out there that I've missed, I'm seriously considering writing up a module in Perl to put up on the CPAN that can scan my RANCID logs (and perhaps the devices directly for someone who doesn't use RANCID), send an aggregate 'are these changes authorized' email to an engineer, and then proceed to execute the proper commands within the proper MRTG directories if the engineer approves.
I use a mix of Cisco/FreeBSD&Quagga for routers, and Cisco/HP for switches, so it is not as simple as throwing a single command at all configs.
OpenNMS works great, but has a steeper learning curve than MRTG. It supports auto-discovery of devices, and can pull interface statistics for all new devices/interfaces automatically. I'm graphing all interfaces on around four dozen Cisco switches and routers, plus various other devices, on one fairly beefy Linux box. It also has a RANCID integration module, which I haven't had a chance to play with yet.
it is not as simple as throwing a single command at all configs
Actually, it is that simple. As long as the device supports the IF-MIB SNMP table, your SNMP system should have little problem discovering all interfaces. All the devices you list above should work, assuming you've got net-snmp running on the FreeBSD box.

-- Dan White
Hi Steve,

Our MRTG is fully automated. We ditched cfgmaker (too slow) in favour of our own Perl, developed using the Net::SNMP module from CPAN. If you use 'non-blocking' SNMP calls, Net::SNMP can be blisteringly fast. In the case of our routers/switches, we query our NMS (assume this is just a table of hostnames and IP addresses) for a list of the devices we want to graph, and then re-generate the MRTG configuration a few times a day - meaning that we pick up new devices/port changes automatically.

Capital expenditure = $0 :)

--
Kind Regards,
Tom Wright
Internode Network Operations
P: +61 8 8228 2999
W: http://www.internode.on.net
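A minimal sketch of that style of generator - not Internode's actual code - using Net::SNMP's non-blocking API; the device list, community string, and WorkDir below are placeholders:

#!/usr/bin/perl
# Sketch only: walk IF-MIB::ifDescr on many devices in parallel via
# non-blocking SNMP, then emit one MRTG Target stanza per interface.
# Re-run from cron a few times a day to pick up port changes.
use strict;
use warnings;
use Net::SNMP qw(snmp_dispatcher);

my @devices   = qw(core1.example.net core2.example.net);   # placeholder NMS export
my $community = 'public';                                  # placeholder
my $ifDescr   = '1.3.6.1.2.1.2.2.1.2';                     # IF-MIB::ifDescr
my %ports;                                                 # host => { ifIndex => name }

for my $host (@devices) {
    my ($s, $err) = Net::SNMP->session(
        -hostname    => $host,
        -community   => $community,
        -version     => 'snmpv2c',
        -nonblocking => 1,
    );
    die "session to $host: $err" unless defined $s;
    $s->get_table(-baseoid => $ifDescr, -callback => [ \&store, $host ]);
}
snmp_dispatcher();    # run all queued queries concurrently

print "WorkDir: /var/mrtg\n\n";
for my $host (sort keys %ports) {
    for my $i (sort { $a <=> $b } keys %{ $ports{$host} }) {
        print "Target[${host}_$i]: $i:$community\@$host\n";
        print "Title[${host}_$i]: $host $ports{$host}{$i}\n";
        print "MaxBytes[${host}_$i]: 125000000\n\n";       # 1 Gbit/s, in bytes
    }
}

sub store {
    my ($session, $host) = @_;
    my $table = $session->var_bind_list or return;   # skip unreachable devices
    for my $oid (keys %$table) {
        my ($index) = $oid =~ /\.(\d+)$/;
        $ports{$host}{$index} = $table->{$oid};
    }
}

Because the whole config is regenerated on each run, a renumbered ifIndex simply produces a fresh stanza on the next pass.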
I want to thank everyone who responded on- and off-list to this thread. I've garnered valuable information ranging from the technical and business-applicability arenas to plain common sense. There is a lot of information that I have to go over now, and a few select pieces of software that I'm going to test immediately.

One more question, if I may...

My original post was completely concerned with automating the process of spinning traffic throughput graphs. Are there any software packages that stand out as being able to differentiate throughput between v4/v6, as opposed to the aggregate of the interface? (I will continue reading the docs of all recommendations, but this may expedite the process a bit.)

Steve
On 26/01/2010 00:48, Steve Bertrand wrote:
My original post was completely concerned with automating the process of spinning traffic throughput graphs. Are there any software packages that stand out as being able to differentiate throughput between v4/v6, as opposed to the aggregate of the interface? (I will continue reading the docs of all recommendations, but this may expedite the process a bit.)
That's a feature of the switch you are probing, not the monitoring suite per se. For example, I have Cisco CPE that does count the difference:

bcliffe-gw#sh int accounting | b Vlan1
Vlan1 Wired network VLAN
              Protocol    Pkts In    Chars In    Pkts Out    Chars Out
                    IP    4587251  1137174268     4757409   3669014365
                   ARP      12595      755700       52409      3144540
                  IPv6     188872    20699030      223349    131947020

...but these numbers cannot be polled via SNMP, so the only way to graph this device is with an expect script and telnet access. Nice. :-)

There is an ipv6MIB, with some interface stats defined under 1.3.6.1.2.1.55.1.6.1, but I do not know of a family of devices which supports this for sure.

If your devices support NetFlow v9, then this - whilst an extremely heavy/"kitchen sink" approach - will give you any degree of granularity that you like.

Not reliable long-term, but if your v6 is presented via a tunnel, you could graph that tunnel interface? Yuck, yuck (but we did measure some IPv6 traffic use - more than we expected, actually - at a recent operational meeting in the UK).

Please let us all know if you find something with good v6 SNMP support.

Andy
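For what it's worth, a rough sketch of that expect-style workaround, assuming the Net::Telnet::Cisco module from CPAN; the host, credentials, and interface below are placeholders. It prints the four lines MRTG expects from an external script:

#!/usr/bin/perl
# Sketch only: scrape 'show interface accounting' for the IPv6 row,
# since these per-protocol counters aren't exposed via SNMP.
use strict;
use warnings;
use Net::Telnet::Cisco;

my $session = Net::Telnet::Cisco->new(Host => 'bcliffe-gw.example.net');  # placeholder
$session->login('mrtg', 'secret');                                        # placeholder

my ($chars_in, $chars_out);
for my $line ($session->cmd('show interface Vlan1 accounting')) {
    # e.g. "IPv6    188872    20699030    223349    131947020"
    #       proto   pkts-in   chars-in    pkts-out  chars-out
    if ($line =~ /^\s*IPv6\s+\d+\s+(\d+)\s+\d+\s+(\d+)/) {
        ($chars_in, $chars_out) = ($1, $2);
        last;
    }
}
$session->close;
die "no IPv6 accounting row found\n" unless defined $chars_in;

# MRTG external-script protocol: in-octets, out-octets, uptime, target name
print "$chars_in\n$chars_out\nunknown\nbcliffe-gw Vlan1 IPv6\n";

mrtg.cfg would then invoke it with a backtick target, something like Target[bcliffe-gw_v6]: `/usr/local/bin/v6-acct.pl`. One caveat: these accounting counters clear on reload, so expect the occasional spike to filter out of the graphs.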
Steve Bertrand wrote:
Can anyone offer up ideas on how you manage any automation in this regard for your infrastructure gear traffic graphs? (Commercial options welcome, off-list, but we're as small as our budget is).
By popular request, a list of the most-suggested software packages. Some were more related to network management in general as opposed to traffic graphing, but:

- netdisco, which I've already got up and running. Although I've only added a few of our devices so far, I can see already how this will be an extremely valuable multi-purpose tool
- rancid, which I've been using for quite some time already for config management
- cacti, which I'm strongly considering installing/testing
- opennms, which appears to duplicate many functions I already have deployed on the network (and that I'm happy with), but I may give it a try anyway. If I don't use it, I've got a few 'on the side' clients who could benefit from this all-inclusive package
- snmpstat, which I may install and test, if only to look at a replacement for my custom BGP peering alerting system
- MRTG, with a custom cfgmaker. This was my original idea. If those who recommended this could/are allowed to share their code, please let me know
- NetFlow v9; the majority of my devices don't support this (unfortunately)
- bandwidthd, already in use for protocol-based statistics. This doesn't run full-time on our network; I usually only drop it into place on a span port if I see sustained extreme increases of traffic on a link
- IPPlan, which I've been using for a few years

Some software supports IPv6; others don't (or have limited capability). Polling IPv6 accounting isn't possible via SNMP, so using scripts with SSH/telnet access is the only way around that problem for a lot of gear.

Cheers, and thanks!

Steve
participants (10)

- Andy Davidson
- Chris Adams
- Dan White
- Erik L
- Larry Sheldon
- Matthew Palmer
- Richard A Steenbergen
- Steve Bertrand
- Suresh Ramasubramanian
- Tom Wright