"Murphy, Brennan" wrote:
> What is the easiest way to detect if the bandwidth in use is falling into a classic tail drop pattern? According to a Cisco book I am reading, the bandwidth utilization should graph in a "sawtooth" pattern: gradual increases as multiple machines ramp up
It may be difficult to tell at any one router. Depending on where the endpoints are (and I'm assuming they are scattered around the net), different connections may be losing packets at different times and in different places. If that one OC3 or DS3 link were the only thing that mattered, perhaps it would be easier to tell.
> via TCP slow start, followed by sharp drops. Will this only happen when the utilization approaches 100%? (Maybe a dumb question.)
Again, I think it depends, but I would venture to say: probably. Tail drop only kicks in once the output queue on the bottleneck link actually fills, and that usually means the link is running at or near saturation.
> Should I be able to do a 'show buffers' and see misses, or is there some better way to detect this other than via graphing?
I haven't studied those statistics on a Cisco; they may tell you something, but I suspect it would be difficult to discern what you're looking for from them alone. Another parameter to monitor is packet drops.
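For what it's worth, the counters I'd sample over time are the "output drops" / input queue drops lines in 'show interfaces' and the "misses" and "failures" columns in 'show buffers'. The interface name below is just an example, not anything from your network, and the '| include' filter needs a reasonably recent IOS:

    show interfaces Hssi3/0
    show interfaces Hssi3/0 | include drops
    show buffers

If the drop counters climb in step with your utilization spikes, that's a decent hint you're tail-dropping on that interface.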
> Also, suppose that in examining my FTP traffic patterns I notice it spikes at 15 minutes after the top of the hour, consistently. Could I create a timed access list that only kicks in at that time?
I guess you could, but that seems like a short-sighted and rather narrow approach to managing your capacity.
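That said, if you do want to experiment with it, the mechanism is a named time-range referenced from an extended access list. A rough sketch only; the names, times, FTP port, and interface are all placeholders, and note that 'periodic' ranges are defined per day or per week, so you can't express "every hour at :15" directly:

    time-range FTP-SPIKE
     periodic daily 14:15 to 14:30
    !
    ip access-list extended THROTTLE-FTP
     deny   tcp any any eq ftp time-range FTP-SPIKE
     permit ip any any
    !
    interface Hssi3/0
     ip access-group THROTTLE-FTP out

Also keep in mind this flatly blocks the traffic during the window rather than shaping it, which is usually not what people actually want.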
> I usually take these types of questions to Cisco, but I thought I'd post it to this list to get some generic real-world advice.
Based on your 'show buffers' output, Cisco may recommend some tuned buffer settings for you.

John
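P.S. If TAC does come back recommending buffer changes, they'll be plain global-config 'buffers' commands, along these lines (the pool and numbers here are completely made up; use whatever values they actually give you):

    buffers big permanent 120
    buffers big max-free 150
    buffers big min-free 30

I wouldn't touch these speculatively; mis-tuned buffers can hurt more than they help.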