From: "Iljitsch van Beijnum"
Are you saying that I shouldn't believe Cisco's own documentation? Obviously, it's going to take _some_ CPU cycles, but I would expect the box to remain operational.
Actually, Cisco's documentation is not always accurate, and it heavily depends on IOS version, train, feature set, and hardware.
One thing to keep in mind is that the S-train code handles logging differently than the mainline trains...
OK, I've been working with Cisco equipment for 8 years now and I can configure them in my sleep, but the whole version/image/train/feature-set business is still voodoo to me. Obviously, the router caches the information it wants to log for a while and then counts hits against the cache until it actually logs. This should work very well, and it does, as per my tests on a heavily loaded 4500 router. So why would one IOS release get this right and another one, which nothing in the version number marks as inferior, get it wrong?
As stated above, it depends on the code. When logging at high volume, I recommend turning off all logging facilities except the one you plan to use. Multiple logging facilities have a multiplicative effect on the CPU for some trains and versions; e.g., logging to console and syslog and running a term mon at the same time is a very, very bad thing under heavy logging. It also depends on what you are logging. Narrow the scope as much as possible, i.e., log only a small selection of customers at a time, then move on to the next.
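Concretely, that cleanup looks something like the following (just a sketch; 192.0.2.10 stands in for your syslog host, and the logging rate-limit command is only there on trains that carry it):

    ! keep exactly one logging destination
    no logging console
    no logging monitor
    logging trap informational
    logging 192.0.2.10
    !
    ! on code that supports it, cap the message rate as well
    logging rate-limit 100

With console and monitor off, the box only has to format and deliver each message once, and the console, a slow character-at-a-time device, is taken out of the picture entirely.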
possible and happily saturate it :( (Don't log on, say, a 7500 if the packet rates are over roughly 5 kpps...)
I think today's events show that CPU-based routers have no business handling anything more than 1 x 100 Mbps in and 1 x 100 Mbps out. If a box has 40 FE interfaces or 4 GE interfaces, at some point you'll see 4 Gbps coming in, so the box must be able to handle that to some usable degree.
Actually, you wouldn't expect to see 4 Gbps coming in. That would be full saturation, which would imply serious performance degradation; most networks I've dealt with stick to a 70-80% utilization rule. In addition, many of the problems with this traffic weren't throughput issues. Each router has both a bandwidth limitation and a pps limitation. The worst DDoS I've had to deal with didn't even show up as a bandwidth spike on my circuits, but it exceeded the pps capacity of the router. (With 64-byte packets, 100 kpps works out to only about 50 Mbps, barely a blip on a GE circuit, yet it can already be more than a CPU-switched router will comfortably forward, let alone log.) Luckily, such attacks are easily dealt with using access-lists, since a router can block more pps than it can switch.

This worm had both: the packets were small and the bandwidth utilization was high. Blocking the packets lowered CPU utilization to a manageable degree, while the bandwidth usage on each infected circuit stayed localized to that circuit. How well a circuit dealt with the load depended on its type, since different L2 protocols handle saturation differently; ATM is the ideal medium here, as its latency stays lower than FE or GE at peak saturation.

One's responsibility only extends to the edge of the network one controls, though. If you can't shut off the ethernet port to an infected server, the customer is responsible for that equipment. Ideally, you have one customer per circuit that you control.

Jack Bates
Network Engineer
BrightNet Oklahoma
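For a concrete picture of the access-list approach (a sketch only; the port number and the interface name are placeholders, not details from this incident):

    ! placeholders throughout: adjust the port and interface to the actual worm
    access-list 150 deny udp any any eq 1434
    access-list 150 permit ip any any
    !
    interface FastEthernet0/0
     ip access-group 150 in

Applying it inbound on the customer-facing interface keeps the junk off the rest of the network, and "show access-lists 150" gives a running match count on the deny line; adding the log keyword to that line will identify the infected sources, but only with the logging caveats discussed earlier in mind.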