It's interesting that many rather sizable networks have weathered these events without relying on filtering, NAT, or other such behavior.
What's more interesting is how many big networks have implemented 98-byte ICMP filters, blocks on port 135, and other filters on a temporary basis on one or more (but not all) interfaces, without anyone really noticing that they're doing that. It isn't something that's well-publicized, but I know several major ISPs/NSPs which have had such filters in place, at least briefly, on either congested edge interfaces or between core and access routers to prevent problems with devices like TNTs and Shastas.
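(Purely for illustration, here is a minimal Python sketch of the sort of match criteria such a temporary filter applies: drop ICMP packets of one fixed size and anything aimed at TCP/UDP port 135. The Packet class and its field names are invented for this example; on a real router this logic lives in an interface ACL, not in code.)

  from dataclasses import dataclass
  from typing import Optional

  ICMP, TCP, UDP = 1, 6, 17            # IP protocol numbers

  @dataclass
  class Packet:                        # invented stand-in for a parsed IP header
      protocol: int                    # IP "protocol" field
      total_length: int                # IP total length, in bytes
      dst_port: Optional[int] = None   # TCP/UDP destination port, if applicable

  def should_drop(pkt: Packet) -> bool:
      """Match the kind of temporary worm filters described above."""
      if pkt.protocol == ICMP and pkt.total_length == 98:
          return True                  # fixed-size ICMP echo sweep traffic
      if pkt.protocol in (TCP, UDP) and pkt.dst_port == 135:
          return True                  # MS RPC endpoint mapper, a common exploit target
      return False

  print(should_drop(Packet(ICMP, 98)))        # True  -> filtered
  print(should_drop(Packet(TCP, 60, 135)))    # True  -> filtered
  print(should_drop(Packet(TCP, 60, 80)))     # False -> ordinary web traffic passes

The point of the sketch is only that these filters match a very narrow signature on a specific interface, which is why they can sit there for a while without most users noticing.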
Even if you're right, that doesn't make me wrong.
True enough.
Any IP network conformant to Internet standards should be content transparent. Any network which isn't is broken.
Then they're all broken, to one extent or another. Even a piece of wire can be subjected to a denial of service attack that prevents your content from transparently reaching the far end.
Breaking under abnormal conditions is unacceptable. I am well aware of reality, but the reality is: some things need to be improved.
That some things need to be improved has been true since the very first day the Internet began operation. Of course, the users of the end systems were somewhat better behaved for the first few years, and managed to resist the temptation to deploy widespread worms until 1988.
These limits aren't the product of some fundamental law of nature. We are simply seeing the results of the "internet boom" habit of valuing rapid growth and profit over correctness and stability.
True.
As the purchasers of this equipment, we have the power to demand that vendors produce products which are not broken.
One can demand all one wants. Getting such a product can be nearly or totally impossible, depending on which features you need at the same time.
Doing so is our professional duty; settling on workarounds that break communications and fail to actually solve the problems is negligent.
But not using the workarounds one has available to keep the network mostly working, and instead standing back, throwing up one's hands, and saying "well, all the hardware crashed, guess our network is down entirely today," is even more negligent. It may also be a salary-reducing move.
Suggesting that breaking end-to-endness is a long-term solution to these kinds of issues is socially irresponsible.
Waiting until provably correct routers are built, and cheap enough to deploy, may be socially irresponsible as well. There's a whole lot of good that has come out of cheap broadband access, and we'd still be waiting if we insisted on bug-free CPE and bug-free aggregation boxes that could handle any traffic pattern thrown at them. Do you actually believe that it was a BAD idea for Cisco to build a router that is more efficient (to the point of being able to handle high-rate interfaces at all) when presented with traffic flows that look like real sessions?

Matthew Kaufman
matthew@eeph.com