Greetings. How many people here are running their own ping/traceroute/wget tools and feeding the results into some form of database (MRTG, if you will) to track how ping times to certain points in the network change over time? It appears that UUnet is selectively blocking such probes (upon major customer request?), in this case pings from a single host (and only that single host) in my network to 157.130.178.1, a Seattle border router to Amazon.com.

The goal apparently is not to prevent denial-of-service attacks (5 ping packets every 5 minutes is less traffic than an average daily Amazon customer generates), but to hinder legitimate QoS analysis of connectivity to Amazon (and, by extension, through UUnet's own network). I, and certainly others, consider it essential to know when major web sites are, or have been, unreachable or barely reachable, for customer-service and quality-of-service purposes (yes, this drives buying decisions for connectivity). Outages or bad performance in multiple directions are a sure indication of larger problems.

Are some UUnet border routers so busy that they can't afford even a minuscule amount of ICMP traffic? Blocking it for single source/destination pairs just doesn't make sense, unless there is something (QoS-related) to hide. What pisses me off the most: Amazon (along with eBay and a couple of other high-profile sites) may have corporate brain damage bad enough to filter all ICMP outright rather than rate-limit it or restrict it to certain parts of their infrastructure, but a major backbone provider doing the same?

Input please. Comments like "well, use another machine, or use HTTP GET instead of ping to measure performance" are not helpful: I am fully capable of doing an end-run around these blocks any day.

Thanks, bye,
Kai
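[For illustration, a minimal sketch of the kind of periodic probe described above: 5 echo requests to the target every 5 minutes, with loss and average RTT appended to a log for later graphing. It assumes Linux iputils ping; the target matches the post, but the log file name and parsing are illustrative, not the poster's actual tooling.]

```python
#!/usr/bin/env python3
"""Minimal periodic ping probe: send a few ICMP echoes to a target and
append timestamp, loss %, and average RTT to a CSV for trend graphing."""

import csv
import re
import subprocess
import time

TARGET = "157.130.178.1"   # Seattle border router mentioned in the post
COUNT = 5                  # 5 echo requests per probe
INTERVAL = 300             # probe every 5 minutes
LOGFILE = "ping_log.csv"   # hypothetical output file

RTT_RE = re.compile(r"time[=<]([\d.]+) ?ms")  # matches Linux ping output

def probe(target: str, count: int) -> list:
    """Run the system ping and return the individual RTTs in ms.
    An empty list means no replies (host down, or ICMP filtered)."""
    try:
        out = subprocess.run(
            ["ping", "-c", str(count), "-W", "2", target],
            capture_output=True, text=True, timeout=60,
        ).stdout
    except subprocess.TimeoutExpired:
        return []
    return [float(m) for m in RTT_RE.findall(out)]

def main() -> None:
    while True:
        rtts = probe(TARGET, COUNT)
        loss = 100.0 * (COUNT - len(rtts)) / COUNT
        avg = sum(rtts) / len(rtts) if rtts else None
        with open(LOGFILE, "a", newline="") as f:
            csv.writer(f).writerow([int(time.time()), loss, avg])
        time.sleep(INTERVAL)

if __name__ == "__main__":
    main()
```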