Anyways, initial reports are that, as per my advice, the customer called the vendor and said "voip not working"; the vendor said "i changed something, won't tell you what, reboot everything in 30", and now things seem to work perfectly. Strangely enough, EVEN the traceroutes.
This is obviously not best effort. My best guess would be "managed bandwidth" differentiated by IP ranges, and that the "change" was a different pool assignment.
it's hard to say. could be that a peering connection was down or congested, or that cold-potato routing within said provider was suboptimal; there are any number of rational reasons other than "managed bandwidth".
I suspect the stellar icmp echo performance is also intentional.
as stated previously, eliciting a response out of a router through "icmp processing" is vastly different to the standard process of forwarding a packet. there are any number of reasons why icmp-ttl-exceeded response times can be vastly over or vastly under the actual round-trip time of a packet. if you still don't believe it, do a search for "Cisco Control Plane Policing" or CoPP; other vendors have similar mechanisms.
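to make the CoPP point concrete, here's roughly what rate-limiting icmp toward a router's cpu looks like on an IOS box (ACL-ICMP / CM-ICMP / PM-CoPP are names i made up for the example, and exact syntax varies by platform and software train):

```
ip access-list extended ACL-ICMP
 permit icmp any any echo
 permit icmp any any echo-reply
!
class-map match-all CM-ICMP
 match access-group name ACL-ICMP
!
policy-map PM-CoPP
 class CM-ICMP
  ! cap icmp punted to the route processor at 64 kbps
  police 64000 conform-action transmit exceed-action drop
!
control-plane
 service-policy input PM-CoPP
```

anything over the policer rate is simply dropped, and even conforming probes are answered by the (deprioritised) cpu rather than the forwarding hardware, so their timings tell you nothing about the forwarding path.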
Compare:

tcptraceroute lsvomonline.dnsalias.com -q 5 -w 1 80 -f 7
Selected device eth0, address 192.168.0.3, port 33204 for outgoing packets
Tracing the path to lsvomonline.dnsalias.com (82.166.56.247) on TCP port 80 (www), 30 hops max
 7  kar2-so-7-0-0.newyork.savvis.net (204.70.150.253)  45.008 ms  52.978 ms  32.404 ms  50.676 ms  33.657 ms
 8  dcr3-ge-0-2-1.newyork.savvis.net (204.70.193.98)  49.037 ms  33.145 ms  48.029 ms  34.355 ms  48.453 ms
[..]
using tcptraceroute in this manner is NO DIFFERENT to a normal traceroute: the routers at the intermediate hops are still doing essentially the same icmp-ttl-exceeded behaviour, so the same "can't read anything into the latency" statements i've made a few times now still apply. in any case, it's good to hear you have your issue resolved. cheers, lincoln.
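p.s. your own numbers already make the point: within each hop, the probe-to-probe spread is larger than the difference between hops, so nothing about per-hop latency can be read off those lines. a quick parse (just a throwaway sketch over the two hop lines quoted above):

```python
import re

# the two hop lines quoted from the tcptraceroute output above
hops = [
    "7 kar2-so-7-0-0.newyork.savvis.net (204.70.150.253) "
    "45.008 ms 52.978 ms 32.404 ms 50.676 ms 33.657 ms",
    "8 dcr3-ge-0-2-1.newyork.savvis.net (204.70.193.98) "
    "49.037 ms 33.145 ms 48.029 ms 34.355 ms 48.453 ms",
]

def rtts(line):
    # pull each "NN.NNN ms" probe time out of a hop line
    return [float(x) for x in re.findall(r"([\d.]+) ms", line)]

for line in hops:
    samples = rtts(line)
    spread = max(samples) - min(samples)
    print(f"hop {line.split()[0]}: min {min(samples)} ms, spread {spread:.3f} ms")
```

hop 7 varies by ~20 ms between probes and hop 8 by ~16 ms, while their minimums differ by under 1 ms — control-plane jitter swamps any real per-hop difference.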