Just to close this issue on the list: a (top) engineer from AT&T contacted me offline and helped us out. Turns out that 12.88.71.13 is located in Kansas City and tbr1.sl9mo.ip.att.net (12.122.112.22) is in St. Louis. AT&T has two Layer 1 connections to that site for redundancy, but traffic was flowing over the longer loop. The engineer tweaked route weights so that the traffic now prefers the shorter link to tbr2.sl9mo.ip.att.net (12.122.112.78), shaving about 12 msec.

He also explained that the jump of ~70 msec is due to how ICMP traffic within MPLS tunnels is handled. It wasn't until I ran a traceroute from a Cisco router that I even saw the MPLS labels (included in the ICMP responses) for each of the hops within the tunnel. Apparently each ICMP response generated within an MPLS tunnel (where TTL decrementing is allowed) is carried to the *end* of the tunnel and back again, so my next "hop" to tbr1.sl9mo.ip.att.net (12.122.112.22) was really showing the RTT to the end of the tunnel, Los Angeles. (A rough numeric sketch of this effect is appended after the quoted message.)

Frank

-----Original Message-----
From: Frank Bulk [mailto:frnkblk@iname.com]
Sent: Thursday, June 26, 2008 5:52 PM
To: nanog list
Subject: Possible explanations for a large hop in latency

Our upstream provider has a connection to AT&T (12.88.71.13) where I fairly consistently measure an RTT of 15 msec, but the next hop (12.122.112.22) comes in with an RTT of 85 msec. Unless AT&T is sending that traffic over a cable modem or to Europe and back, I can't see a reason for a consistent ~70 msec jump in RTT. Hops farther along the route add only a few msec each, so it doesn't appear that 12.122.112.22 is doing some kind of ICMP rate-limiting. Is this a real performance issue, or is there some logical explanation?

Frank
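
P.S. For anyone curious why every hop inside the tunnel reports roughly the same RTT, here is a minimal back-of-the-envelope model in Python. All hop names and one-way latency figures below are hypothetical assumptions (chosen so the tunnel head comes out near 15 msec and the tunnel tail near 85 msec, as in this thread); it only sketches the "ICMP reply is carried to the tail end and then back" behavior described above, and is not a measurement tool.

    # Illustrative model of why traceroute RTTs jump at the first hop inside
    # an MPLS LSP when ICMP time-exceeded responses are label-switched to the
    # tunnel tail end before being routed back to the source.

    ONE_WAY_MS = {  # hypothetical one-way latencies from the probing host
        "att-edge (12.88.71.13)": 7.5,       # tunnel head-end, Kansas City
        "tbr1.sl9mo (12.122.112.22)": 13.0,  # first hop inside the LSP, St. Louis
        "lsr-3 (hypothetical)": 25.0,        # a further label-switched hop
        "tail-end (Los Angeles)": 42.5,      # end of the LSP
    }
    TAIL = "tail-end (Los Angeles)"

    def observed_rtt(hop: str, inside_lsp: bool) -> float:
        """RTT that traceroute would report for `hop`.

        Normal hop: probe out to the hop, ICMP time-exceeded straight back,
        i.e. twice the one-way latency.
        Hop inside the LSP: the ICMP response first rides the LSP to the tail
        end, then is routed back to the source, so the reply path is
        (hop -> tail) + (tail -> source).
        """
        out = ONE_WAY_MS[hop]
        if not inside_lsp:
            return 2 * out
        hop_to_tail = ONE_WAY_MS[TAIL] - ONE_WAY_MS[hop]  # crude straight-line path assumption
        return out + hop_to_tail + ONE_WAY_MS[TAIL]

    if __name__ == "__main__":
        for hop in ONE_WAY_MS:
            inside = hop != "att-edge (12.88.71.13)"
            print(f"{hop:35s} {observed_rtt(hop, inside):6.1f} ms")

With these made-up numbers the head-end hop prints about 15 msec while every hop inside the LSP prints about 85 msec, i.e. one large jump followed by nearly flat per-hop RTTs, which is the pattern from the original question below.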