In message <200102212028.MAA50344@redpaul.mfnx.net>, Paul A Vixie writes:
Oh god, I hope not. RTT has never been an accurate predictor of end-to-end performance. (Just ask anyone who bought into ping-based global server load balancing.) ASPATH length is almost as bad a predictor as RTT.
well, it's the way icmp_echo is handled in some vendors' routers, and sometimes also a poor IP stack implementation on the echoing device, that is the problem.
no, that is not the problem. oh i admit that ping time jitter is ~random. but even if it weren't, RTT doesn't drive performance; (bw*delay)-loss does.
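As a rough illustration of that point, here is a minimal sketch of how loss and delay together bound steady-state TCP throughput. It uses the Mathis et al. approximation, throughput ~ (MSS/RTT) * (C/sqrt(p)), which is not the exact (bw*delay)-loss expression above; the MSS, RTT, and loss figures are made up for the example.

```python
import math

def mathis_throughput(mss_bytes, rtt_seconds, loss_rate):
    """Rough steady-state TCP throughput estimate (Mathis et al. approximation).

    throughput ~ (MSS / RTT) * (C / sqrt(p)), with C ~ sqrt(3/2).
    An illustrative stand-in, not the poster's exact formula: it shows how
    loss and delay together, not RTT alone, bound performance.
    """
    C = math.sqrt(3.0 / 2.0)
    return (mss_bytes / rtt_seconds) * (C / math.sqrt(loss_rate))

# Two paths with identical 80 ms RTT but different loss rates diverge sharply:
for loss in (0.0001, 0.01):
    bps = 8 * mathis_throughput(1460, 0.080, loss)
    print(f"loss={loss:.4%}  ->  ~{bps / 1e6:.1f} Mbit/s")
```

With the same RTT, the low-loss path sustains roughly ten times the throughput of the lossy one, which is why an RTT-only metric can rank paths badly.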
And how does "delay" differ from RTT, except for the obvious constant factor? --Steve Bellovin, http://www.research.att.com/~smb
And how does "delay" differ from RTT, except for the obvious constant factor?
--Steve Bellovin, http://www.research.att.com/~smb
The RTT is the sum of two delays, the forward and the return, and there is no simple way to separate them. In general, you don't even know what path the return packets are taking. The same problem applies to trying to measure packet loss and using the results to influence routing decisions.

DS
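A toy sketch of that ambiguity, with made-up numbers: two very different forward/return splits produce the same measured RTT, so an echo-based probe alone cannot tell them apart.

```python
# Hypothetical one-way delays (ms); a ping-style probe only sees their sum.
samples = {
    "symmetric path":  {"forward_ms": 40.0, "return_ms": 40.0},
    "asymmetric path": {"forward_ms": 10.0, "return_ms": 70.0},
}

for name, s in samples.items():
    rtt = s["forward_ms"] + s["return_ms"]
    print(f"{name}: RTT = {rtt} ms "
          f"(forward {s['forward_ms']} / return {s['return_ms']})")

# Both cases report RTT = 80.0 ms, so the measurement alone can't say which
# leg is slow or which path the return traffic actually took.
```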