In article <ytafhal5xz.fsf@cesium.clock.org>, "Sean M. Doran" <smd@clock.org> wrote:
> Why? ping and traceroute are poor predictors of locality and available
> bandwidth between the originator and target of those tools.
You can get an interesting set of data by sending a few rounds of pings of different sizes. Make a scatterplot of ping packet size versus delay and fit a line to the points: the slope is the inverse of the available bandwidth (seconds per byte), and the y-intercept is the raw latency. The standard deviation of the delays gives you an idea of congestion. Is it crude? Yes, but I don't know of anything better. Is it ugly? Yes, but no uglier than traceroute. This isn't my idea; I saw it in "bing".

Once you have a metric coded up, you can have each potential data source measure the quality of its connectivity to the endpoint. After they have agreed amongst themselves (probably via an intermediary server) which server is best, they can cause the session to be moved to that most probably optimal server.

-- 
Shields, CrossLink.
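A minimal sketch of the size-vs-delay fit described above, assuming the (packet size, round-trip delay) samples have already been gathered, e.g. by parsing the output of "ping -s SIZE host" for several sizes. The function name and the sample numbers are illustrative only; this is not code from bing itself.

#!/usr/bin/env python3
# Rough bing-style link estimate: least-squares fit of RTT against packet size.
from statistics import mean, stdev

def estimate_link(samples):
    """samples: list of (size_bytes, rtt_seconds) tuples.

    Returns (bandwidth_bytes_per_sec, base_latency_sec, delay_stdev_sec).
    """
    xs = [s for s, _ in samples]
    ys = [r for _, r in samples]
    mx, my = mean(xs), mean(ys)
    # Ordinary least squares: rtt = intercept + slope * size
    slope = sum((x - mx) * (y - my) for x, y in samples) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    # Slope is seconds per byte, so its inverse approximates available bandwidth.
    bandwidth = 1.0 / slope if slope > 0 else float("inf")
    # Scatter of the delays around the fit is a crude congestion indicator.
    residuals = [y - (intercept + slope * x) for x, y in samples]
    return bandwidth, intercept, stdev(residuals)

if __name__ == "__main__":
    # Fabricated example numbers, just to show the shape of the input.
    samples = [(64, 0.0210), (512, 0.0245), (1024, 0.0291),
               (1400, 0.0318), (64, 0.0205), (1024, 0.0302)]
    bw, lat, jitter = estimate_link(samples)
    print("~%.2f MB/s, base latency %.1f ms, jitter %.2f ms"
          % (bw / 1e6, lat * 1000, jitter * 1000))

Each candidate server would run something like this against the client's endpoint and report its three numbers to the intermediary, which then picks the server with the best combination and redirects the session there.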