Abstractions and analogies aside, is this really a problem, and is it really worth solving? Sounds like a lot of additional complexity for the supposed benefits.
A key comment that puts this paper in perspective (from http://news.com.com/2100-1033-984694.html?tag=fd_top ):
"The extent to which the real Internet conforms to these mathematical models is not yet well understood," Roughgarden said.
In particular, BGP uses neither latency nor congestion as a parameter to choose a path (assuming conditions are such that the BGP session can stay up, of course ;). Widespread deployment of a pure-performance criterion for path selection could indeed produce the type of problem described. Standard BGP flap-dampening mechanisms would mitigate, but not eliminate, the potentially negative effects.

Chasing the last ms of optimization tends both to focus traffic on the single "best" link and to increase the rate of route change as the "best" continually changes. Considering alternate paths with roughly similar performance significantly changes the picture: it not only reduces the required rate of route change, but also tends to spread the load across the range of valid (near-optimal) paths, and thus significantly mitigates the concerns raised in the paper.

cheers -- Sean
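The contrast Sean draws can be sketched as a toy example (all names and numbers below are hypothetical illustrations, not BGP code or any real implementation): strict lowest-latency selection flips routes on tiny jitter, while accepting any path within a tolerance of the best yields a stable set to spread load across.

```python
def best_path(latencies):
    """Pure-performance selection: the single lowest-latency path."""
    return min(latencies, key=latencies.get)

def near_optimal_paths(latencies, tolerance=0.10):
    """Accept every path whose latency is within (1 + tolerance) of the best."""
    best = min(latencies.values())
    return sorted(p for p, ms in latencies.items() if ms <= best * (1 + tolerance))

# Two successive latency snapshots (ms) with small measurement jitter.
t0 = {"path_a": 50.0, "path_b": 51.0, "path_c": 80.0}
t1 = {"path_a": 51.5, "path_b": 50.5, "path_c": 80.0}

# Strict best-path selection flips from path_a to path_b on a ~1 ms change...
print(best_path(t0), best_path(t1))   # path_a path_b

# ...while the near-optimal set stays the same across both snapshots,
# so routes stop churning and load can be spread over the whole set.
print(near_optimal_paths(t0))          # ['path_a', 'path_b']
print(near_optimal_paths(t1))          # ['path_a', 'path_b']
```

The design point is the tolerance band: it trades a bounded amount of per-packet optimality for route stability and load spreading, which is exactly the mitigation described above.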