BGP is relatively good at determining the best path when you are a major carrier with connectivity to "everyone" in many locations (i.e. when traffic flows "naturally") and you engineer your network so that you have sufficient capacity to support the traffic flows.
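
To be concrete about what "best path" means here, below is a toy sketch in Python, using a made-up subset of the real attributes (local preference, AS path length, origin, router ID as a plain string compare), not the actual BGP decision process. The point it illustrates: however many candidate paths a router learns, the tie-breaker chain collapses them to exactly one winner.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Route:
        prefix: str
        next_hop: str
        local_pref: int = 100       # higher wins
        as_path: tuple = ()         # shorter wins
        origin: int = 0             # lower wins: 0=IGP, 1=EGP, 2=incomplete
        router_id: str = "0.0.0.0"  # final tie-breaker: lowest wins
                                    # (string compare here; real routers
                                    # compare the IDs numerically)

    def best_path(candidates):
        # However many equally good paths exist, the deterministic
        # tie-breaker chain returns exactly one route; every other
        # path carries no traffic at all.
        return min(candidates,
                   key=lambda r: (-r.local_pref, len(r.as_path),
                                  r.origin, r.router_id))

    routes = [
        Route("192.0.2.0/24", next_hop="10.0.0.1",
              as_path=(65001, 65002), router_id="10.0.0.1"),
        Route("192.0.2.0/24", next_hop="10.0.0.2",  # identical but for ID
              as_path=(65003, 65002), router_id="10.0.0.2"),
    ]
    print(best_path(routes))  # only the 10.0.0.1 path is ever used
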
In other words, BGP really only works well when most networks are overbuilt, so that there is a single uncongested best path through each network from every ingress to every egress, and the paths within any given network's core are roughly similar in capacity. Nowadays there is a lot more variability, both within networks and between different networks. How can a simple protocol provide optimal behavior across an MPLS network, an IP-over-ATM network, a network that is half GRE tunnels, and a network whose core links range from DS3 to OC48? I think BGP is another example of something "good enough" rising to prominence in spite of the fact that it is not optimal.

And another thing: how do we know this problem can ever be solved while we continue to use routing protocols that choose the *BEST* path? The best path is always a single path and, by definition, a single point of failure. How can we ever have a diverse and reliable network when its core routing paradigm is a single point of failure?

Note that people have built IP networks that provide two diverse paths at all times using multicast http://www.simc-inc.org/archive9899/Jun01-1999/bach2/Default.htm and such things may also be possible with MPLS. But are any of the researchers seriously looking at how to provide a network in which all packets flow through two diverse paths for better reliability?

--Michael Dillon