I would almost venture a guess that the whole value of public interconnects is largely dead, due to the restrictive peering policies put in place by the larger networks and the lack of interest/clue among the smaller providers.
Several influential people I know share that view, though in some cases with different justifications. In spite of the EMRED error Randy reposted earlier (A host is a host from coast to coast...) the fact is that at some strata, traffic *does* tend to stay local. Certainly a workstation has more to say to its local file/mail/news/whatever servers, on a counted-bit basis, than it has to say to more distant servers of any protocol. (Here, "local" is defined primarily as the LAN but is intended to sweep in the "campus" or even the "corporate Intranet".)

Conventional thinking is that beyond that threshold -- when a workstation wants to trade packets with some host outside the local administrative region, that is, something out on the "public internet" -- there is no blip on the histogram for servers in a campus or intranet which is topologically close to its own. The flat-rate pricing paradigm that dominates the North American internet market (flat per distance, if not necessarily per connect time or packets sent/received) gives no _cost_ incentive for fetching something locally if there are also copies of it to be had "long distance." There are, however, performance and reliability incentives for avoiding the long-distance links if you know that there is a way to do so and still reach your objective (another copy of GNU Emacs, or an X-rated GIF, or whatever).

I would like to know if anyone has measured this one way or the other, since a demonstrated tendency toward local traffic might open some currently-closed minds on the value of joining *hundreds* of regional IXPs and regionalizing our routes, so that we can inject a subset into each such IXP without giving anyone unintended transit or subsidizing their long-haul costs.
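For anyone inclined to take a first cut at that measurement, a minimal sketch follows. It assumes you already have flow records reduced to (src, dst, bytes) tuples (from NetFlow export, a packet-header trace, or whatever you collect locally -- the input format here is an assumption), and it simply buckets bytes by whether both endpoints fall inside a hand-maintained list of "local" prefixes. The prefixes shown are documentation placeholders, not anyone's real address space.

import ipaddress
from collections import Counter

# Placeholder "local" prefixes standing in for a campus or regional
# address range; replace with your own before drawing any conclusions.
LOCAL_PREFIXES = [ipaddress.ip_network(p)
                  for p in ("192.0.2.0/24", "198.51.100.0/24")]

def is_local(addr):
    """True if the address falls inside any configured local prefix."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in LOCAL_PREFIXES)

def tally(flow_records):
    """Sum bytes into 'local' vs. 'long-haul' buckets.

    flow_records: iterable of (src, dst, byte_count) tuples, however
    you reduce your traces to that form.
    """
    bytes_by_scope = Counter()
    for src, dst, nbytes in flow_records:
        scope = "local" if (is_local(src) and is_local(dst)) else "long-haul"
        bytes_by_scope[scope] += nbytes
    return bytes_by_scope

if __name__ == "__main__":
    sample = [("192.0.2.10", "192.0.2.20", 5000),
              ("192.0.2.10", "203.0.113.7", 1200)]
    print(tally(sample))  # Counter({'local': 5000, 'long-haul': 1200})

The ratio that falls out of something like this, measured at a campus border rather than on the LAN, is the number that would actually speak to whether regional IXPs are worth the trouble.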