Have there been any fundamental changes in their network architecture that might explain pulling these caches?
Maybe not network architecture, but what if the cache-to-content ratio is dropping dramatically due to changes in consumer behavior and/or a huge increase in the underlying content (such as the adoption of higher and multiple-resolution video)? There has to be a tipping point at which a proportionally small cache becomes almost worthless from a traffic-saving perspective. If you run a cluster, one presumes you can see what your in/out ratio looks like and where the trend line is headed.

Another possibility is security. They may need additional security credentials for newer services that they are reluctant to load into remote cache clusters they don't physically control.

Mark.
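For what it's worth, the tipping-point argument can be sketched with a back-of-envelope model. This is purely hypothetical (not anyone's real traffic data): assume equally sized objects whose request popularity follows a Zipf law, and a cache that pins the top-k most popular objects. As the catalog grows while the cache stays fixed, the fraction of requests the cache can serve decays:

```python
def zipf_hit_ratio(catalog_size: int, cached: int, s: float = 1.0) -> float:
    """Fraction of requests served from cache when the top `cached` of
    `catalog_size` objects are held and popularity ~ 1 / rank**s (Zipf)."""
    weights = [1.0 / rank**s for rank in range(1, catalog_size + 1)]
    return sum(weights[:cached]) / sum(weights)

# Same absolute cache size, growing catalog: the hit ratio drifts
# down toward the "almost worthless" tipping point.
for n in (10_000, 100_000, 1_000_000):
    print(n, round(zipf_hit_ratio(n, 1_000), 3))
```

Under these toy assumptions, a cache covering 10% of the catalog serves roughly three quarters of requests, but the same cache against a 100x larger catalog serves only about half, which is the kind of trend line an operator watching their in/out ratio would see.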