Michael Dillon is spot on when he states the following (quotation below), although he could have gone another step in suggesting how the distance insensitivity of fiber could be further leveraged:
The high speed fibre in Metro Area Networks will tie it all together with the result that for many applications, it won't matter where the servers are.
In fact, those same servers, and a host of other storage and network elements, can be returned to the LAN rooms and closets of most commercial buildings, whence they originally came prior to the large-scale data center consolidations of the current millennium. This becomes possible once organizations decide to free themselves of the 100-meter constraint imposed by UTP-based LAN hardware and replace those LANs with collapsed fiber backbone designs that attach to centralized switches (either in-building or remote), instead of the minimum of two switches on every floor that has become customary today.

We often discuss the empowerment afforded by optical technology, but we've barely scratched the surface of its ability to effect meaningful architectural change. The earlier prospects of creating consolidated data centers were once near-universally considered timely and efficient, and they still are in many respects. Now that the problems associated with cooling and power have entered the calculus, however, some data center design strategies are beginning to look like anachronisms caught in the whiplash of rapidly shifting conditions, in a league with the constraints imposed by the now-seemingly-obligatory 100-meter UTP design.

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sat Mar 29 13:57 , sent:
Can someone please, pretty please with sugar on top, explain the point behind high power density?
It allows you to market your operation as a "data center". If you spread it out to reduce power density, then the logical conclusion is to use multiple physical locations. At that point you are no longer centralized.
In any case, a lot of people are now questioning the traditional data center model from various angles. The time is ripe for a paradigm change. My theory is that the new paradigm will be centrally managed, because there is only so much expertise to go around, but the racks will be physically distributed, in virtually every office building, because some things need to be close to local users. The high speed fibre in Metro Area Networks will tie it all together with the result that for many applications, it won't matter where the servers are. Note that the Google MapReduce, Amazon EC2, Hadoop trend will make it much easier to place an application without worrying about the exact locations of the physical servers.
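To illustrate why the MapReduce model makes server location irrelevant to the application, here is a minimal single-process sketch in Python (not Google's or Hadoop's actual API): the program expresses work only as map and reduce steps, so a framework is free to schedule those steps on any physical machine the fibre reaches.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Emit (word, 1) pairs; which server runs this is the framework's concern.
    return [(w, 1) for w in doc.split()]

def reduce_phase(pairs):
    # Sum the counts per word, regardless of where the pairs were produced.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["fibre ties it together", "servers anywhere fibre reaches"]
result = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
# result["fibre"] == 2
```

In a real deployment the map and reduce calls run on whichever nodes the scheduler picks; the application code above would not change.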
Back in the old days, small ISPs set up PoPs by finding a closet in the back room of a local store to set up modem banks. In the 21st century folks will be looking for corporate data centers with room for a rack or two of multicore CPUs running Xen, and OpenSolaris SANs running ZFS/raidz providing iSCSI targets to the Xen VMs.
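The storage side of that rack could be stood up in a handful of commands. A hedged sketch, assuming OpenSolaris-era ZFS (the `shareiscsi` property shown here was the shorthand of that period, later superseded by COMSTAR); the disk device names are placeholders:

```shell
# Build a redundant raidz pool across three disks (placeholder device names).
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

# Carve out a 50 GB block volume to back one Xen guest.
zfs create -V 50G tank/xenvm01

# Publish the volume as an iSCSI target for the Xen host to attach.
zfs set shareiscsi=on tank/xenvm01
```

The Xen host would then log in to the target with its iSCSI initiator and hand the resulting block device to the VM as its virtual disk.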
--Michael Dillon