Here is a little hint: most distributed applications in traditional jobsets tend to work best when they are close together. Unless you can map those jobsets onto truly partitioned algorithms that work on local copy, this is a _non-starter_.
I thought I had made my view clear in this respect in my earlier postings. When moving to the kinds of optical extension techniques I outlined earlier in this thread, I can't be too emphatic in noting that one size does not fit all. And while I appreciate the hint, you probably don't realize it yet, but in some ways you are helping to make my argument.

Consider for a moment my initial point (my main point, in fact) concerning moving all of the LAN gear in an enterprise building out to the cloud (customer-owned data center or colo, macht nichts). My contention is that this not only eliminates LAN rooms and all the switches and environmentals that fill them, but also radically reduces the bulk of the gear normally found in the building's telecom center, since the hierarchical routing infrastructure and most servers used for networking purposes (along with the power and air provisions that keep them going) would no longer have a place onsite, either. Rather, these elements, too, could be centrally located in an offsite data center (hence reducing their overall number for multi-site networks), _or in a colo_, or in one of the enterprise's other sites where it makes sense. "Or in a colo" is especially relevant here, and in some ways related to the point you are arguing. Many enterprises, in fact, have long since taken the steps, over their own dark fiber networks (now lit, of course) and leased optical facilities, of moving their server farms and major network node positions to 111-8th, 611 Wilshire, Exodus and scores of other exchange locations around the world, although most of them, up until now, have not taken the next precipitous step of moving their LAN gear out to the cloud as well. Of course, when they do decide to free the premises of LAN gear, they will also obviate the need for many of the routers and associated networking elements in those buildings, thus streamlining L3 route administration within the intranet. I should emphasize that, when played properly, this is NOT a zero-sum game.

By the same token, what we're discussing here is really a situational call. I grant you, for instance, that some jobs, perhaps many, are best suited to having their elements sited close to one another. Many, as I've outlined above, do not fit this constraint. This is a scalar kind of decision process: the smallest instance of the fractal doesn't require the same absolute level of provisioning as the largest, and each is a candidate that must meet a minimum set of criteria before making full sense.
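To put rough numbers behind the "sited close together" consideration, here is a back-of-the-envelope sketch in Python. The ~5 microseconds per kilometer figure is the usual rule of thumb for one-way propagation in glass; the distances are ones I've picked purely for illustration, not anyone's actual plant:

# Rough propagation-delay estimate for metro-scale fiber extensions.
# Assumes ~5 us/km one way in glass (c divided by a refractive index of ~1.5).
# The distances below are purely illustrative.

US_PER_KM_ONE_WAY = 5.0  # microseconds per kilometer, one way

def added_rtt_us(km: float) -> float:
    """Extra round-trip time, in microseconds, for a fiber run of `km` kilometers."""
    return 2 * km * US_PER_KM_ONE_WAY

for km in (0.1, 2, 20, 80):  # closet-to-closet, campus, metro, regional
    print(f"{km:6.1f} km  ->  +{added_rtt_us(km):8.1f} us RTT")

# A chatty exchange that makes N serialized round trips pays N times that
# penalty; a bulk, pipelined transfer largely does not. That difference is
# the situational call described above.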
Let's assume we have abundant dark fiber, and an 800-strand ribbon fiber cable costs the same as a UTP run. Can you get me some quotes from a few folks about terminating and patching 800 strands x2?
This is likely a rhetorical question, although it needn't be. Yes, I can get those quotes, and quotes many times greater in scope, and I have done so for a number of financial trading floors and outside-plant dark nets. It wasn't very long ago, however, that the same question could still be asked about UTP, since in those earlier times every wire of every pair required the use of a soldering iron. My point is that the state of the art of fiber heading, connectorization and splicing continues to improve all the time, as do the quality and cost of pre-connectorized jumpers (in the event your question had to do with jumpers and long cross-connects).
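Just to frame the arithmetic behind the question, a quick sketch in Python; every unit cost in it is a placeholder I've invented purely for illustration, since real numbers only come from quotes of the kind described above:

# Back-of-the-envelope for the "800 strands x2" question.
# ALL unit costs below are hypothetical placeholders, not quotes.

strands_per_cable = 800
cable_ends = 2                  # both ends terminated and patched
terminations = strands_per_cable * cable_ends

cost_per_termination = 25.0     # hypothetical: splice onto a pigtail, per strand
cost_per_patch_port = 15.0      # hypothetical: patch-panel position, per strand per end

total = terminations * (cost_per_termination + cost_per_patch_port)
print(f"{terminations} terminations -> ~${total:,.0f} at the placeholder unit costs")

The point of the sketch is only that the cost scales linearly with strand count at whatever the going per-termination rate happens to be, and that rate keeps falling.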
There is a reason most people, who are backed up by sober accountants, tend to cluster stuff under one roof.
Agreed. Sometimes, however, perhaps quite often in fact, one can attribute this behavior to a quality known more commonly as bunker mentality.

When I closed my preceding message in this subthread I stated I would welcome a continuation of this discussion offlist with anyone who was interested. Since then I've received onlist responses, so I responded here in kind, but my earlier offer still holds.

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

----------
On Mon Mar 31 9:53, "vijay gill" sent:
On Sat, Mar 29, 2008 at 3:04 PM, Frank Coluccio <frank@dticonsulting.com> wrote:
Michael Dillon is spot on when he states the following (quotation below),
although he could have gone another step in suggesting how the distance
insensitivity of fiber could be further leveraged:

Dillon is not only not spot on; Dillon is quite a bit away from being spot on.
Read on.
The high speed fibre in Metro Area Networks will tie it all together
with the result that for many applications, it won't matter where
the servers are.
In fact, those same servers, and a host of other storage and network elements,
can be returned to the LAN rooms and closets of most commercial buildings from
whence they originally came prior to the large-scale data center consolidations
of the current millennium, once organizations decide to free themselves of the
100-meter constraint imposed by UTP-based LAN hardware and replace those LANs
with collapsed fiber backbone designs that attach to centrally located switches (which could
be either in-building or remote), instead of the minimum two switches on every
floor that has become customary today.

Here is a little hint: most distributed applications in traditional jobsets tend to work
best when they are close together. Unless you can map those jobsets onto truly partitioned
algorithms that work on local copy, this is a _non-starter_.
We often discuss the empowerment afforded by optical technology, but we've barely
scratched the surface of its ability to effect meaningful architectural changes.

No matter how much optical technology you have, it will tend to be more expensive to run,
have higher failure rates, and use more power than simply running fiber or copper inside
your datacenter. There is a reason most people, who are backed up by sober accountants,
tend to cluster stuff under one roof.
The earlier prospects of creating consolidated data centers were once
near-universally considered timely and efficient, and they still are in many
respects. However, now that the problems associated with a/c and power have
entered into the calculus, some data center design strategies are beginning to
look more like anachronisms that have been caught in a whiplash of rapidly
shifting conditions, and in a league with the constraints that are imposed by the
now-seemingly-obligatory 100-meter UTP design.
Frank, let's assume we have abundant dark fiber, and an 800-strand ribbon fiber
cable costs the same as a UTP run. Can you get me some quotes from a few folks about
terminating and patching 800 strands x2?
/vijay
Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile
On Sat Mar 29 13:57 , sent:
Can someone please, pretty please with sugar on top, explain
the point behind high power density?
It allows you to market your operation as a "data center". If
you spread it out to reduce power density, then the logical
conclusion is to use multiple physical locations. At that point
you are no longer centralized.
In any case, a lot of people are now questioning the traditional
data center model from various angles. The time is ripe for a
paradigm change. My theory is that the new paradigm will be centrally
managed, because there is only so much expertise to go around. But
the racks will be physically distributed, in virtually every office
building, because some things need to be close to local users. The
high speed fibre in Metro Area Networks will tie it all together
with the result that for many applications, it won't matter where
the servers are. Note that the Google MapReduce, Amazon EC2, Hadoop
trend will make it much easier to place an application without
worrying about the exact locations of the physical servers.
Back in the old days, small ISPs set up PoPs by finding a closet
in the back room of a local store to set up modem banks. In the 21st
century folks will be looking for corporate data centers with room
for a rack or two of multicore CPUs running XEN, and Opensolaris
SANs running ZFS/raidz providing iSCSI targets to the XEN VMs.
--Michael Dillon