The current high-watt cooling technologies are definitely more expensive (much more). Also, a facility would still need traditional forced air to maintain the building climate.

tv

----- Original Message -----
From: "Todd Glassey" <tglassey@earthlink.net>
To: "Tony Varriale" <tvarriale@comcast.net>; <nanog@merit.edu>
Sent: Wednesday, January 24, 2007 2:09 PM
Subject: Re: Colocation in the US.
If the cooling is cheaper than the cost of the A/C or provides a backup, it's a no-brainer.
Todd Glassey
-----Original Message-----
From: Tony Varriale <tvarriale@comcast.net>
Sent: Jan 24, 2007 11:20 AM
To: nanog@merit.edu
Subject: Re: Colocation in the US.
I think the better questions are: when will customers be willing to pay for it? and how much? :)
tv

----- Original Message -----
From: "Mike Lyon" <mike.lyon@gmail.com>
To: "Paul Vixie" <vixie@vix.com>
Cc: <nanog@merit.edu>
Sent: Wednesday, January 24, 2007 11:54 AM
Subject: Re: Colocation in the US.
Paul brings up a good point. How long before we call a colo provider to provision a rack, power, bandwidth and a to/from connection in each rack to their water cooler on the roof?
-Mike
On 24 Jan 2007 17:37:27 +0000, Paul Vixie <vixie@vix.com> wrote:
drais@atlasta.net (david raistrick) writes:
I had a data center tour on Sunday where they said that the way they provide space is by power requirements. You state your power requirements, and they give you enough rack/cabinet space to *properly* house gear that consumes that much power.

"properly" is open for debate here. ... It's possible to have a facility built to properly power and cool 10kW+ per rack. It's just that most colo facilities aren't built to that level.
i'm spec'ing datacenter space at the moment, so this is topical. at 10kW/R you'd either cool ~333W/SF at ~30 SF/R, or you'd dramatically increase SF/R by requiring a lot of aisleway around every set of racks (~200 SF per 4R cage) to get it down to 200W/SF, or you'd compromise on W/R. i suspect that the folks offering 10kW/R are making it up elsewhere, like 50 SF/R averaged over their facility. (this makes for a nice-sounding W/R number.) i know how to cool 200W/SF but i do not know how to cool 333W/SF unless everything in the rack is liquid cooled or unless the forced air is bottom->top and the cabinet is completely enclosed and the doors are never opened while the power is on.
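A quick back-of-the-envelope sketch of that arithmetic (the densities and footprints are just the figures quoted above; nothing here is a design tool):

    # Back-of-the-envelope check of the density numbers above.
    def watts_per_sqft(kw_per_rack, sqft_per_rack):
        # convert kW per rack into watts, spread over the rack's footprint
        return kw_per_rack * 1000.0 / sqft_per_rack

    def sqft_needed_per_rack(kw_per_rack, target_w_per_sqft):
        # footprint each rack needs to stay under a given W/SF budget
        return kw_per_rack * 1000.0 / target_w_per_sqft

    print(watts_per_sqft(10, 30))        # 10kW/R on ~30 SF/R -> ~333 W/SF
    print(sqft_needed_per_rack(10, 200)) # 10kW/R held to 200 W/SF -> 50 SF/R
    print(watts_per_sqft(4 * 10, 200))   # a 4R cage at 10kW/R in ~200 SF -> 200 W/SF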
you can pay over here, or you can pay over there, but TANSTAAFL. for my own purposes, this means averaging ~6kW/R with some hotter and some colder, and cooling at ~200W/SF (which is ~30 SF/R). the thing that's burning me right now is that for every watt i deliver, i've got to burn a watt in the mechanical to cool it all. i still want the rackmount server/router/switch industry to move to liquid, which is about 70% more efficient (in the mechanical) than air as a cooling medium.
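For illustration, the same sort of sketch for the mechanical load. Two readings here are mine, not the post's: that "burn a watt in the mechanical" per delivered watt means cooling power roughly equals IT power (PUE around 2), and that "~70% more efficient" translates to dividing the mechanical draw by 1.7. The 100-rack count is made up.

    # Rough facility-power sketch under the assumptions stated above.
    racks = 100                  # hypothetical rack count, for illustration only
    it_kw = racks * 6.0          # ~6 kW/R average, per the paragraph above
    air_mech_kw = it_kw * 1.0    # one watt in the mechanical per watt delivered (PUE ~2)
    liquid_mech_kw = air_mech_kw / 1.7   # reading "~70% more efficient" as 1/1.7 of the air figure

    print("air-cooled total:    %.0f kW" % (it_kw + air_mech_kw))
    print("liquid-cooled total: %.0f kW" % (it_kw + liquid_mech_kw))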
It's a good way of looking at the problem, since the flip side of power consumption is cooling: pack too many servers into a small space (rack or cabinet) and you have a big cooling problem.
A problem, yes, but one that can be engineered around (who'd have ever thought we could get 1000Mb/s through cat5, after all!)
i think we're going to see a more Feynman-like circuit design where we're not dumping electrons every time we change states, and before that we'll see a standardized gozinta/gozoutta liquid cooling hookup for rackmount equipment, and before that we're already seeing Intel and AMD in a watts-per-computron race. all of that would happen before we'd air-cool more than 200W/SF in the average datacenter, unless Eneco's chip works out in which case all bets are off in a whole lotta ways. -- Paul Vixie