[#include: boiler-plate apology for operational content]

Google has released its PUE numbers:

<http://www.google.com/corporate/datacenters/measuring.html>

There is a nice explanation of this, including a graph showing why DC efficiency is more important than machine efficiency (on the second page) at this link:

<http://www.datacenterknowledge.com/archives/2008/10/01/google-the-worlds-mos...
I think GOOG deserves a hearty "well done" for getting a whole DC under 1.2 PUE for a whole year. (Actually, I think they deserve a "HOLY @#$@, NO @*&@#-ing WAY!!!")

Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.

For their next miracle, I expect GOOG to capture the waste heat from all their servers and co-gen more electricity to pump back into the servers. :-)

-- TTFN, patrick
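For readers who haven't run into the metric, PUE is simply total facility power divided by power delivered to the IT equipment; 1.0 is the theoretical ideal. A minimal sketch, with made-up numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 is the theoretical ideal (every watt reaches the IT load);
    typical enterprise facilities have historically run near 2.0.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers: 6 MW into the building with 5 MW reaching
# the servers is the 1.2 figure Google reports.
print(pue(6000.0, 5000.0))  # 1.2
```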
Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.
I wonder what it cost? :-)

-M
On Oct 1, 2008, at 2:04 PM, Martin Hannigan wrote:
Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.
I wonder what it cost? :-)
What cost to the environment of not doing it?

OK, green hat off. :) Seriously, I doubt GOOG isn't seeing serious savings from this over time. If they weren't, why would they do it?

-- TTFN, patrick
On Oct 1, 2008, at 2:04 PM, Martin Hannigan wrote:
Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.
I wonder what it cost? :-)
What cost to the environment of not doing it?
OK, green hat off. :) Seriously, I doubt GOOG isn't seeing serious savings from this over time. If they weren't, why would they do it?
They seem to be very environment-focused, so I'm sure doing anything that isn't is subject to scrutiny from the rest of the industry. Hopefully it won't come around to bite them. I had read an article on "The Planet" going as green as possible; then they had the huge outage, which I'm sure negated 2-3 times what they had done to that point.

Tuc/TBOH
I can't comment on the cost to a large data center, but I just finished renovating our small data center here in New York City by putting in a cogeneration system. We installed a natural gas microturbine, and are recapturing the waste heat to drive an absorption chiller. The rough rule of thumb is 30 kW ≈ 10 tons of chilling (a rough sanity check on that figure follows after the quoted text below). This type of cogeneration is a great match for a communications room because the load is relatively stable, and it supports dual-mode operation, grid-connect and grid-standalone, allowing you to run completely detached from the local utility in the event of an electrical disturbance. No more recip generator backup.

As we were the first to do this in New York City, a lot of the cost and delay in the project came in explaining to the city what a microturbine was and how it was going to work, demonstrating to the fire department that it was safer than 500 gallons of diesel on the roof of a mid-rise, etc.

If you count only the turbine equipment, the cost-savings math is pretty quick, a couple of years; but if you look at the cost of the entire project, 10 years is optimistic. However, unlike a traditional backup, you at least /have/ some sort of cost savings to offset the cap investment.

I am going to attempt to determine our PUE, using the methodology described in the Google paper. One must figure that "in the spirit it was intended" has to factor in the natural gas consumption, otherwise my PUE would be about 0.1. :)

Cheers,

David.

------------------------------------------------------------------------

Tuc at T-B-O-H.NET wrote:
On Oct 1, 2008, at 2:04 PM, Martin Hannigan wrote:
Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.
I wonder what it cost? :-)
What cost to the environment of not doing it?
OK, green hat off. :) Seriously, I doubt GOOG isn't seeing serious savings from this over time. If they weren't, why would they do it?
They seem to be very environment-focused, so I'm sure doing anything that isn't is subject to scrutiny from the rest of the industry.
Hopefully it won't come around to bite them. I had read an article on "The Planet" going as green as possible; then they had the huge outage, which I'm sure negated 2-3 times what they had done to that point.
Tuc/TBOH
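David's 30 kW ≈ 10 tons rule of thumb roughly checks out if you assume typical cogeneration equipment efficiencies. The constants below are assumptions, not figures from his post:

```python
# Sanity-checking the "30 kW = 10 tons" rule of thumb. The equipment
# efficiencies below are assumptions, not figures from David's post.
TON_KW = 3.517         # 1 ton of refrigeration = 3.517 kW of cooling
ELEC_EFF = 0.26        # assumed microturbine electrical efficiency
HEAT_RECOVERY = 0.50   # assumed fraction of fuel energy recovered as heat
ABSORPTION_COP = 0.70  # assumed single-effect absorption chiller COP

def chilling_tons(electrical_kw: float) -> float:
    """Estimate absorption-chiller output from turbine electrical output."""
    fuel_kw = electrical_kw / ELEC_EFF        # fuel energy going in
    heat_kw = fuel_kw * HEAT_RECOVERY         # recoverable waste heat
    return heat_kw * ABSORPTION_COP / TON_KW  # chiller output in tons

print(round(chilling_tons(30.0), 1))  # ~11.5 tons -- near the 10-ton rule
```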
I am going to attempt to determine our PUE, using the methodology described in the Google paper. One must figure that "in the spirit it was intended" has to factor in the natural gas consumption, otherwise my PUE would be about 0.1. :)
If you generate energy for your microturbine from a landfill (free methane gas), your PUE would be nearly zero. Obviously PUE can be skewed and shouldn't be considered as a single metric for anything other than a press release.

I would also suggest that Alex shouldn't hold his breath on more details. The details provided are interesting, but without context. (It's like: "Hi, we filter our river water to evaporate it." But are they calculating the cost of all that contaminated material and its disposal? The blowdown on their cooling towers would have to be many times more hazardous than normal, and may require additional treatment to make it safe to release).

Is any math being done to decide whether free river/water-side economization is more important (financially/environmentally) than cheap energy inputs?

If, rather than density, we REDUCE density and build very large-footprint data centers that can use ambient air (I've heard rumors that MSFT is using 85 degree air [cool side] in New Mexico), we could get to PUE numbers that were nearly ideal (hot air rises, natural convection, no fans, just PDU overhead, etc.).

Except where it impacts the bottom line, this all seems more like a fashion show than an actual business plan.

Deepak Jain
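A sketch of the accounting point David and Deepak are making: for a cogeneration site, the PUE you report depends entirely on what you count as power into the facility. Every number here is hypothetical:

```python
# PUE depends entirely on what counts as "power in." All inputs are
# hypothetical; THERM_TO_KWH is the energy content of natural gas.
THERM_TO_KWH = 29.3

it_load_kw = 100.0          # assumed IT (critical) load
grid_draw_kw = 10.0         # assumed residual utility feed
gas_therms_per_hour = 10.0  # assumed microturbine fuel burn

# Counting only the metered grid feed gives a nonsensical PUE < 1:
print(grid_draw_kw / it_load_kw)              # 0.1

# Counting the fuel energy "in the spirit it was intended":
fuel_kw = gas_therms_per_hour * THERM_TO_KWH  # ~293 kW of gas energy
print((grid_draw_kw + fuel_kw) / it_load_kw)  # ~3.0
```

Fuel-energy accounting isn't directly comparable to grid-electricity PUE, which is exactly the kind of skew Deepak describes.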
On Oct 1, 2008, at 5:44 PM, Deepak Jain wrote:
I am going to attempt to determine our PUE, using the methodology described in the Google paper. One must figure that "in the spirit it was intended" has to factor in the natural gas consumption, otherwise my PUE would be about 0.1. :)
If you generate energy for your microturbine from a landfill (free methane gas), your PUE would be nearly zero. Obviously PUE can be skewed and shouldn't be considered as a single metric for anything other than a press release.
I would also suggest that Alex shouldn't hold his breath on more details. The details provided are interesting, but without context.
Indeed. If they would refuse a visit to Cory Doctorow writing for Nature, I don't think we should hold our breath at all:

http://www.nature.com/news/2008/080903/full/455016a.html

"It doesn't disclose the dimensions or capacity of those data centres. Nature wanted me to visit one for this piece, but a highly placed Googler told me that no one from the press had ever been admitted to a Google data centre; it would require a decision taken at the board level. Which is too bad."

Regards
Marshall
(It's like:
"Hi, we filter our river water to evaporate it." But are they calculating the cost of all that contaminated material and its disposal? The blowdown on their cooling towers would have to be many times more hazardous than normal, and may require additional treatment to make it safe to release).
Is any math being done to decide whether free river/water-side economization is more important (financially/environmentally) than cheap energy inputs?
If, rather than density, we REDUCE density and build very large-footprint data centers that can use ambient air (I've heard rumors that MSFT is using 85 degree air [cool side] in New Mexico), we could get to PUE numbers that were nearly ideal (hot air rises, natural convection, no fans, just PDU overhead, etc.).
Except where it impacts the bottom line, this all seems more like a fashion show than an actual business plan.
Deepak Jain
On Wed, 1 Oct 2008, Marshall Eubanks wrote:
I am going to attempt to determine our PUE, using the methodology described in the Google paper. One must figure that "in the spirit it was intended" has to factor in the natural gas consumption, otherwise my PUE would be about 0.1. :)
If you generate energy for your microturbine from a landfill (free methane gas), your PUE would be nearly zero. Obviously PUE can be skewed and shouldn't be considered as a single metric for anything other than a press release.
I would also suggest that Alex shouldn't hold his breath on more details. The details provided are interesting, but without context.
Indeed. If they would refuse a visit to Cory Doctorow writing for Nature, I don't think we should hold our breath at all:
Ack! US$32 for the article. :-)

---
Andy Grosser
andy [at] meniscus {dot} org
---
Google not counting electricity losses from power cords etc. gives the impression that it doesn't really want to account for everything and wants to skew the numbers as much as possible.

I would be far more interested in a metric that shows the amount of power used for each MIPS of CPU power (or whatever CPU horsepower metric other than clock speed). And also amount of power used for each gbps of telecom capacity USED.

Another metric would be how much power is used to store how many terabytes of data on disk. Disks consume much power too.

In the case of Google, the amount of data they spit out to the internet would be a good metric of how much work is being done. So how much power is consumed per gigabyte of data spit out to the internet would be a good metric of how efficient the data centre is. But for a bank, this would not be a good metric, since a lot of the work doesn't involve sending much data in/out (because much of the work of a transaction involves a lot of validation/logging).

To me, it seems that PUE is just a metric of how efficient the air conditioning is, not really how energy efficient the whole data centre is for how much work it does.
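A sketch of the kind of work-normalized metrics proposed above. None of these are standard industry metrics, and every input below is invented for illustration:

```python
# Work-normalized efficiency metrics. These are not standard metrics,
# and every input is a made-up placeholder.
def watts_per_unit(total_watts: float, units_of_work: float) -> float:
    """Normalize facility power by some measure of useful work."""
    return total_watts / units_of_work

site_power_w = 75_000_000   # hypothetical total facility draw (75 MW)
gbps_served = 2_000         # hypothetical egress capacity actually used
terabytes_stored = 500_000  # hypothetical disk footprint

print(watts_per_unit(site_power_w, gbps_served))       # W per Gbps served
print(watts_per_unit(site_power_w, terabytes_stored))  # W per TB stored
```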
Google not counting electricity losses from power cords etc. gives the impression that it doesn't really want to account for everything and wants to skew the numbers as much as possible.
I don't agree with this. It is commonly accepted that when computing DCIE/PUE, the point of "demarcation" (to use a term for the telco crowd) is the receptacle. If they did not include losses in transformation, UPS, distribution, etc., then I would agree. But they seem clear about that in the discussion.
I would be far more interested in a metric that shows the amount of power used for each MIPS of CPU power (or whatever CPU horsepower metric other than clock speed). And also amount of power used for each gbps of telecom capacity USED.
Another metric would be how much power is used to store how many terabytes of data on disk. Disks consume much power too.
I think you mean "energy", not telecom. While what you ask for is very important, that is generally a function of the efficiency of a given piece of equipment, closer to the consumer -- in other words, how efficient a Dell is vs. an HP, or something. These things do not relate to the definition of PUE/DCIE.
To me, it seems that PUE is just a metric of how efficient the air conditioning is.
This is the point. It's a metric of the FACILITY, not the COMPUTATION.
On Wed, 1 Oct 2008, Tuc at T-B-O-H.NET wrote:
OK, green hat off. :) Seriously, I doubt GOOG isn't seeing serious savings from this over time. If they weren't, why would they do it?
They seem to be very environment-focused, so I'm sure doing anything that isn't is subject to scrutiny from the rest of the industry.
Personal 747, cough, cough...

I'd bet at their scale, they're saving tens if not hundreds of thousands of dollars a month on their data center power bills by optimizing power efficiency.

----------------------------------------------------------------------
 Jon Lewis                   |  I route
 Senior Network Engineer     |  therefore you are
 Atlantic Net                |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
On Oct 1, 2008, at 4:18 PM, Jon Lewis wrote:
On Wed, 1 Oct 2008, Tuc at T-B-O-H.NET wrote:
OK, green hat off. :) Seriously, I doubt GOOG isn't seeing serious savings from this over time. If they weren't, why would they do it?
They seem to be very environment-focused, so I'm sure doing anything that isn't is subject to scrutiny from the rest of the industry.
Personal 747, cough, cough...
I'd bet at their scale, they're saving tens if not hundreds of thousands of dollars a month on their data center power bills by optimizing power efficiency.
I think you're being overly conservative.

~500,000 computers (cough; estimate as of 2006) [1]
~150W per computer (Google's are supposedly pretty efficient, so this number may be high)

That's about 75 megawatts = 657,000,000 kWh/year (if they're all turned on all the time, which recent articles suggest is probably the case [2]), or $65M per year @ $0.10 per kWh (possibly an over-estimate with datacenters located near hydropower dams).

If the average datacenter has a PUE of ~1.9 and Google's are at 1.2, that suggests they're saving something on the order of $10-20M per year with power efficiency efforts. Over a million bucks per month. Not bad.

This seems to mesh with the non-scientific, vague claim in the PUE document that "we save hundreds of millions of kWhs of electricity" and "cut our operating expenses by tens of millions of dollars."

There's serious money in improving server efficiency. Green feathers in the cap, green in the hand...

-Dave

[1] http://www.nytimes.com/2006/06/14/technology/14search.html?pagewanted=2&ei=5088&en=c96a72bbc5f90a47&ex=1307937600&partner=rssnyt&emc=rss
[2] http://news.cnet.com/8301-11128_3-9975495-54.html
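For anyone who wants to poke at the assumptions, here is the same back-of-envelope arithmetic as a runnable sketch (inputs taken from the post):

```python
# Reproducing the back-of-envelope arithmetic above, same rough inputs.
servers = 500_000          # 2006 estimate [1]
watts_per_server = 150.0   # possibly high for Google's hardware
usd_per_kwh = 0.10         # possibly high near hydro dams
hours_per_year = 8760

it_mw = servers * watts_per_server / 1e6  # 75.0 MW
it_kwh = it_mw * 1000 * hours_per_year    # 657,000,000 kWh/year
print(it_kwh * usd_per_kwh)               # ~$65.7M/year for the IT load alone

# Facility-level bill at an industry-typical PUE vs. the reported 1.2:
for pue in (1.9, 1.2):
    print(pue, it_kwh * pue * usd_per_kwh)
# The spread at these inputs is ~$46M/year -- larger than the $10-20M
# quoted above, which mainly shows how sensitive the estimate is to the
# assumed baseline PUE and power price.
```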
On Oct 1, 2008, at 3:06 PM, Tuc at T-B-O-H.NET wrote:
On Oct 1, 2008, at 2:04 PM, Martin Hannigan wrote:
Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.
I wonder what it cost? :-)
What cost to the environment of not doing it?
OK, green hat off. :) Seriously, I doubt GOOG isn't seeing serious savings from this over time. If they weren't, why would they do it?
They seem to be very environment-focused, so I'm sure doing anything that isn't is subject to scrutiny from the rest of the industry.
Hopefully it won't come around to bite them. I had read an article on "The Planet" going as green as possible; then they had the huge outage, which I'm sure negated 2-3 times what they had done to that point.
Tuc/TBOH
The Planet had an outage because something blew up and the fire department made them shut everything down. I wouldn't assume any sort of linkage between efficient design and the outage, except that one way to get a very efficient design is to remove redundant components. I don't think Google, or The Planet, or anyone else is doing that, though.
Patrick W. Gilmore wrote:
On Oct 1, 2008, at 2:04 PM, Martin Hannigan wrote:
Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.
I wonder what it cost? :-)
What cost to the environment of not doing it?
OK, green hat off. :) Seriously, I doubt GOOG isn't seeing serious savings from this over time. If they weren't, why would they do it?
Not talking down this PR release....

Without comparing locations, sizes of floor plates, etc., I am sure Google has more than A-F, so one has to wonder which data centers they left off the map.

I think I can submit without proof that a PUE of 1.2 is far more impressive in New Mexico or Arizona than it is in Vancouver, BC, since you are essentially measuring the energy to keep the datacenter at temperature throughout seasonal (or external) ambient heat deltas. Likewise, a 10,000 sq ft single-customer DC is far less impressive than a 200,000 sq ft general purpose (colo) DC. (They say "large scale" -- is that number of cores, or sq ft? I don't have a number for that.)

And to address Patrick's rhetorical question: if it costs you $400MM (like a datacenter proposed underground in Japan) to save $15MM/yr in energy costs, one could easily argue that the environmental "savings" is not sufficient to overcome the upfront investment. If you spent $40MM on trees (instead of $400MM on an investment to save $15MM/yr), you could argue the environment would be far better off.

Deepak
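The simple payback implied by Deepak's closing figures, with no discounting or energy-price escalation assumed:

```python
# Simple payback on the figures above (no discounting).
capex = 400e6          # proposed underground datacenter in Japan
annual_savings = 15e6  # energy savings per year
print(capex / annual_savings)  # ~26.7 years to break even
```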
I only quickly read this, but have the following question, should Google like to answer it...

Of the six datacenters, where are they all physically located?

Someone should get on the bandwagon of having a PUE standard that is climate-based. A PUE of 1.3 in the Caribbean is way more impressive than 1.3 in Quebec.

And, why the hell do people use PUE rather than DCIE? DCIE makes more sense. A PUE of 1.15 is a DCIE of .86, which is somewhat easier to quantify in one's mind. Translation would be, "for every 100 watts into a site, 86 goes to the critical load." (A quick conversion sketch follows after the quoted text below.)

I'd be interested to hear what economization methods they use.

And, while they touch on how the water evaporates to cool their datacenters (a la cooling towers), they neglect to tell us how much water is consumed and evaporated (in a heated form) into the atmosphere.

Don't take this as an attack on Google, but there is a lot more to a datacenter efficiency analysis than simply stating your PUE and some other data. For instance, if you have a higher PUE but consume no water, are you more eco-friendly? What about airside vs. waterside economization? Is a higher PUE acceptable if the power generation source is photovoltaic or wind (rather than coal or gas)? Do they do ice storage? If they are using river water, what does heating that water affect?

It's a good topic to talk about (and something I believe NANOG should focus on), but I'd love to see more nuts and bolts in the data from Google.
Google has released its PUE numbers:
<http://www.google.com/corporate/datacenters/measuring.html>
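Alex's PUE-to-DCIE translation is just a reciprocal; a quick sketch:

```python
# DCIE is the reciprocal of PUE: the fraction of input power that
# reaches the critical (IT) load.
def dcie(pue: float) -> float:
    return 1.0 / pue

for pue in (1.15, 1.2, 1.3, 2.0):
    print(f"PUE {pue:.2f} -> DCIE {dcie(pue):.0%}")
# PUE 1.15 -> DCIE 87% (1/1.15 = 0.8696; the post rounds down to .86)
```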
Alex Rubenstein wrote:
I only quickly read this, but have the following question, should Google like to answer it...
Of the six datacenters, where are they all physically located?
Based on their job openings, at least three are located in Mountain View, CA; The Dalles, OR; and Atlanta, GA.

--
Jeff Shultz
The datacenter in Atlanta is located in Suwanee, which is north of Atlanta. The building is operated by Quality Technology Services (www.qualitytech.com). I know since they occupy half of the building.

----------------------
Brian Raaen
Network Engineer

On Wednesday 01 October 2008, Alex Rubenstein wrote:
I only quickly read this, but have the following question, should Google like to answer it...
Of the six datacenters, where are they all physically located?
Someone should get on the bandwagon of having a PUE standard that is climate-based. A PUE of 1.3 in the Caribbean is way more impressive than 1.3 in Quebec.
And, why the hell do people use PUE rather than DCIE? DCIE makes more sense. A PUE of 1.15 is a DCIE of .86, which is somewhat easier to quantify in one's mind. Translation would be, "for every 100 watts into a site, 86 goes to the critical load."
I'd be interested to hear what economization methods they use.
And, while they touch on how the water evaporates to cool their datacenters (a la cooling towers), they neglect to tell us how much water is consumed and evaporated (in a heated form) into the atmosphere.
Don't take this as an attack on Google, but there is a lot more to a datacenter efficiency analysis than simply stating your PUE and some other data. For instance, if you have a higher PUE but consume no water, are you more eco-friendly? What about airside vs. waterside economization? Is a higher PUE acceptable if the power generation source is photovoltaic or wind (rather than coal or gas)? Do they do ice storage? If they are using river water, what does heating that water affect?
It's a good topic to talk about (and something I believe NANOG should focus on), but I'd love to see more nuts and bolts in the data from Google.
Google has released its PUE numbers:
<http://www.google.com/corporate/datacenters/measuring.html>
I am really skeptical of this, Patrick. PUEs between 1.2 and 1.3 -- sure. But below 1.2 on an annual basis? And it's not even their newest facility. Color me skeptical.

- Dan

On Oct 1, 2008, at 1:52 PM, Patrick W. Gilmore wrote:
[#include: boiler-plate apology for operational content]
Google has released its PUE numbers:
<http://www.google.com/corporate/datacenters/measuring.html>
There is a nice explanation of this, including a graph showing why DC efficiency is more important than machine efficiency (on the second page) at this link:
<http://www.datacenterknowledge.com/archives/2008/10/01/google-the-worlds-mos...
I think GOOG deserves a hearty "well done" for getting a whole DC under 1.2 PUE for a whole year. (Actually, I think they deserve a "HOLY @#$@, NO @*&@#-ing WAY!!!")
Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.
For their next miracle, I expect GOOG to capture the waste heat from all their servers and co-gen more electricity to pump back into the servers. :-)
-- TTFN, patrick
On Oct 2, 2008, at 3:15 PM, Daniel Golding wrote:
I am really skeptical of this, Patrick. PUEs between 1.2 and 1.3 -- sure. But below 1.2 on an annual basis? And it's not even their newest facility. Color me skeptical.
I said "NO @*&@#-ing WAY!!!". :) Just presenting info. If someone has more detailed information, or a rebuttal, I bet the collected audience would love to hear it. Personally, I am glad GOOG is posting their PUE. People who talk about additional metrics are correct - more information is better. But some information is better than none, and PUE is a perfectly valid data point. It doesn't measure everything, but that does not make it completely useless. Given Google's history of .. shall we say reticence regarding internal information, it's nice to see SOMETHING from them. So let's encourage it and see if they release more. -- TTFN, patrick
On Oct 1, 2008, at 1:52 PM, Patrick W. Gilmore wrote:
[#include: boiler-plate apology for operational content]
Google has released its PUE numbers:
<http://www.google.com/corporate/datacenters/measuring.html>
There is a nice explanation of this, including a graph showing why DC efficiency is more important than machine efficiency (on the second page) at this link:
<http://www.datacenterknowledge.com/archives/2008/10/01/google-the-worlds-mos...
I think GOOG deserves a hearty "well done" for getting a whole DC under 1.2 PUE for a whole year. (Actually, I think they deserve a "HOLY @#$@, NO @*&@#-ing WAY!!!")
Personally, I think only a self-owned DC could get that low. A general purpose DC would have too many inefficiencies since someone like Equinix must have randomly sized cages, routers and servers, custom-built suites, etc. By owning both sides, GOOG gets a boost. But it's still frickin' amazing, IMHO.
For their next miracle, I expect GOOG to capture the waste heat from all their servers and co-gen more electricity to pump back into the servers. :-)
-- TTFN, patrick
On Thu, 2 Oct 2008, Patrick W. Gilmore wrote:
Personally, I am glad GOOG is posting their PUE. People who talk about additional metrics are correct -- more information is better. But some information is better than none, and PUE is a perfectly valid data point. It doesn't measure everything, but that does not make it completely useless. Given Google's history of... shall we say, reticence regarding internal information, it's nice to see SOMETHING from them. So let's encourage it and see if they release more.
Also relevant is this paper: "Power provisioning for a warehouse-sized computer"

http://research.google.com/archive/power_provisioning.pdf

Tony.
--
f.anthony.n.finch <dot@dotat.at> http://dotat.at/
GERMAN BIGHT: WEST 5 OR 6 VEERING NORTHWEST 6 TO GALE 8, BACKING SOUTHWEST 5 OR 6 LATER. ROUGH OR VERY ROUGH. SQUALLY SHOWERS. MODERATE OR GOOD.
participants (15)

- Alex Rubenstein
- Andy Grosser
- Brian Raaen
- Daniel Golding
- David Andersen
- David Birnbaum
- Deepak Jain
- Jean-François Mezei
- Jeff Shultz
- Jon Lewis
- Marshall Eubanks
- Martin Hannigan
- Patrick W. Gilmore
- Tony Finch
- Tuc at T-B-O-H.NET