Greening the data center
November 27, 2007, 9:12am PST | Length: 00:05:23
John O'Brien, CTO of Dataupia, explains how carbon footprints are calculated in the data center and discusses ways to tame these power-hungry machines.
Hello. My name is John O'Brien, and I'm the CTO of Dataupia, and today I'm here to talk to you about "Greening the Data Center."
Greening has definitely gone from a nice-to-have, kind of trendy thing to something that's really business critical today. Some of the factors that have led to this are: our data centers are actually getting full; and our colo limits, meaning that even though we may outsource our data center to a colocation facility, we actually run into limits on the amount of power we're able to utilize there.
Then, the third factor is that commodity pricing within the data center has lowered acquisition costs to the point that the lifetime operating cost of a piece of equipment is now actually higher than its acquisition cost. These three trends have turned greening from a nice-to-have into part of our everyday life.
Let's start off by taking a look at a new industry standard for the greening equation. This is being referred to as the "compute density equation," which has got three factors. It's got our P on one axis, which is our power being consumed by the equipment; our C, being the cooling that is required for that piece of equipment to be operational; and then S, which is actually going to be the amount of space that is being utilized by that piece of equipment in our data center.
Together, these three factors represent what some people within the data center are coming to call the "power footprint," which takes all of these factors into account for the very first time.
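The transcript doesn't write the compute density equation out formally, so as a minimal sketch, assuming the power footprint simply combines total wattage (power plus cooling) with the space used, it might look like this (the function name and the choice to sum the watts are my own illustration):

```python
# Hedged sketch of the "compute density" / power-footprint idea:
#   P = power drawn by the equipment (watts)
#   C = cooling required to keep it operational (watts)
#   S = data-center space it occupies (cubic feet)
# The talk doesn't give an exact combining formula, so here we simply
# track total wattage (P + C) alongside the space S.

def power_footprint(power_w: float, cooling_w: float, space_cuft: float):
    """Return (total watts, space in cubic feet) for a piece of equipment."""
    return power_w + cooling_w, space_cuft

# The traditional two-terabyte SAN example from later in the talk:
watts, space = power_footprint(power_w=24_900, cooling_w=25_500, space_cuft=2.0)
print(watts, space)  # 50400 2.0
```
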
Now, let's take a look at some of the trends in the industry that are actually leveraging this compute density footprint. The first that we're most familiar with are blades, and blades are all about increasing the amount of density within our data center. The second trend that we've probably all heard about is virtualization, and what virtualization is trying to do is actually increase, now, the utilization within the data center.
Now, both of these are all about coming back to our equation and reducing the space required by increasing density or the utilization of our equipment.
The third trend that we want to take a look at today is something that we're calling the "data warehouse appliance trend." As an example, let's take a look at a data warehouse solution that most of our companies have today. In our example, we're going to look at a two-terabyte data warehouse solution. When we take a look at the system, we have a database component, we have our Fibre Channel that connects through a SAN switch, and then we've got our SAN component.
When we take a look at our compute density equation, what we're going to see is that this has a high power rating of 24,900 watts; it has 25,500 watts required just in cooling alone; and then the space requirements for two terabytes of storage within a SAN is typically around two cubic feet. So, with this architecture, which is what we traditionally have today, we can see that we have some pretty high power ratings and some fixed amount of space.
Now, with the data warehouse appliance trend, we still have our database connectivity, but what we're using now is a TCP/IP network connection, and we're using our standardized network switches.
With that, we have our data warehouse appliances here, which then leverage this network infrastructure to talk to the database. The power rating for this guy is only 300 watts per two terabytes. The cooling required is very similar at another 300 watts, and the footprint required for that, in space, is 1.3 cubic feet.
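Putting the example's numbers for the two architectures side by side, with power and cooling summed per two terabytes, gives a sense of the gap (a hedged sketch; the dictionaries and variable names are my own):

```python
# Comparing the two architectures from the example, per two terabytes of storage.
san = {"power_w": 24_900, "cooling_w": 25_500, "space_cuft": 2.0}
appliance = {"power_w": 300, "cooling_w": 300, "space_cuft": 1.3}

san_total = san["power_w"] + san["cooling_w"]              # 50,400 W
app_total = appliance["power_w"] + appliance["cooling_w"]  # 600 W

print(f"SAN architecture draws {san_total / app_total:.0f}x the wattage")
# SAN architecture draws 84x the wattage
```
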
So, if we want to understand why we're so much greener with data warehouse appliances, let's take a look at the entire system and where the two biggest differences come in. The difference in the architecture really comes down to the fact that the SAN switch is a very high-power-consumption device compared to a standard network switch. So this section alone is costing us a lot in power and cooling for the same two terabytes of storage.
The reason we have this type of architecture is that within a SAN, data blocks have to be shipped back and forth to the database server over very wide, high-bandwidth Fibre Channel links and high-end backbone SAN switches.
Whereas with the data warehouse appliance architecture, we're talking about an MPP architecture in which the computing is actually done down at the storage level, and the only thing returned to the database server is the result set. That's much more efficient on the infrastructure: it needs lower bandwidth and smaller switches. So the overall system cost, from a compute density perspective, is much lower with data warehouse appliances than with our traditional SAN storage.
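As a rough, purely illustrative sketch of why pushing the computation down to the storage level moves less data, consider filtering a table at the storage node versus shipping every row up to the database server (the table and predicate here are hypothetical):

```python
# Hypothetical million-row table living at the storage layer.
rows = [("sku-%d" % i, i % 100) for i in range(1_000_000)]

# Block-shipping (SAN-style): every row crosses the interconnect,
# and the database server does the filtering centrally.
shipped_block_style = len(rows)

# Appliance / MPP-style: the predicate runs at the storage node,
# so only the matching result set crosses the interconnect.
result_set = [r for r in rows if r[1] == 7]
shipped_mpp_style = len(result_set)

print(shipped_block_style, shipped_mpp_style)  # 1000000 10000
```

The bandwidth saved scales with the selectivity of the query, which is why the appliance side of the example gets by with ordinary network switches.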
As you've seen from the example, the compute density equation now provides you a new way to take a look at "Greening the Data Center."