Amir Michael grabbed everyone's attention at last year's Open Compute Project (OCP) Summit when he said, "Something not often talked about in the industry, there is a trend of people moving from the cloud."
Michael's statement surprised many people, particularly cloud-service pundits. Still, people paid attention because of his extensive background in data-center infrastructure and the data being captured by Coolan, his startup's TCO (Total Cost of Ownership) analytics platform.
Another attention grabber
It will be interesting to learn what people think today, as Michael and data scientists at Coolan are at it again:
"When it comes to power, companies are paying for the infrastructure they need, but they're not using it to its full potential. As a result, they're losing tens of thousands of dollars in the process."
It is not a stretch for Michael and the engineers at Coolan to know whether this is true, as one area Coolan's analytics focuses on is server-performance metadata. In the company blog post What's in a Name (Plate Value)?, Michael and Dr. Elena Novakovskaia, Coolan's chief data scientist, explain how they reached the above conclusion:
"Using data from a client's deployment as a proxy, we studied the maximum load of power supplies, and their normal and critical output. We then measured actual power consumption and found a huge discrepancy between the load and output at both levels."
Put simply, any discrepancy is money lost.
What is power provisioning?
At first glance, figuring out power provisioning seems relatively simple: just follow the instructions provided by the power supply's manufacturer; however, that is not the case. In the Google white paper Power Provisioning for a Warehouse-sized Computer (PDF), researchers Xiaobo Fan, Wolf-Dietrich Weber, and Luiz André Barroso describe the difficult balancing act data-center managers perform:
"The incentive to fully utilize the power budget of a data center is offset by the business risk of exceeding its maximum capacity, which could result in outages or costly violations of service agreements."
According to Fan, Weber, and Barroso, to obtain optimal performance for the least cost, data-center operators must understand the following power usage characteristics exhibited over time:
- The rated maximum power (or nameplate value) of computing equipment
- Actual consumed power of servers
- Power consumed by differing workloads
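The gap between those characteristics is easy to see numerically. The sketch below uses hypothetical wattage figures (the nameplate, average, and peak values are illustrative assumptions, not numbers from the Google paper or Coolan's dataset):

```python
# Hypothetical server power figures (illustrative only)
nameplate_w = 450.0   # rated maximum power from the PSU label
avg_draw_w = 180.0    # measured average consumption
peak_draw_w = 260.0   # measured peak under a heavy workload

# Headroom: capacity provisioned but never used, even at peak load
headroom_w = nameplate_w - peak_draw_w
avg_utilization = avg_draw_w / nameplate_w

print(f"Unused headroom at peak: {headroom_w:.0f} W")
print(f"Average utilization of nameplate: {avg_utilization:.0%}")
```

Even in this toy example, provisioning to the nameplate value leaves 190 W per server stranded at peak, which is exactly the kind of discrepancy Coolan's measurements surfaced.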
With the power usage characteristics in mind, let's look at how Michael and Novakovskaia came to their conclusions.
Findings from Coolan analysis
The first step is to determine what Michael calls the sweet spot for operational efficiency. The sweet spot, according to the engineers at Coolan, ranges from 40% to 80% of rated load on the power-efficiency curve. The graph in Figure A depicts the efficiency curve of a second-generation Open Compute power supply.
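That 40%–80% range can be expressed as a simple load check. The thresholds come from Coolan's figures; the function name and the sample wattages below are illustrative assumptions:

```python
# Sweet-spot bounds as a fraction of rated PSU load, per Coolan
SWEET_SPOT_LOW, SWEET_SPOT_HIGH = 0.40, 0.80

def classify_load(draw_w: float, rated_w: float) -> str:
    """Place a measured draw relative to the 40%-80% efficiency sweet spot."""
    load = draw_w / rated_w
    if load < SWEET_SPOT_LOW:
        return "below sweet spot (lower efficiency, wasted energy)"
    if load > SWEET_SPOT_HIGH:
        return "above sweet spot (little safety headroom)"
    return "in sweet spot"

# Example: a 450 W supply drawing 150 W sits at ~33% load
print(classify_load(150, 450))
```

A fleet-wide version of this check is essentially what Figure B visualizes: most systems land in the first branch.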
The graph in Figure B compares average power consumption (gray) and peak power consumption (purple) with nameplate power. "As you can see, a majority of systems are operating below the sweet spot," write Michael and Novakovskaia. "Almost none of the systems are operating at the top of the sweet spot range, where they would be more efficient."
As to what that means, Michael and Novakovskaia explain, "Operating at power levels below the sweet spot results in wasted energy, not to mention additional heat generated by the lower efficiency which increases cooling costs."
The Coolan blog offers the following additional ways to save on data-center power and infrastructure costs:
- Right-sized power supplies are cheaper than larger power supplies, and they also lower a company's CapEx
- Planning for actual loads means you can amortize data center power and cooling infrastructure across more servers
- Running data-center infrastructure at higher loads increases efficiency
Using the example of a client with a fleet of 1,600 servers and a utility rate of $0.10 per kWh, Michael and Novakovskaia did some quick calculations and discovered the client could save over 300,000 kilowatt-hours per year, which translates to more than $33,000, if the above suggestions were put in place.
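The arithmetic is easy to check. The fleet size and utility rate are from the article; the exact 330,000 kWh figure is an inference (the article says "over 300,000 kWh" and "over $33,000", which is consistent with roughly 330,000 kWh at $0.10/kWh):

```python
fleet_size = 1_600            # servers in the client's fleet (from the article)
rate_per_kwh = 0.10           # utility rate in dollars (from the article)
annual_savings_kwh = 330_000  # assumed; consistent with the article's totals

annual_savings_usd = annual_savings_kwh * rate_per_kwh
savings_per_server = annual_savings_usd / fleet_size

print(f"Annual savings: ${annual_savings_usd:,.0f}")
print(f"Per server:     ${savings_per_server:,.2f}/year")
```

Spread across the fleet, that works out to roughly $20 per server per year, which compounds quickly at warehouse scale.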
With that kind of money involved, engineers at Coolan are advising their clients to provision based on the infrastructure's actual load instead of the power supply's nameplate power value. The blog post by Michael and Novakovskaia concludes with the following advice:
"Collecting actual power consumption data for a variety of systems and applications over a meaningful period of time and representing different regimes of operations is important for cost-effective power provisioning at a data center. Such data sets can help, for example, with capacity planning, scheduling upgrades, and choosing better server density and power supply models for a given application type."