Data centers consume two to five percent of all the electricity generated in the US in any given year. Using 2013 as an example, the Natural Resources Defense Council estimates US data centers used 91 billion kWh of power.
One area that’s been a struggle for operators is balancing the number of servers needed to meet peak demand versus the cost of keeping servers powered up and standing by in case of a computing surge. Depending on the source cited, the electricity wasted powering servers that do little or no work accounts for 50 to 70% of the total electric bill.
This especially affects smaller commercial data centers that do not have the luxury of fine-tuning their networks. Operators have to deal with a variety of equipment, guarantees of 100% uptime, and irregular load demands. Operators shrug their shoulders and pay the power bill.
The plot thickens
In a past life, I was a contract network engineer. One of the easier assignments was decommissioning client equipment in commercial data centers. Out of curiosity, I would ask the manager how long the device had been out of service. The answer always seemed to be at least a year — yet the device was still powered up. The typical response was, “It’s not our equipment.”
There is now a fitting name for that type of equipment. Jon Taylor, partner at Anthesis Group, a global sustainability consultancy, and Jonathan Koomey, research fellow at Stanford University, just released this report on what they call “comatose servers.” The paper’s authors state, “According to McKinsey and Company, utilization of servers in business and enterprise data centers rarely exceeds six percent (i.e., they deliver no more than six percent of their maximum computing output on average over the course of the year) and up to 30 percent of the servers are comatose — using electricity but delivering no useful information services.”
The Uptime Institute, in the paper Comatose Server Savings Calculator, agrees. “It is estimated that up to 30 percent of the country’s 12 million servers are actually comatose — abandoned by application owners and users but still racked and running, wasting energy and placing ongoing demands on data center facility power and capacity.”
It seems there are now two power-wasting challenges: determining server-pool size to accommodate peak demand and decommissioning servers that are no longer needed.
Large private data-center operations such as Apple, Facebook, Amazon, and Google are working hard to use servers more efficiently. Facebook, for example, developed Autoscale: technology that sizes the active server pool to current conditions and ensures each active server is loaded to its optimal level. For its effort, Facebook has realized power savings of roughly 15%.
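The idea behind pool sizing can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual Autoscale implementation; the function name, the request-rate inputs, and the per-server optimal load figure are all assumptions made for the example.

```python
# Hypothetical sketch of Autoscale-style pool sizing -- not Facebook's
# actual implementation. Assumed inputs: the current request rate and
# the per-server load at which a machine runs most efficiently.

import math

def active_pool_size(requests_per_sec: float,
                     optimal_rps_per_server: float,
                     total_servers: int,
                     min_active: int = 1) -> int:
    """Return how many servers should stay active for the current load.

    Servers beyond this count can sit in a low-power idle state until
    demand rises again, instead of burning energy at low utilization.
    """
    needed = math.ceil(requests_per_sec / optimal_rps_per_server)
    return max(min_active, min(total_servers, needed))

# Example: 4,200 req/s arriving, each server most efficient near 500 req/s.
print(active_pool_size(4200, 500, total_servers=20))  # 9 active servers
```

The design point is that concentrating load onto fewer, well-utilized servers beats spreading it thinly across the whole fleet, because a mostly idle server still draws a large fraction of its peak power.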
Academics are looking at this as well — processor serial communications, for instance. The serial links that move data between microprocessors and other electronic devices sit idle 50 to 70% of the time. In an April 2015 TechRepublic article, I wrote about a solution that could save up to 7% of a data center's power budget by optimizing processor serial connections.
Finding comatose servers
For their research, Taylor and Koomey used data from TSO Logic, a software developer that specializes in capturing and presenting, in a user-friendly manner, information about application performance, capacity, and energy use in a data center.
The dashboard in Figure A illustrates how the TSO Logic platform queries servers for operational data: incoming workload, and how that workload correlates with server utilization, performance, and power levels. Using this software, the researchers at TSO Logic are able to identify servers with no workloads or incoming traffic.
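The detection logic can be pictured with a minimal sketch. TSO Logic's platform and data model are proprietary, so the telemetry fields, server names, and the 5% CPU threshold below are assumptions chosen for illustration, not details from the report.

```python
# Illustrative sketch only -- TSO Logic's actual platform is proprietary;
# the field names, threshold, and sample data here are assumptions.

def find_comatose(samples, cpu_threshold=0.05):
    """Flag servers whose CPU utilization never exceeded the threshold
    and that received no incoming requests over the observation window.

    `samples` maps a server name to a list of (cpu_fraction, request_count)
    readings collected over that window (e.g., 30 days of polling).
    """
    comatose = []
    for server, readings in samples.items():
        if all(cpu <= cpu_threshold and requests == 0
               for cpu, requests in readings):
            comatose.append(server)
    return comatose

telemetry = {
    "app-01": [(0.42, 1200), (0.38, 950)],   # busy server
    "db-07":  [(0.02, 0), (0.01, 0)],        # powered up, doing nothing
}
print(find_comatose(telemetry))  # ['db-07']
```

A server flagged this way is only a decommissioning candidate; as the researchers note, ownership and application dependencies still have to be confirmed before anything is unplugged.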
The big picture
Besides the monetary benefit, more and more potential customers want assurances that a data center's operation is as green as possible. The influence that Greenpeace and its Clicking Clean data-center report card command is evidence of that.