The most direct ways to reduce electrical use in your data center are straightforward: Virtualize multiple servers and storage into fewer racks, invest in newer cooling systems, or ask yourself if all that expensive cooling is even necessary.
But what are some non-obvious things to do, and how do you measure the results?
"It's a multidimensional topic," said Mehdi Paryavi, chairman of International Data Center Authority, which is a training company in Rockville, MD. "One of the reasons that most people don't really build efficient data centers is because they don't really look at it from all the possible angles."
"A lot of times data centers are over-built and over-designed versus your real requirements," Paryavi noted.
SEE: Comparison chart: Virtualization platforms (Tech Pro Research)
For example, even without virtualization, applications or storage sometimes sit on dedicated servers, or in entire dedicated racks, where that isolation isn't needed. Sometimes that happens out of habit—everyone thinks their IT requirements are the most important ones—and other times it stems from business decisions made without consulting the IT staff.
"People do tend to go overboard simply because they don't have a clear vision of the future. It's not the fault of the engineer, it's the fault of the business," Paryavi said.
It's not just computer hardware—people also misuse power systems, backup generators, and air cooling units, he added.
"Consolidation is a huge task that needs to be done. We see it all the time—people still go with 30-year-old cooling designs," he said. Sometimes the simplest advice is best: You should put the UPS as close as possible to the hardware that it will power.
"One reason people can fool themselves or fool the industry is because of lack of education... There is a lot of marketing stuff out there that really doesn't make sense to the actual technical world," Paryavi observed.
If the electric bill goes down and user morale remains high, then you're probably doing something right. One way to quantify it is with the international standard known as power usage effectiveness—that's PUE in industry-talk and ISO/IEC 30134-2:2016 if three-letter acronyms bore you. PUE is the ratio of a facility's total energy consumption to the energy delivered to its IT equipment. The closer the PUE is to 1, the better. Most modern data centers have a PUE of about 1.2-1.4, he explained. "Anything below 1.5 is efficient," Paryavi said.
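As a rough sketch of the arithmetic behind PUE (the function name and the example figures are illustrative, not from the standard):

```python
def compute_pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by
    the energy delivered to IT equipment. A value of 1.0 would mean
    every watt entering the building reaches the IT gear."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: the facility draws 1,300 kWh in total while the
# IT equipment itself consumes 1,000 kWh; cooling, lighting, and power
# distribution account for the other 300 kWh.
pue = compute_pue(1300.0, 1000.0)
print(f"PUE = {pue:.2f}")  # PUE = 1.30
```

A PUE of 1.30 would sit inside the 1.2-1.4 range Paryavi describes for most modern data centers, and under his 1.5 threshold for efficiency.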
SEE: Green tech initiatives: Best practices and breakthroughs (free PDF) (TechRepublic)
PUE began with two Hewlett-Packard engineers in 2005 and gradually gained traction. Now it's largely influenced by a nonprofit organization called The Green Grid, of which many top hardware companies are members. An amendment to the standard is shaping up that would expand PUE measurements to include related hardware that isn't directly inside the data center, such as power units in adjacent rooms, officials from the PUE standards committee explained.
The Green Grid offers other data center energy-saving guidance beyond PUE, mostly in the form of white papers. These cover topics such as the second edition of the Standard Performance Evaluation Corp.'s Server Efficiency Rating Tool, which is currently awaiting its updated Energy Star endorsement.
- Photos: The 20 greenest data centers in the world (TechRepublic)
- Data centers focus on PUE in their quest to use electricity efficiently (TechRepublic)
- Apple cleared to build $1B green energy data center in Ireland (TechRepublic)
- Custom tech gets a data center closer to the PUE Holy Grail (TechRepublic)
- Singapore forms industry partnership to explore vertical green data centres (ZDNet)
Evan became a technology reporter during the dot-com boom of the late 1990s. He published a book, "Abacus to smartphone: The evolution of mobile and portable computers" in 2015 and is executive director of Vintage Computer Federation, a 501(c)3 non-profit organization. His vices include running and Springsteen.