Greenpeace International does more than patrol the oceans — the organization also keeps a close watch on data centers, making sure operators use the Earth's resources as efficiently as possible. In its April 2014 report Clicking Clean: How Companies are Creating the Green Internet (PDF), Greenpeace rated Apple, Google, and Facebook as much improved. In that same report, Greenpeace praised Facebook for its transparency about energy use and for its Open Compute Project.
Open Compute Project
In 2009, three engineers at Facebook took on a rather daunting challenge: design a data center from the ground up with an emphasis on being as efficient and economical as possible. Two years later, Facebook broke ground on one of the most efficient computing facilities in existence — its data center in Prineville, OR. The Prineville campus can accomplish the same amount of work as Facebook's other data centers while using 38% less energy — a significant savings.
In this blog post, Jonathan Heiliger, vice president of Facebook Technical Operations at the time, described some of the ideas the team put in place to achieve that kind of improvement:
- Use a 480-volt electrical distribution system to reduce energy loss
- Remove anything in the servers that doesn't contribute to efficiency
- Reuse hot-aisle air in winter to heat the offices and the outside air flowing into the data center
- Eliminate the need for a central uninterruptible power supply
In the same post, Heiliger wrote that the team of engineers felt it was important to share their innovations publicly, much as open-source software is shared. To that end, Facebook created the Open Compute Project: "An industry-wide initiative to share specifications and best practices for creating energy efficient data centers."
Open Compute Project Server
One of the first places engineers looked to optimize was server hardware and the equipment supporting servers. Facebook engineers in the article Inside the Open Compute Project Server provided insight on how they redesigned server hardware from the chassis on up.
Chassis: Facebook removed all unnecessary hardware, even mounting screws, to lower cost and weight. Something most people do not think about is how much a server weighs. Streamlining the server's insides removed six pounds, saving on shipping costs and data center technicians' backs.
Motherboard: The motherboard includes a direct interface for the power supply, no extraneous expansion slots, and is designed to reboot over the LAN. Facebook has two motherboard types: one for Intel processors, and one for AMD processors.
Power supply: According to Facebook, the power supply was the biggest challenge, yet, if redesigned correctly, it would yield the largest savings. Facebook's new power supply has an efficiency rating of 95%.
The Open Compute power supply has two connectors: one for 48 VDC (UPS) and one for 277 VAC (primary). The higher primary voltage of 277 VAC reduces transmission losses, which grow with the square of the current ("I squared R").
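The benefit of the higher primary voltage follows directly from the I²R relationship: for the same power draw, a higher voltage means a lower current, and resistive loss falls with the square of that current. A quick sketch in Python, where the voltages come from the article but the 10 kW load and feeder resistance are assumed purely for illustration:

```python
# Illustrative comparison of resistive (I^2 * R) distribution losses
# at two supply voltages. The 277 VAC figure is from the article; the
# 208 VAC comparison voltage, the load, and the wire resistance are
# assumed values for illustration only.

def i2r_loss_watts(power_w, voltage_v, resistance_ohm):
    """Resistive loss for a load drawing power_w at voltage_v
    through a conductor of resistance_ohm."""
    current = power_w / voltage_v          # I = P / V
    return current ** 2 * resistance_ohm   # loss = I^2 * R

LOAD_W = 10_000    # assumed 10 kW load
WIRE_OHM = 0.05    # assumed feeder resistance

loss_208 = i2r_loss_watts(LOAD_W, 208, WIRE_OHM)  # common lower voltage
loss_277 = i2r_loss_watts(LOAD_W, 277, WIRE_OHM)  # Open Compute primary

print(f"Loss at 208 VAC: {loss_208:.1f} W")
print(f"Loss at 277 VAC: {loss_277:.1f} W")
print(f"Reduction: {100 * (1 - loss_277 / loss_208):.0f}%")
```

Because loss scales with 1/V², stepping up from 208 VAC to 277 VAC cuts resistive loss by a factor of (208/277)², roughly 44%, regardless of the assumed load and resistance.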
Facebook engineers also took a hard look at rack design and how to provide power efficiently when the primary AC source was interrupted.
Instead of standard individual racks, Facebook uses what it calls triplet racks, which combine three rack cabinets into one. Each triplet rack holds 90 servers, a number that matches Facebook's network device, which has 90 connections.
In a move that will garner smiles from anyone who has installed equipment in racks, Facebook has done away with the rail system in favor of trays: just slide the server in and engage a spring-loaded plunger to hold it in place.
Battery cabinet (UPS)
Facebook's battery backup systems for servers are required to last only 45 seconds; Facebook engineers say the building's generators will be online by then. Each battery cabinet serves two triplet racks and includes an AC/DC rectifier to charge the batteries, plus an automated impedance-testing system that monitors each battery, alerting technicians when one needs to be replaced. The Facebook article about the Open Compute Project server notes: "This alternative battery system is 99.5 percent efficient (we use some energy to charge batteries) when compared to the 90-95 percent efficiency associated with industry standard UPS systems."
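A few percentage points of conversion efficiency may sound trivial, but at data center scale they add up. Using only the efficiency figures quoted above (99.5% vs. roughly 90-95%), the sketch below estimates the annual energy overhead; the 1 MW IT load and the one-year horizon are assumed for illustration, not taken from the article:

```python
# Energy overhead of power conditioning at the efficiencies quoted in
# the article: 99.5% for the Open Compute battery system vs. a 92.5%
# midpoint of the 90-95% range cited for conventional UPS systems.
# The 1 MW IT load is an assumed figure for illustration.

HOURS_PER_YEAR = 24 * 365
IT_LOAD_KW = 1_000  # assumed 1 MW of IT load

def input_energy_kwh(load_kw, efficiency):
    """Grid energy needed to deliver load_kw through a conversion
    stage of the given efficiency, over one year."""
    return load_kw * HOURS_PER_YEAR / efficiency

delivered = IT_LOAD_KW * HOURS_PER_YEAR
ocp_waste = input_energy_kwh(IT_LOAD_KW, 0.995) - delivered
ups_waste = input_energy_kwh(IT_LOAD_KW, 0.925) - delivered

print(f"Annual kWh lost, OCP battery system: {ocp_waste:,.0f}")
print(f"Annual kWh lost, typical UPS:        {ups_waste:,.0f}")
```

Under these assumptions, the conventional UPS wastes several hundred megawatt-hours more per year than the 99.5%-efficient battery cabinet on the same load.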
Facebook understands that electricity used for anything other than computing affects the data center's Power Usage Effectiveness (PUE), and cooling is a major part of that "other than computing" category. To improve server cooling, Facebook standardized on a 1.5U (2.63-inch) server chassis height. This allowed larger heat sinks and larger fans to fit in the chassis, increasing cooling efficiency and reducing the electricity needed to spin the fans. Larger fans are also significantly quieter, something anyone who spends time in data centers will appreciate.
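PUE itself is a simple ratio: total energy entering the facility divided by the energy that actually reaches the IT equipment, with 1.0 as the theoretical ideal. A minimal sketch, using sample figures assumed for illustration:

```python
# Power Usage Effectiveness (PUE), as defined by The Green Grid:
# total facility energy divided by IT equipment energy. A PUE of 1.0
# would mean every watt entering the building powers computing.

def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Assumed sample: 1,200 kWh enter the facility; 1,000 kWh reach the
# IT gear, so cooling, lighting, and conversion losses add 20%.
print(f"PUE: {pue(1200, 1000):.2f}")  # prints "PUE: 1.20"
```

Shaving the cooling and power-conversion overhead, as the chassis and battery-cabinet changes above do, drives the numerator down toward the denominator and the PUE toward 1.0.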
Moving in the right direction
Experts talk of data centers requiring more electricity than an average-sized town. No doubt that is why Greenpeace monitors data centers, applying pressure when needed to those that are not on board with conserving resources.
Are you taking steps to make your data center more energy efficient? If so, what have you done so far? Let us know in the discussion.