Google has developed a way to coordinate the power required by a server's processor with the workload issued to the server.
One of the power-conservation challenges facing data-center management is balancing workload with the number of servers required to do the work. Most data-center managers err on the conservative side, keeping more servers online than needed, just in case. That caution wastes power.
This past summer, Facebook created Autoscale, a load-balancing technology designed to reduce server inefficiency. Autoscale, an Open Compute technology, already controls production web clusters, netting Facebook power savings of about 15%.
Google followed a different approach. Teaming up with researchers from Stanford University, Google fellows looked for ways to reduce the energy footprint of Google's Warehouse-Scale Computer (WSC) systems. The paper announcing their findings, Towards Energy Proportionality for Large-Scale Latency-Critical Workloads, begins by describing their challenge: "The lack of energy proportionality of typical WSC hardware and the fact that important workloads (such as search) require all servers to remain up regardless of traffic intensity renders existing power-management techniques ineffective at reducing WSC energy use."
The researchers also conclude:
- WSC systems are most efficient when fully utilized (Facebook's conclusion as well).
- When running continuous batch workloads, WSC systems average 75% efficiency.
- If workloads are mixed, the efficiency of WSC systems varies from 10% to 50%.
That gave the researchers a goal: "Improve (WSC systems) energy efficiency at low and moderate usage."
After examining several possibilities that fell short of that goal, the team devised an approach based on iso-latency: a software policy that manipulates Running Average Power Limit (RAPL), a power-management feature built into Intel processors.
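RAPL itself is directly accessible to software; on Linux, for example, package-level power caps are exposed through the powercap sysfs tree as values in microwatts. The following is a minimal sketch of setting such a cap; the helper name is illustrative and the default path assumes the standard Linux powercap layout for package 0.

```python
from pathlib import Path

# Usual Linux powercap sysfs location for the package-0 RAPL power limit
# (value is in microwatts). Pass a different path for testing.
RAPL_LIMIT = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

def set_package_power_cap(watts: float, limit_file: str = RAPL_LIMIT) -> int:
    """Write a package-level power cap in watts; returns the microwatt value written."""
    microwatts = int(watts * 1_000_000)
    Path(limit_file).write_text(str(microwatts))
    return microwatts
```

Writing this file requires root privileges on a real system; a controller such as the one the paper describes would call something like this helper every control interval.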
Their solution: PEGASUS
Power and Energy Gains Automatically Saved from Underutilized Systems (PEGASUS) — the iso-latency policy and RAPL combination — allows Google to match processor power consumption to what a given workload requires, quickly enough to eliminate the need to overpower the processors. The research paper defines PEGASUS as a "dynamic, feedback-based controller that enforces the iso-latency policy."
To help explain how PEGASUS works, the research paper offers the following analogy: "The baseline can be compared to driving a car with sudden stops and starts. Iso-latency would then be driving the car at a slower speed to avoid accelerating hard and braking hard. The second way of operating a car is much more fuel efficient than the first, which is akin to the results we have observed."
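In code terms, one iteration of such a feedback controller might look like the sketch below: if the workload is comfortably meeting its latency target, shave the power cap; if latency approaches or exceeds the target, raise it. The thresholds, step sizes, and bounds here are assumptions for illustration, not the paper's actual parameters.

```python
def next_power_cap(current_cap_w, measured_latency_ms, target_latency_ms,
                   min_cap_w=40.0, max_cap_w=130.0, step_w=2.0):
    """One iteration of an iso-latency controller (illustrative sketch).

    All thresholds and step sizes are assumed values, not those of PEGASUS.
    """
    if measured_latency_ms > target_latency_ms:
        # Latency target violated: restore full headroom immediately.
        return max_cap_w
    if measured_latency_ms > 0.85 * target_latency_ms:
        # Close to the target: ease the cap up slightly.
        return min(current_cap_w + step_w, max_cap_w)
    # Comfortable slack: tighten the cap to save power.
    return max(current_cap_w - step_w, min_cap_w)
```

The asymmetry (small downward steps, a large upward jump on a violation) mirrors the car analogy: gentle adjustments most of the time, with a hard correction only when the latency target is at risk.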
PEGASUS also departs from conventional thinking, which holds that conserving energy means idling or turning off servers. The researchers consider this the wrong, inefficient approach, explaining, "Even if spare memory storage is available, moving tens of gigabytes of state in and out of servers is expensive and time consuming, making it difficult to react to fast or small changes in load."
Multiple tests were run using the most challenging workload the researchers could think of: search. The first tests used small clusters (tens of servers). The results were encouraging. PEGASUS garnered a 30% savings over baseline.
The next series of tests involved a full-scale production cluster (thousands of servers) used for Google search. The power savings varied from 10% to 20% (Figure B).
The researchers quickly determined why savings dropped on the production cluster: forcing every server in the cluster to use the same power limit created a bottleneck. The amount of work needed to complete search requests is not uniform, so some servers finish before others yet keep running at the higher power setting.
That led the team to what they call Distributed PEGASUS: instead of one PEGASUS instance controlling all nodes, each server runs its own PEGASUS controller. The team estimates power savings of 35% (Figure C). It remains an estimate because the researchers were unable to evaluate Distributed PEGASUS before the paper's publication deadline.
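A rough sketch of the per-server idea, with all names and step values assumed for illustration: each server feeds its own measured latency into its own controller, so lightly loaded servers can settle at lower power caps while stragglers keep the headroom they need.

```python
def adjust_cap(cap_w, latency_ms, target_ms, lo=40.0, hi=130.0, step=2.0):
    """Per-server iso-latency step (illustrative; thresholds are assumed)."""
    if latency_ms > target_ms:
        return hi  # target violated: restore full headroom
    if latency_ms < 0.85 * target_ms:
        return max(cap_w - step, lo)  # slack: tighten the cap
    return min(cap_w + step, hi)  # near target: ease the cap up

def distributed_step(caps, latencies, target_ms):
    """Run one control iteration independently on every server in the cluster."""
    return [adjust_cap(c, l, target_ms) for c, l in zip(caps, latencies)]
```

With a single cluster-wide limit, the straggler's latency would force every server up to the high cap; here only the server that needs it pays for it.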
In a rather stoic manner, the team offers this conclusion: "Overall, iso-latency provides a significant step forward towards the goal of energy proportionality for one of the challenging classes of large-scale, low-latency workloads."
It is a complicated process, and other experts I shared this post with mentioned it will take an organization like Google to make PEGASUS work. However you look at it, power savings of 35% per server are significant considering the number of servers Google owns.
Note: All slides and graphs are courtesy of the paper's authors, Google, and Stanford University.