
Practical power-saving tips for IT pros

Rick Vanover offers eight practical power-saving tips for datacenters that are on the brink of their power capacity.

Most of us deal with some form of datacenter or office computer room that is at the brink of its power capacity. I can't tell you how many times I've run into a room that can't accommodate another server or storage device for lack of available power, let alone enough cooling capacity. Over the years, I have collected a few tips to save power. Here they are:

1. Virtualize: There is no more effective power-reduction strategy for the datacenter than server virtualization. While the hosts (VMware ESXi or Microsoft Hyper-V) may be larger and consume more power per server unit than traditional physical servers, high consolidation ratios lower the average power consumption per workload (see the consolidation math sketched after this list).

2. Consider Group Policy Objects for PCs: In modern versions of Active Directory, Group Policy can set the power plan by policy. The setting is located in Computer Configuration | Preferences | Control Panel Settings | Power Options. There, power plans for Windows XP and later systems can be set for computer accounts and delivered without risk of user tampering (a scripted equivalent is sketched after this list). Be sure to see Katherine Murray's tips on power-saving strategies for PCs.

3. Ditch the KVM and monitor in the datacenter: I've thought for a while that we are past the KVM (keyboard, video, mouse) switch and monitor, even when shared across a large number of systems in the datacenter. I'm much more in favor of leveraging hardware controllers such as the HP iLO or Dell DRAC. For systems without those controllers, consider building a "crash cart" with a small LCD screen, keyboard, mouse, tools, and other miscellaneous handy things. As a side note, if you are considering a new server purchase and are on the fence about the extra cost of the iLO or DRAC, I recommend you get it and take the time to get familiar with these tools if you are not already.

4. Idle any excess capacity: Network switches are frequently over-provisioned in ports for the datacenter as a whole. Since virtualization ideally reduces the overall port requirements, it may be worth a re-cabling party to consolidate the remaining connections onto active switches and turn off (but not necessarily decommission) any switches with no used ports.

5. Consolidate UPS battery units: Bottom-of-rack UPS units are hard to manage, especially if all of the batteries in the facility are on separate replacement cycles. During the next procurement cycle or battery replacement initiative, it may be time to put in smaller units that reflect actual consumption, rather than keeping a larger battery charged and consuming facility power for a rack that will never be more than 30 percent full of servers.

6. Consider blade servers: If a large batch of servers is up for replacement, would blade servers do the trick? They may require a special power feed (three-phase or a 30-amp interface), but power consumption per server may be lower than a typical one-for-one replacement. Another option is deploying mini-blade servers, which can save space and possibly reduce power.

7. Remove any unused PDUs: Like UPS units and KVMs, PDUs (power distribution units) consume facility power even when no servers or computing devices are connected to them. Again, consolidating these devices is a good power-saving strategy.

8. Consolidate racks: If virtualization or new battery units are not an option, it may be high time to move from six racks that are 30 percent full to two racks that are fully populated. This makes all of the components in the rack (PDU, KVM, UPS, etc.) fully utilized, as well as the rack space itself.
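To put rough numbers on the virtualization tip, here is a back-of-envelope sketch; the server counts and wattage figures are illustrative assumptions, not measurements, so substitute readings from your own facility.

```python
# Rough consolidation math: total and per-workload power before and after
# virtualization. All figures below are assumed for illustration only.
physical_servers = 20      # workloads currently on dedicated 1U servers
watts_per_physical = 300   # assumed average draw per physical server

hosts = 2                  # virtualization hosts after consolidation
watts_per_host = 750       # assumed draw per (larger) host

before_w = physical_servers * watts_per_physical   # 6,000 W
after_w = hosts * watts_per_host                   # 1,500 W

print(f"Before: {before_w} W total ({watts_per_physical} W per workload)")
print(f"After:  {after_w} W total ({after_w / physical_servers:.0f} W per workload)")
```

Even though each host draws more than any single physical server, the per-workload average in this example drops from 300 W to 75 W.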
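For machines that can't receive the Group Policy preference (workgroup PCs, lab boxes), the same change can be scripted against the built-in powercfg utility. This is a minimal sketch; the GUID shown is the well-known default for the Windows Power saver plan, but confirm it on your build with powercfg /list.

```python
import subprocess

# Well-known GUID of the built-in "Power saver" plan on Windows Vista and
# later; verify with "powercfg /list", since custom images may differ.
POWER_SAVER = "a1841308-3541-4fab-bc81-f71556f20b4a"

def set_power_plan(guid: str) -> None:
    """Activate a power plan -- the same change the GPO preference delivers."""
    subprocess.run(["powercfg", "/setactive", guid], check=True)

if __name__ == "__main__":
    set_power_plan(POWER_SAVER)
```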

What power-saving tricks have you employed in your datacenter or for your PCs? Share your comments below.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

8 comments
DWRandolph

A little late to this thread, but I have to put in my 2 cents for KVMs. I am used to HP ProLiants; we use 1U LCD keyboard/screen models with a KVM switch mounted 0U behind them. The LCD goes to sleep when you close it and push it back into the rack, and the KVM switch is on the network, which gives me an alternate path even if the iLO is hung. If you are thinking of the power draw of an older CRT, yeah, get rid of those dinosaurs! But it's no fair comparing that to a modern integrated keyboard/trackpad/LCD unit. Our current rack layout only needs one KVM switch/LCD for every five racks, and cabling is not a problem since it uses RJ45 to the server chassis.

To me, the very name "crash cart" implies things have failed. Carts clog up the aisles, are not remotely accessible, and you have to deal with cabling every time you roll one to the next machine. My worst experience was when I was called in to help a site that used carts. They were fine when only one person needed to work on one machine for planned maintenance, but when a network issue had 10 of us each trying to recover several machines at a time, the mess between racks became rather dangerous. Not to mention the machines that would only recognize a keyboard/mouse plugged in at power-on: the network was not letting us get to the OS or the DRAC to issue a clean shutdown/restart, the machine would not see the cart, so we had to plug in the keyboard and then push buttons for a hard power off/on. Luckily things were already so hosed that there was no application activity to leave corrupted disk structures behind. If they had had KVMs every few racks, with the KVM on a different subnet, much of that Charlie Foxtrot could have been handled from our desks instead of by running around the computer room.

b4real

Thank you for sharing them.

ccie5000

Other things to look for:

- incorrect row directionality
- cascading equipment exhausts
- recirculation within cabinets
- blow-by at the rack bottom
- uneven air flow
- hot spots
- air short-circuiting
- leaky raised floors
- hot and cold air mixing
- humidity instability
- improperly regulated reheat circuits
- limited capacities
- improper set-points
- comfort cooling vs. process cooling

The guy who used to be Google's data center architect started Precision Air and Energy Services. (Disclosure: I'm not affiliated with PA&E, but the founder is a former coworker, friend, and IMO a genius at HVAC.) In a typical data center they can reduce energy costs 30-55% via precise air flow control and variable water/glycol flow. They also train your staff to keep everything in balance after they're gone.

PVBenn

I've done blades and virtualized a lot of low-volume servers, but the biggest bang for my dollar came from replacing the boot drives on the blades with SSDs. From a power consumption point of view, I'm saving 99 W per blade: the servers boot from two drives in a RAID 1 array that draw 50 W each, while the SSDs we replaced them with use only 0.5 W each. All heavily used log volumes are on the SAN with the rest of the data, and everything has been running fine for the past year. I'm now planning to do the same with the rest of my server farm as the machines retire. Now if I could only get SSDs for the SAN itself, I could really cut power consumption on both spinning disks and cooling. The other thing I noticed, as did my data center manager, was that a few years ago some Intel CPUs ran really hot and consumed tonnes of power. Retire those servers quickly if you can; by today's standards they're slow and hot.
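A quick annualization of the commenter's 99 W figure shows why this adds up; the electricity rate below is an assumed round number for illustration.

```python
# Annualize the 99 W per-blade saving described in the comment above.
watts_saved = 99                               # 2 x 50 W disks -> 2 x 0.5 W SSDs
kwh_per_year = watts_saved * 24 * 365 / 1000   # ~867 kWh per blade
rate_usd_per_kwh = 0.10                        # assumed illustrative utility rate

print(f"{kwh_per_year:.0f} kWh/year per blade, "
      f"about ${kwh_per_year * rate_usd_per_kwh:.0f}/year before cooling savings")
```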

Alpha_Dog

Server rooms of major corporations and agencies do not account for the majority of frivolous IT consumption. As an independent consultant, I see a lot of waste, both in the form of money and of power. Small business folks have no clue what they need, and this is often taken advantage of by sales weenies, including consultants who are more sales-oriented than focused on their customers' needs.

We have a client who bought eight big servers from a Dell reseller to run basic internal DNS, PDC, mail, and web services. We revamped his network with two and sold the remainder. He was also sold a color laser printer, even though he prints fewer than two pages a day on average. Forget the money he wasted; do the math and see how much power the excess gear drew over the years it ran 24/7. For his minimal use, he needed one minimal server, which could have run on an Intel Atom processor, and perhaps one single-CPU Xeon server. The printer should have been a PDF writer for that limited printing, and the money he saved should have gone to replacing the CRTs around the office.

The takeaway is that sales-oriented consultants will try to make as much commission as possible, selling the customer whatever they can afford rather than what they need, and paying no attention to power consumption. The reason I bring this up is that this pattern is likely repeated in small businesses across the nation. Multiplied out, a low estimate of the small-business problem eclipses the savings from tweaking a fairly well-run datacenter.
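Taking the commenter up on "do the math": a rough sketch with assumed figures, since the actual server models and time span aren't given.

```python
# Rough waste estimate for the six surplus servers described above.
# The 350 W per-server draw and three-year span are assumptions.
surplus_servers = 6
watts_each = 350
years = 3

kwh = surplus_servers * watts_each * 24 * 365 * years / 1000
print(f"~{kwh:,.0f} kWh burned by idle surplus gear over {years} years")
```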

fhrivers

Why not raise the temperature in the server room to around 80°F? According to studies done by Google and IBM, there's a sweet spot between the reliability of your devices and energy consumption. Why does a server room need to be at 65 degrees?

fionacampbell

Sadly, native Group Policy support for power management is very hit and miss. The feature was added in Vista but cannot be used to control Windows XP power plans. GPPs can be used to deliver power preferences to users, but in many cases users can change them.

The built-in implementations share a number of common flaws. They only allow a sleep/hibernate action to be configured and don't permit shutdown or logout as a power-saving action. Similarly, they don't offer the ability to apply a different power plan at different times of day or when nobody is logged on, to guard against insomnia caused by rogue applications, or to schedule a shutdown at a specific time.

There is a thriving industry in third-party power management software. The better products offer all of the above configurability plus reporting; this last feature is critical for measuring performance and spotting problems. The Data Synergy PowerMAN tool (www.datasynergy.co.uk) is a popular suite that gives IT staff the tools they need to quickly and effectively deliver enterprise-wide power management. The software is used by leading organisations including numerous universities, call center operators, the US Dept of Energy, and the US Dept of Agriculture.

fhrivers

You will have to upgrade.
