Data Centers

Stop trying to be Facebook and Google: there are easier ways to cut datacentre costs

Companies worldwide are stepping up investment in their datacentre estates, yet they are chasing ever slimmer cost savings while more straightforward wins go ignored.

Companies trying to cut datacentre running costs are too focused on squeezing efficiencies from cooling and other infrastructure when they would be better off pruning underused servers.

Despite the majority of companies increasing spending on datacentres over the past year, the average efficiency of power distribution and cooling in facilities worldwide fell slightly, according to the 2014 Data Center Industry Survey by the Uptime Institute.

While 62 percent of firms increased DC budgets, the average PUE rating for centres worsened, rising to 1.7. PUE, or power usage effectiveness, is the ratio of the total energy a centre consumes to the energy that actually reaches the servers, so a rating of 1.0 would mean no power is lost to associated DC infrastructure. In recent years major tech companies like Facebook and Google have been edging closer to that ideal, using methods such as hot-air containment, water-side economisation, fresh-air cooling and extensive monitoring.
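Since PUE is just a ratio of two metered figures, the arithmetic is simple. A minimal sketch; the 1,700 kW and 1,000 kW numbers below are illustrative, not from the survey:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    1.0 is the theoretical ideal (every watt reaches the servers);
    the survey's reported industry average is 1.7.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,700 kW overall to run 1,000 kW of IT load:
print(round(pue(1700, 1000), 2))  # 1.7
```

The same ratio explains why a sub-1.0 rating is unattainable: the facility's total draw always includes the IT load itself, so the numerator can never fall below the denominator.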

[Figure: average datacentre PUE (datacentrepue.png). Image: Uptime Institute]
Matt Stansberry, director of content and publications with the Uptime Institute, explained why the majority of companies are having difficulty getting PUE down to the levels achieved by the web giants.

"They're hitting diminishing returns because they've done so much of the work already. There's only so much you can do beyond PUE 1.5 without significant investment, renovation and major staff training.

"To get down below 1.5 requires a purpose-built, intensive design, so people are stopping about there. Shaving those last tenths of a point doesn't even buy you that much from a cost and energy-saving standpoint."

Facebook may be reporting a PUE of 1.08 at its datacentre in Prineville, Oregon, but web giants like Facebook and Google dedicate huge amounts of money to cutting the costs of running their facilities. That level of investment pays off for these companies because datacentre running costs account for a large proportion of their outgoings.

Most enterprises don't have the skills or resources to emulate these firms and build custom datacentres, whose infrastructure, computer hardware and software have been stripped back to what is necessary to serve their core workloads.

The majority of firms, 80 percent, have improved cooling efficiency through hot/cold aisle containment and by raising the temperatures they run servers at, but non-standard methods of controlling temperatures, such as evaporative or liquid cooling, were used by fewer than one fifth of firms.

For most enterprises a PUE rating of 1.5 is "probably as far as you're going to get," said Stansberry.

The problem is that PUE has started to be used as a performance target by managers who don't truly understand what they're asking, said Stansberry.

"We were at a very large financial organisation and the gentleman who runs the global portfolio of datacentres for them said 'My boss has read enough datacentre articles to be dangerous'.

"'He's setting PUE targets across all of our sites and he doesn't know what it's going to cost us to get there. He doesn't understand what the trade-offs are, and that some of these sites can't get there.'"

According to the survey, a small portion of firms have even set their PUE targets beyond what Facebook and Google achieve, chasing a sub-1.0 rating - a target that is impossible by definition, since a facility cannot deliver more power to its servers than it draws in total.

[Figure: company target PUEs for their primary site (puetargets.png). Image: Uptime Institute]
Managers are taking an interest in datacentre energy costs in general, with executives receiving reports on costs at 70 percent of firms and setting cost targets at 46 percent of companies.

Take out the zombies

Rather than fixating on shaving PUE, many companies could reduce these costs more effectively by focusing on removing what Stansberry calls "comatose" servers from their estate.

"Big banks and other enterprise folks are finding their capacity planning is out of whack because the server hardware is becoming more efficient," he said.

"Since server hardware is doing a better job of consolidating workloads they don't need the space they thought they needed three years ago. A lot of these sites are overprovisioned. Most organisations are carrying around a huge burden of under-utilised or plain orphaned IT hardware that's just sitting there plugged in."

The Uptime Institute estimates that one fifth of the servers run by the average organisation are under-utilised, while companies themselves put the proportion far lower, at below five percent.
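To put that one-fifth figure in context, a back-of-the-envelope estimate of what idle-but-powered servers cost is straightforward. Every number below (idle draw, electricity price, fleet size) is an illustrative assumption, not survey data:

```python
# Illustrative figures only - assumptions, not Uptime Institute data.
IDLE_POWER_W = 150       # assumed idle draw per comatose server, watts
PUE = 1.7                # the survey's average facility overhead ratio
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10     # assumed electricity price, USD

def annual_waste_usd(comatose_servers: int) -> float:
    """Rough yearly electricity cost of servers that sit plugged in but idle.

    Multiplying by PUE counts the cooling and power-distribution
    overhead each idle watt drags along with it.
    """
    kwh = comatose_servers * IDLE_POWER_W / 1000 * HOURS_PER_YEAR
    return kwh * PUE * PRICE_PER_KWH

# 1,000 idle machines in a 5,000-server estate (the survey's one-fifth figure):
print(round(annual_waste_usd(1000)))  # 223380
```

Even on these conservative assumptions the waste runs to roughly $220,000 a year, which is the article's point: decommissioning idle kit can beat chasing the last tenths of a PUE point.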

However, the Uptime Institute's Stansberry said many companies are yet to carry out audits to gauge how many machines are under-utilised.

"They're saying it's five percent, but half of them aren't doing any auditing so they have no idea."

The most common reason given for not auditing for under-utilised servers was "lack of management support".

The 2014 Data Center Industry Survey polled execs, as well as facilities and IT managers, at enterprises, government organisations, finance firms and organisations running co-location facilities.

About

Nick Heath is chief reporter for TechRepublic UK. He writes about the technology that IT-decision makers need to know about, and the latest happenings in the European tech scene.
