Can the data center (DC) sector achieve 300% compute capacity growth while using 60% less space? Yes. That's the assertion Gartner Group made a year ago. How will it happen? Not through any single bolt of lightning but through myriad changes that either conserve energy or reduce thermal impacts.
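To make the target concrete, here is a back-of-the-envelope calculation of what it implies for compute density per square foot (reading "300% growth" as 4x today's capacity is my assumption):

```python
# Back-of-the-envelope: what does "300% more compute in 60% less space" imply?
# Assumption: "300% growth" means capacity ends up at 4x today's level.
capacity_multiplier = 4.0   # 300% growth -> 4x original capacity
space_multiplier = 0.4      # 60% less space -> 40% of original footprint

density_multiplier = capacity_multiplier / space_multiplier
print(f"Implied compute density increase: {density_multiplier:.0f}x per square foot")
# -> Implied compute density increase: 10x per square foot
```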

Some of this will be achieved with obvious moves, such as incorporating row- and rack-based cooling into all new data centers and retrofitting old ones with the same measures. Adopting a modular approach to DC expansion will also become a viable option as engineers equip these facilities with new ways to partition unused floorspace off from HVAC loads and bank that space for future consumption.

The biggest improvements in these areas, however, will come from virtualization software that achieves significantly higher user density per physical CPU or HD spindle. Software is not bound by the laws of physics (or at least far less so than hardware). Yet the current generation of hypervisors and data center management software exerts a considerable drag on compute operations, perhaps as great as 10%.
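To put that drag in perspective, here is a rough sketch of how much usable capacity a leaner virtualization layer could free up; the 10% figure is the one cited above, while the improved overhead is purely an assumption:

```python
# Rough sketch of how much usable capacity a leaner virtualization layer frees up.
# The 10% overhead figure is the one cited in the text; the improved figure is assumed.
raw_capacity = 1000          # arbitrary units of physical compute
current_overhead = 0.10      # ~10% lost to hypervisor and DC management software
improved_overhead = 0.02     # hypothetical next-generation overhead

usable_now = raw_capacity * (1 - current_overhead)
usable_then = raw_capacity * (1 - improved_overhead)
gain = (usable_then / usable_now - 1) * 100
print(f"Usable today: {usable_now:.0f} units; with leaner software: {usable_then:.0f} units "
      f"({gain:.1f}% more from the same hardware)")
```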

This drag shows up not only as lost density but as latency. For example, spinning up a new VM on clouds powered by some of the most popular hypervisor technologies can take anywhere from 10 minutes to over an hour. During that window, hardware is reserved but essentially idling. Just as Bill Gates and Steve Jobs wanted instant-on for PCs, hypervisors that can spin up new virtual machines in seconds will go a long way towards achieving the 300% efficiency improvement. Likewise, virtual data center orchestration layers must be able to push more of the compute load out to the ever-growing cloud of connected devices, whose increasingly powerful processors can relieve server-side tasks and minimize the need for big iron in the DC.
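As a hypothetical illustration, consider how much of a short-lived VM's reserved slot is wasted at different spin-up times; the spin-up range is the one cited above, and the four-hour VM lifetime is an assumed workload profile:

```python
# Hypothetical illustration of reserved-but-idle time caused by slow VM provisioning.
# Spin-up times use the range cited above; the 4-hour VM lifetime is an assumed workload.
def idle_fraction(spinup_minutes: float, vm_lifetime_hours: float) -> float:
    """Fraction of a VM's reserved hardware time spent idling during provisioning."""
    total_minutes = vm_lifetime_hours * 60 + spinup_minutes
    return spinup_minutes / total_minutes

for spinup in (0.1, 10, 60):                 # seconds-class, 10 minutes, 1 hour
    waste = idle_fraction(spinup, vm_lifetime_hours=4)
    print(f"Spin-up of {spinup:>4} min on a 4-hour VM wastes {waste:.1%} of its slot")
```

A seconds-class spin-up makes provisioning overhead essentially vanish, while an hour-long one idles a fifth of that VM's reserved time.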

On the hardware side, we have actually seen very little innovation in heat dispersion technologies for servers beyond improved airflow. Part of the reason is that R&D on internal server components is hard to justify when those components must be sold so cheaply. However, as cost pressures on DC operators build, the premiums paid for even small percentage improvements in thermal output or power consumption will grow. Newer phase-change materials or liquids that can capture heat may be applied to high-capacity servers. Similarly, new options for disk storage (MRAM, for example) are already on innovators' roadmaps and are poised to break into the market. These will go a long way towards allowing higher user density per piece of hardware.
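To see why phase-change materials are attractive for capturing heat, consider a rough calculation with ballpark numbers; the server heat output and latent heat figure below are assumptions, not vendor data:

```python
# Rough illustration of why phase-change materials (PCMs) interest server designers.
# All figures are ballpark assumptions, not vendor data.
server_heat_w = 500                   # assumed sustained heat output of one dense server
pcm_latent_heat_j_per_kg = 200_000    # ~200 kJ/kg, typical of paraffin-class PCMs

heat_per_hour_j = server_heat_w * 3600          # joules of heat produced in one hour
pcm_mass_kg = heat_per_hour_j / pcm_latent_heat_j_per_kg
print(f"Absorbing one hour of that heat at the melting point takes ~{pcm_mass_kg:.0f} kg of PCM,")
print("enough to buffer thermal spikes and smooth the load handed to the room-level HVAC.")
```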

Then there is the arena of HVAC and lighting controls. Few realize that improperly controlled lighting can increase the energy consumed by HVAC systems by 20% or more. By deploying highly tunable LED lighting systems with a favorable lumens-per-watt ratio, DC operators might be able to cut power consumption for both lighting and HVAC tremendously. At present, very few DCs have state-of-the-art, fine-grained lighting and HVAC controls with zoning, time programming, and occupancy sensing.
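A hedged sketch of the lighting-to-HVAC coupling makes the point: every watt of lighting ends up as heat the cooling plant must remove, so the savings compound. The fixture count, wattages, and HVAC coefficient of performance below are illustrative assumptions:

```python
# Every watt of lighting becomes heat the HVAC plant must remove; cutting lighting power
# therefore cuts cooling power too. Fixture counts, wattages, and COP are assumptions.
fixtures = 400
fluorescent_w, led_w = 96, 40        # assumed draw per fixture
hvac_cop = 3.0                       # watts of heat removed per watt of HVAC input

def lighting_plus_cooling_w(per_fixture_w: float) -> float:
    lighting = fixtures * per_fixture_w
    cooling = lighting / hvac_cop    # extra HVAC electrical power to reject that heat
    return lighting + cooling

before = lighting_plus_cooling_w(fluorescent_w)
after = lighting_plus_cooling_w(led_w)
print(f"Lighting plus induced HVAC load: {before / 1000:.1f} kW -> {after / 1000:.1f} kW "
      f"({1 - after / before:.0%} reduction)")
```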

Some of the moves, however, could be a bit more radical. The easiest way to cool a space is to build it in an environment that is naturally cool. So I would expect to see data centers dug into hillsides or into the ground to tap the planet's geothermal cooling capacity. An excellent example is Google's decision to open a 200 million euro server hall in Hamina, Finland; Google was attracted by Finland's cold climate and low electricity prices. Similarly, hot air rises, so expanding data centers vertically to take greater advantage of thermal updrafts could hold some promise. Equally intriguing are technologies that tap the sun or nearby cool water sources to reduce air conditioning costs. Using solar thermal power to recondense cooling fluids can cut costs considerably for data centers in warmer climes such as Las Vegas. Likewise, DCs located in colder places near water – a very common siting – can run naturally frigid water through radiant cooling systems. Both of these techniques incur massive upfront costs but, past the payback point, can be incredibly effective.
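For a sense of the economics, here is a simple payback sketch for a capital-heavy cooling retrofit; every number is an assumption chosen only to show the shape of the calculation:

```python
# Simple payback sketch for a capital-heavy cooling retrofit; every figure is assumed.
capex_usd = 2_000_000                # assumed cost of a water-side free-cooling plant
annual_cooling_bill_usd = 900_000    # assumed current chiller energy spend
savings_fraction = 0.5               # assumed share of that bill the retrofit eliminates

annual_savings = annual_cooling_bill_usd * savings_fraction
payback_years = capex_usd / annual_savings
print(f"Simple payback in {payback_years:.1f} years; every year after that banks "
      f"${annual_savings:,.0f} in avoided cooling spend")
```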

The bottom line is this: we can achieve the 300% / 60% target within the next five years if we execute a multi-pronged strategy that both optimizes DCs with existing efficiency measures and improves the economics of expensive economization options so they become more accessible. Most importantly, this strategy must bring to bear entirely new innovations in software, hardware, and HVAC that both dramatically increase density and dramatically decrease energy expenditure per compute cycle. The lowest-hanging fruit is software: improving virtualization beyond the current model toward a future where shared resource pools make even higher rates of hardware utilization easier to achieve.

Author Lisa Petrucci is the Vice President of Global Marketing at Joyent. She has worked in senior management roles at SixApart, IBM and numerous other technology companies.