Data centers were historically the pride and joy of many IT organizations. Huge IT budgets seemed much more palatable as executives were guided through rows of server racks and regaled with tales of technical triumphs over the whine of thousands of fans. After several years of economic near-calamity, massive data centers seem more like an albatross around the neck of IT, with huge power bills, expensive maintenance and, after years of budget-driven neglect, large potential upgrade costs. In that environment, here are some trends worth considering as you ponder the direction of your data center.

Trend 1: The non-existent data center

Cloud computing has obviously taken the IT world by storm and, for data centers in particular, promises to free IT departments from owning and maintaining a data center at all. In this utopian world, the cloud securely houses all of our computing power and storage, and instantly scales to meet business demand while internal IT focuses on other problems. All of this is delivered in a hardware- and application-agnostic environment at commodity pricing, and service and support are provided by unicorns and leprechauns.

Obviously cloud has not evolved to this point, in terms of both technology and vendor capability. Bandwidth and security remain major concerns, and for certain services both create insurmountable barriers to moving offsite. On the vendor front, even major cloud players struggle to offer the utility model they promise, and many have simply rebranded their hosted model as “cloud.” If you’re forced to design the environment down to selecting the CPU and OS patch level, and to manually provision and allocate discrete hardware, much of the benefit of cloud is lost.
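To make the contrast concrete, here is a minimal sketch of what utility-style provisioning looks like when it works: you describe the capacity you need through an API, and the provider chooses the underlying hardware. It uses Python with the boto3 AWS SDK purely as an illustration; the image ID and instance sizing are placeholders, not recommendations.

```python
import boto3

# Utility-style provisioning: describe the capacity you need and let
# the provider pick the underlying hardware. The image ID and instance
# sizing below are placeholders for illustration.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",    # placeholder machine image
    InstanceType="m5.large",   # a generic instance class, not a spec sheet
    MinCount=1,                # accept as few as one instance...
    MaxCount=4,                # ...and as many as four, if available
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```

Notice what the request doesn’t name: no rack position, no CPU stepping, no patch level. When a “cloud” vendor makes you specify those anyway, you’re back to hosting.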

While we have yet to reach cloud nirvana, some applications are fairly obvious cloud candidates. Migrating these apps and diligently reallocating or disposing of the freed IT resources can shrink the size and cost of your existing data center and, for smaller entities, eliminate it altogether.

Trend 2: Rethinking the rack

Traditional, rackable x86 servers have been with us for a couple of decades, and the size and layout of the hardware have essentially become global standards. While this has worked well, cooling and maintainability have become growing concerns. Though one wouldn’t necessarily think of Facebook as a driver of data center innovation, the company maintains several massive data centers to support its social networking platform, and it launched the Open Compute Project two years ago to rethink how data centers are designed, down to the layout and architecture of the server itself.

The entire design is centered on lowering cost: superfluous components like the physical case are removed, and the layout is built to optimize cooling and power efficiency, reducing cost further still. While a turn away from major vendors might be too daunting for many companies, the Open Compute Project is bound to influence traditional vendors and is worth investigating if you’re considering expanding your data center.

Trend 3: Refocusing

Read other articles about data center trends and you’ll hear about hybrid clouds, private clouds, and all manner of “models” that do little to clarify what is really a refocusing on applications. Previously, data center design treated application requirements as a means to a technical end: a data-centric application would quickly lead to detailed designs of data networks and storage arrays. In a traditional manufacturing analogy, the item being built would immediately trigger discussions of tooling, machinery, and factory capacity. Now, most product companies approach product design from a marketing perspective and rely on a combination of internal and external partners to work out the nuances of building the end product.

Data centers are shifting focus in a similar manner: data center design is no longer about building or buying IT “tooling,” but about providing scalable capacity that can meet a business need. Rather than worrying about whether you need a “Hybrid Private Cloud” or a “Vendor-Managed Internal Cloud,” focus on determining the capabilities you need: which are most likely to fluctuate with business demand, and which are best maintained in-house as a core competency. Where technically and financially feasible, have someone else run the underlying infrastructure. At the end of the day, business users and IT buyers want to buy access to an IT-driven service, not infrastructure.
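As a rough illustration of buying capacity instead of tooling, the sketch below (again Python with boto3, assuming a hypothetical Auto Scaling group named “web-tier” already exists) attaches a target-tracking policy so the fleet grows and shrinks with demand rather than with a hardware plan. The group name and CPU target are assumptions for illustration only.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Attach a target-tracking policy to a hypothetical "web-tier" group:
# the fleet grows and shrinks to hold average CPU near 50%, so capacity
# follows business demand rather than a hardware plan.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",   # hypothetical, assumed to exist
    PolicyName="track-average-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,           # illustrative target, not a recommendation
    },
)
```

The business conversation here is about the target and the demand curve, not about how many servers to rack next quarter.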

For more on the 21st century data center, see ZDNet’s special feature page, or download TechRepublic’s Executive Guide to the 21st Century Data Center.