Data center outlook: Three critical areas of strategic focus

Data center spending is down, as companies build on the foundations they've laid over the past few years. Here's a look at the concerns and priorities that IT leaders are focused on right now.

As we reach the midpoint of 2014, data center managers seem to be catching their breath, building on work they started over the past few years. While this absorption and consolidation of work occurs, many data center managers are spending less. A 2013 study conducted by TheInfoPro (an arm of 451 Research) bears this out. In a survey of 180 server and virtualization professionals, 46% of respondents said they were planning to maintain (but not increase) their 2013 budgets on servers and virtualization, while 26% said they would actually spend less.

But even though spending has flattened out, organizations are busy developing their data center strategies and deciding how best to use the latest technologies. Let's look at where the data center action is headed.

1: Solidification of cloud architecture and strategies

Virtualization and servers have been the underpinnings of cloud, as enterprises have advanced cloud initiatives over the past five years. Now, most organizations have the necessary virtualization, gear, and expertise in place to participate in a wide array of cloud deployment models -- whether it is private cloud that the enterprise itself hosts; public cloud, which the enterprise subscribes to, depending on third parties to deliver IT capability; or hybrid cloud, which combines both private and public cloud capabilities.

Companies are evaluating and architecting long-range policies and practices that the enterprise and corporate IT will institute for cloud. Decisions will focus on these objectives:

  • 24/7/365 reliability in the cloud
  • Rapid access to cloud capability for users
  • Data protection
  • Safekeeping and control
  • Reduced data center costs
  • Minimized risk

All these factors will be active meeting topics, and for many enterprises they will ultimately coalesce into a "best-of-class" cloud architecture that is likely to be hybrid, defined by sets of criteria that spell out which applications are best suited for private cloud and which for public cloud, based on their functions, security, access, and data requirements.
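
To make such criteria actionable, some teams codify them as simple placement rules. Below is a minimal sketch of that idea in Python; the profile fields, rules, and the three-way private/public/hybrid outcome are illustrative assumptions, not a standard methodology.

    from dataclasses import dataclass

    @dataclass
    class AppProfile:
        name: str
        handles_regulated_data: bool   # e.g., PII or financial records
        latency_sensitive: bool        # needs proximity to on-premises systems
        demand_is_bursty: bool         # benefits from elastic public capacity

    def recommend_placement(app: AppProfile) -> str:
        """Return 'private', 'public', or 'hybrid' from simple illustrative rules."""
        if app.handles_regulated_data and app.latency_sensitive:
            return "private"
        if app.demand_is_bursty and not app.handles_regulated_data:
            return "public"
        return "hybrid"  # mixed requirements: split tiers across both clouds

    crm = AppProfile("crm", handles_regulated_data=True,
                     latency_sensitive=True, demand_is_bursty=False)
    print(crm.name, "->", recommend_placement(crm))  # crm -> private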

2: Software orchestration of the data center

Rapidly repurposing data center resources for fluctuating business needs is going to depend on software, automation, and built-in best-practices intelligence. It's simply too time-consuming to effect configuration changes at the hardware level of the data center every time a change is needed.

Virtualization has been, and will continue to be, a major driver of software-directed resource allocation and reallocation, because it can shift resources across the applications and guest operating systems on a single server without IT operator intervention.
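
As a toy illustration of that kind of software-directed reallocation, the sketch below shifts a virtual CPU from an idle VM to a busy one on the same host. The thresholds and the in-place dictionary model are assumptions for illustration; a real implementation would call a hypervisor API rather than mutate dictionaries.

    BUSY, IDLE = 0.80, 0.20  # CPU-utilization thresholds (assumed policy values)

    def rebalance(vms):
        """vms: list of dicts with 'name', 'vcpus', and 'util' (0.0-1.0)."""
        donors = [v for v in vms if v["util"] < IDLE and v["vcpus"] > 1]
        needers = [v for v in vms if v["util"] > BUSY]
        for donor, needer in zip(donors, needers):
            donor["vcpus"] -= 1          # reclaim a vCPU from the idle VM
            needer["vcpus"] += 1         # grant it to the busy VM
            print(f"moved 1 vCPU: {donor['name']} -> {needer['name']}")

    rebalance([{"name": "web01", "vcpus": 4, "util": 0.92},
               {"name": "batch01", "vcpus": 4, "util": 0.05}])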

In the storage area, solid-state drives (SSDs), cache, and in-memory technologies will use prepackaged storage management rule sets from vendors. These automated management techniques will place frequently accessed data in memory or in cache and relegate seldom-accessed data to slower, less expensive storage devices like hard disk drives (HDDs). Small and medium-size businesses (SMBs) are likely to adopt this automation as-is, because it is superior to any storage utilization or data management practices they presently have. Enterprises are likely to use the automation as a base and then customize it to their more specific business requirements.
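
A rule set of the kind described above can be surprisingly small. The sketch below shows the core decision in Python; the thresholds and tier names are illustrative assumptions, not any vendor's actual defaults.

    HOT_ACCESSES_PER_DAY = 100   # assumed promotion threshold
    COLD_IDLE_DAYS = 30          # assumed demotion threshold

    def choose_tier(accesses_per_day: float, days_since_access: float) -> str:
        if accesses_per_day >= HOT_ACCESSES_PER_DAY:
            return "memory"      # hottest data is pinned in RAM or cache
        if days_since_access >= COLD_IDLE_DAYS:
            return "hdd"         # seldom-accessed data goes to cheap spinning disk
        return "ssd"             # everything else sits on the middle tier

    for blob, rate, idle in [("orders.db", 500, 0), ("logs-2013.tar", 0, 90)]:
        print(blob, "->", choose_tier(rate, idle))
    # orders.db -> memory
    # logs-2013.tar -> hdd

An enterprise customizing the rule set, as described above, would adjust these thresholds or add rules keyed to specific applications.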

As software-driven architecture spreads further into the data center, the next frontier will be software-defined networking (SDN), which lets network administrators manage network services and traffic flows through software instead of drilling down into detailed control settings in device software, firmware, and even hardware. The goal of SDN is to automate these underlying layers of the network so that administrators can make their decisions and then let the software automation carry them out.
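
In concrete terms, "managing traffic flows through software" means expressing intent as a rule and letting a central controller program the switches. The sketch below composes an OpenFlow-style rule and posts it to a controller's northbound REST interface; the URL and JSON shape are hypothetical stand-ins, not any specific controller's API.

    import json
    import urllib.request

    CONTROLLER = "http://sdn-controller.example.com:8080/flows"  # hypothetical endpoint

    def push_flow(dpid: str, match: dict, action: str, priority: int = 100) -> int:
        """POST one OpenFlow-style rule to the controller; returns HTTP status."""
        rule = {"switch": dpid, "priority": priority, "match": match, "action": action}
        req = urllib.request.Request(CONTROLLER, data=json.dumps(rule).encode(),
                                     headers={"Content-Type": "application/json"},
                                     method="POST")
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Example intent: steer bulk backup traffic (TCP port 873) onto a low-priority queue.
    # push_flow("00:00:00:00:00:01",
    #           {"eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 873},
    #           action="output:QUEUE_BULK")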

However, the real challenge is determining which types of software are best able to give data center managers end-to-end control of their data centers, application performance, and service levels. End-to-end visibility and manageability of applications and their performance are what matter most to the business, and this is an area where SDN presents challenges.

"SDN, which essentially virtualizes the network, will convert the network into a big and abstracted fabric that is highly dynamic," said Mark Burns, director of product management at Compuware, a technology solutions company. "SDN 'flattens' the network with the abstraction that it brings. When traffic moves from one network to another, the abstraction of SDN will make it difficult for network engineers to know where an application is flowing."

Another catch with SDN is its dependence on hardware at remote endpoints of the network. Although SDN is centrally managed through controllers that appear as single switches, each physical location the network routes to requires its own physical switch, and that switch must be operated by a field technician at the site. For global operations such as the armed forces, having to send IT staff to distant command posts rather than addressing network problems remotely has not been well received.

A more realistic focus for data center managers seeking software-driven control of their data centers is data center infrastructure management (DCIM), which has years of research and development behind it. DCIM is already operational in many enterprise data centers and has continued to make strides toward end-to-end visibility of networks, application workflows and management, and centralized resource management in data centers.

DCIM's objective is to provide a single point of control and a "single version of the truth" about data center performance via one toolset for data center professionals. Normalizing performance data from diverse IT disciplines -- databases, networks, applications, and systems -- reduces the time needed to discover, isolate, diagnose, and resolve performance issues, because everyone is working from the same set of information. Just as significantly, DCIM appears to be the most viable path toward end-to-end visibility of data center operations across networks and applications, something every enterprise wants.
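
The normalization step is the heart of that "single version of the truth." The sketch below maps readings from two subsystems, reported in different shapes and units, into one common record; the field names are illustrative assumptions, not any DCIM product's schema.

    def normalize(source: str, raw: dict) -> dict:
        """Map a subsystem-specific reading into a common metric record."""
        if source == "network":   # network gear reports latency in microseconds
            return {"system": raw["device"], "metric": "latency_ms",
                    "value": raw["latency_us"] / 1000.0}
        if source == "database":  # databases report query time in seconds
            return {"system": raw["instance"], "metric": "latency_ms",
                    "value": raw["query_time_s"] * 1000.0}
        raise ValueError(f"unknown source: {source}")

    readings = [("network", {"device": "core-sw-1", "latency_us": 850}),
                ("database", {"instance": "erp-db", "query_time_s": 0.42})]
    for src, raw in readings:
        print(normalize(src, raw))  # every team now reads latency in the same unit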

3: Risk management and data center resiliency

"Everywhere, customers are asking about risk and resiliency in their supply chains," said Chris O'Brien, senior vice president at C.H. Robinson, a third-party logistics provider (3PL) in the supply chain vertical.

A major part of the effort comes down to the information and data centers that run these supply chains. So it's no surprise that more enterprises are using predictive analytics to determine where natural and other disasters are most likely to occur -- and, in many cases, building or co-locating data centers in various regions of the world so that operations can fail over to them if a major outage occurs.

There are many models for distributing multiple data centers, but these appear to be the most predominant (a simple failover sketch follows the list):

  • If the enterprise primarily serves a metropolitan area, the strategy is to operate two data centers within or near the metropolitan region that can fail over to each other in real time, plus a third data center outside the region entirely -- with a more protracted recovery and failover time. The third data center is usually a co-location site and occasionally a cloud service.
  • If the enterprise is multinational, the strategy is to operate three or more data centers throughout the world, all in different geographical areas, and in most cases, with full IT staff and failover capability at each data center. Co-location is used to save costs in building multiple data centers. If the company is an SMB, the cloud might also be considered as an outsource option for data center services.
  • In some cases, enterprises rotate production between their data centers on a quarterly basis to ensure that every data center can run production at any time.
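
As a toy illustration of the metro-pair-plus-remote-site model in the first bullet, the sketch below tries the in-region twin first and falls back to the out-of-region site. Site names, the health probe, and the preference order are illustrative assumptions.

    FAILOVER_ORDER = ["metro-a", "metro-b", "remote-colo"]  # preferred site first

    def healthy(site: str) -> bool:
        """Stand-in for a real probe (ping, synthetic transaction, heartbeat)."""
        return site != "metro-a"     # simulate an outage at metro-a for the demo

    def active_site() -> str:
        for site in FAILOVER_ORDER:  # walk the preference order
            if healthy(site):
                return site
        raise RuntimeError("no healthy site available")

    print("serving from:", active_site())  # serving from: metro-b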

"Resiliency is so important now because organizations over the past few years have witnessed very visible disruptions and the repercussions of these disruptions," O'Brien said. "They now know that risk is no longer isolated and how impactive it can be."

Recap

In the months ahead, data center managers will be consolidating what they've learned and implemented over the past few years so they can forge mature data center policies, operations, and future strategies.

Within these strategies, sustainability and energy-savings initiatives, facility management initiatives, and governance initiatives will be weighed alongside the major decisions that must be made about a formal cloud architecture and methodology that meets future needs. Organizations are also evaluating the role of software-driven automation in the data center and determining which new and multifaceted risks they should manage and prepare for.

From a budgetary standpoint, this might seem like a "slow" year in the data center. But inside data centers themselves, the action in strategy and implementation is anything but slow.

