
The virtualization bubble bursts: Enterprise spend shifts to software-defined data centers

451 Research's TheInfoPro survey reports that enterprise respondents plan to level off their spending on servers and virtualization. Find out what this might mean for cloud-ready data center deployments.

 

Image: iStock/welcomia

TheInfoPro service of 451 Research has released its latest servers and virtualization study, predicting that infrastructure spending will plateau over the next two years as server professionals and IT decision makers pay more attention to software-defined data centers. Cloud platforms will be the most popular technology for adoption, followed by the management and automation tools necessary for virtualized data centers.

The survey also found that, amid a general slowdown in infrastructure technology spending, respondents plan to spend considerably less on x86 rack servers, while integrated infrastructure solutions are gaining acceptance. Despite the appeal of micro-server technology, only 5 percent of respondents plan a future deployment. Lastly, in contrast to the situation with infrastructure technologies, server professionals are planning to spend more on the software needed to run cloud-ready data centers.

To discuss the servers and virtualization report, TechRepublic recently had a telephone briefing and email exchange with Peter ffoulkes, research director for servers, virtualization, and cloud computing at the TheInfoPro service of 451 Research.

Key takeaways from the interview:

  • A cloud-ready data center has three evolutionary stages: agile, automated, and adaptable.
  • Consolidation and standardization of server architectures are shifting the balance from x86 rack servers to blade servers.
  • The gap between VMware and Microsoft in the hypervisor technology market is closing, though VMware is still the acknowledged leader.
  • In software-defined data centers, it is reasonable to assume that more data center control will be implemented in the software layer rather than in hardware.
  • VMware will need to succeed in the virtualized data center, private cloud, and hybrid cloud markets, and will face stronger competition in those areas.
  • Only 2 percent of respondents are planning public cloud projects, owing to technology readiness and compliance and regulatory issues.
  • Converged infrastructure and solid-state disk inside servers are viewed as important hardware components of future data center architectures.
  • Micro-servers: most enterprises are taking a wait-and-see attitude toward this new technology.

TechRepublic: What characterizes a cloud-ready data center?

Peter ffoulkes: A cloud-ready data center has three basic evolutionary stages that can be characterized as agile, automated, and adaptable. A cloud-ready data center could be termed 'Triple-A rated,' and very few meet that standard today.

The agile stage implies that the majority of the data center's compute resources are composed of a consolidated, standardized, and virtualized malleable pool of resources that can be provisioned and re-provisioned at will. This can be achieved by manual provisioning, sometimes described as 'pushing the button until the cloud fills up.'

The automated stage provides enhanced scalability and exact repeatability, mitigates human error, and can improve security and access management.

The adaptable stage employs policy-based governance that uses a rules-based system to manage service levels and potentially to respond automatically as demand and business requirements fluctuate over time.
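
To make the adaptable stage concrete, here is a minimal sketch of what such a rules-based scaling policy might look like. It is illustrative only, not drawn from the survey; the metric name, thresholds, and provision/deprovision actions are all hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ScalingRule:
        metric: str       # e.g., "cpu_utilization", expressed as a percentage
        threshold: float  # value that triggers the rule
        action: str       # "provision" or "deprovision"
        step: int         # number of virtual machines to add or remove

    # Two illustrative rules: scale out under load, scale in when idle.
    RULES = [
        ScalingRule("cpu_utilization", 85.0, "provision", 2),
        ScalingRule("cpu_utilization", 20.0, "deprovision", 1),
    ]

    def evaluate(metrics):
        """Return the actions a policy engine would take for the current metrics."""
        actions = []
        for rule in RULES:
            value = metrics.get(rule.metric)
            if value is None:
                continue
            if rule.action == "provision" and value > rule.threshold:
                actions.append("provision %d VM(s)" % rule.step)
            elif rule.action == "deprovision" and value < rule.threshold:
                actions.append("deprovision %d VM(s)" % rule.step)
        return actions

    print(evaluate({"cpu_utilization": 91.3}))  # ['provision 2 VM(s)']

In a real policy engine the actions would call into the data center's orchestration layer rather than print, but the shape is the same: rules that map observed metrics to provisioning decisions.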

TechRepublic: The survey results show that spend on x86 rack servers is decreasing. Is this technology becoming outdated?

Peter ffoulkes: It isn't really an issue of technology becoming outdated, since rack servers and blade servers are both typically designed and built with the latest technology components. It is more an issue of design, serviceability, and a match to the needs of modern data center philosophy. Rack servers are essentially self-contained individual entities with their own power supplies, cooling, and so on. This provides greater flexibility, but also introduces additional and sometimes unnecessary duplication of components. Blade servers rely upon a blade enclosure where the power distribution, cooling, and networking technologies are concentrated, which many view as a more efficient architecture. The trend towards consolidation and standardization of server architectures is tilting the balance in favor of blade architectures from a design center perspective, not from a technology obsolescence perspective.

TechRepublic: VMware and Microsoft fared well for management and automation of virtual data centers. What are their respective strengths in this software area?

Peter ffoulkes: Our surveys indicate that over 80 percent of the compute capacity in a typical cloud-ready data center is based on the x86 architecture. The majority of these systems are usually virtualized, meaning that a hypervisor sits between the hardware and operating systems that run application workloads. Most of our respondents indicate a virtualization goal of having between 85 percent and 95 percent of their x86 systems virtualized. These systems typically run some combination of workloads based on either Microsoft Windows or a Linux operating system, frequently Red Hat Enterprise Linux.
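
As a rough, hypothetical illustration of that virtualization goal (the inventory figures below are invented for the example, not survey data), the check is simple arithmetic:

    # Hypothetical estate: 1,200 x86 servers, 1,080 of them virtualized.
    total_x86 = 1200
    virtualized = 1080

    ratio = virtualized / total_x86 * 100
    print("Virtualized: %.0f%% of x86 systems" % ratio)  # Virtualized: 90% of x86 systems

    # Most respondents cited a goal of 85 to 95 percent virtualized.
    print("Within the typical goal" if 85 <= ratio <= 95 else "Outside the typical goal")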

VMware can make reasonable claims to have pioneered virtualization in the x86 server market and to be the established leader from a market share perspective. For a long while, VMware has also been widely acknowledged as the technology leader among hypervisor offerings. However, the market is maturing and the gap is closing, especially for basic technology. Microsoft is the most widely cited commercially supported alternative to VMware (as opposed to open source offerings), and with the Hyper-V implementation in Windows Server 2012 it is widely regarded as ready for enterprise deployment as an alternative to VMware's products.

From a technical standpoint there is a significant level of parity between the two vendors at this point, and acknowledgment that even where VMware retains some superiority, it is in areas that are not considered essential. As such, commercial considerations, such as installed base and existing investment, cost and cost of change, and flexibility with regard to both future costs and support for heterogeneous environments, are likely to be the differentiating factors as both vendors vie for position.

TechRepublic: VMware claims to have introduced the software-defined data center last year. To what extent do you agree? What challenges is VMware facing in the near future?

Peter ffoulkes: From a marketing perspective VMware can certainly lay claim to popularizing the term 'Software-Defined Data Center,' and it is an appropriate way to describe how the majority of enterprise data centers will be architected for the foreseeable future. The virtualization of servers, storage, and networks will certainly shift the balance of the strategic importance of data center technologies, but it would be a mistake to conclude that the hardware elements of a data center will no longer be important, or to assume that all hardware will become interchangeable, pure commodity components available from any white box type vendor. It is reasonable to assume that an increasing percentage of the critical operational and control functions of a data center will be implemented in the software layer rather than hardware or firmware technology components.

As the vision of 'the software-defined data center' plays out, VMware will need to compete on a more level playing field than it has historically done, where it has been widely regarded as offering good functionality within the confines of a 'VMware-centric world.' The data center of the future may well be software defined, but it is still likely to be a heterogeneous environment with multiple vendors, large and small, vying for position. To continue its growth, VMware will need to be successful in the virtualized data center market, the private cloud market, and the hybrid cloud market, which also implies a public cloud presence. While VMware is adjusting its approach to address these issues, it will face much stronger and better-equipped competition than it has encountered up to this stage of data center evolution.

TechRepublic: Why in your view are only 2 percent of respondents planning public cloud projects?

Peter ffoulkes: This is simply a readiness issue. The majority of organizations are still embroiled in server virtualization and data center automation and orchestration initiatives. Building out private cloud architectures and qualifying workloads in those environments, a precursor for many public cloud deployments, still lies in the future for most. Beyond technology readiness, a very large number of regulatory, compliance, and legal jurisdiction issues must be resolved before large enterprises can move mission-critical workloads and data into public cloud environments.

TechRepublic: Software leads hardware in your data center technology Heat Index, but converged infrastructure and solid-state disk inside servers are in the top 10 of the Index. What accounts for this?

Peter ffoulkes: While we are seeing a shift in mindset toward the software technologies required to build and orchestrate a software-defined data center, the hardware layer is still a critical foundation, and technologies such as solid-state disk and converged infrastructure are viewed as important aspects of future data center architectures. With that foundation in place, the software tools to manage and orchestrate a software-defined data center can be layered on top.

TechRepublic: Why are micro-servers not gaining much traction among the large and midsize enterprises that you surveyed?

Peter ffoulkes: Micro-servers are a fairly new approach to architecting servers, claimed to be more energy- and space-efficient than the current generation of converged infrastructure offerings. While these architectures hold much future promise, the currently available offerings do not yet match the workload flexibility of more traditional designs. The vast majority of our respondents are still implementing the current generation of converged infrastructure designs and have not yet had the time, resources, or incentive to seriously evaluate these new products and their applicability or advantages in their specific environments. Survey respondents are beginning to become aware of micro-servers, but most are taking a wait-and-see approach for now.

 

About Brian Taylor

Brian Taylor is a contributing writer for TechRepublic. He covers the tech trends, solutions, risks, and research that IT leaders need to know about, from startups to the enterprise. Technology is creating a new world, and he loves to report on it.
