
Ten common virtualization mistakes

Don't lose the benefits of virtualization by neglecting to plan carefully or properly maintain virtualized systems.

Virtualization has transformed data centers and IT practices, but there is still room for improvement. Below are ten common mistakes that companies make, and that you should avoid, in your virtualization strategy.

1. Lack of a clear de-provisioning process

Organizations are measured on how quickly they provision and deploy virtual systems, but few pay attention to what happens at the end of a virtualized system’s life cycle. This matters because many virtual systems are provisioned to meet temporary IT needs, such as testing a new application. When the need for a virtual system ends and no one de-provisions it, the system continues to consume storage and host capacity that could be used elsewhere. Eventually this practice grows into full-blown virtual server sprawl, which is just as damaging to data center efficiency as physical resource sprawl.
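One way to keep temporary systems from lingering is to record a planned retirement date at provisioning time and check it regularly. Below is a minimal sketch in Python, assuming a KVM host with the libvirt bindings installed; the VM names, dates, and the PLANNED_DECOMMISSION inventory are illustrative, not anything from the article.

```python
# Minimal sketch: flag VMs that have outlived their planned retirement date.
# Assumes a KVM host with the libvirt-python bindings installed; the inventory
# and VM names below are hypothetical.
from datetime import date

import libvirt

# Hand-maintained inventory: VM name -> planned decommission date
PLANNED_DECOMMISSION = {
    "qa-app-test01": date(2013, 6, 30),
    "payroll-upgrade-sandbox": date(2013, 9, 1),
}

def find_overdue_vms(uri="qemu:///system"):
    """Return (name, due date, state) for VMs still defined past their retirement date."""
    conn = libvirt.open(uri)
    try:
        overdue = []
        for dom in conn.listAllDomains():
            due = PLANNED_DECOMMISSION.get(dom.name())
            if due is not None and date.today() > due:
                state = "running" if dom.isActive() else "defined but powered off"
                overdue.append((dom.name(), due, state))
        return overdue
    finally:
        conn.close()

if __name__ == "__main__":
    for name, due, state in find_overdue_vms():
        print(f"{name}: due for de-provisioning since {due} ({state})")
```

A report like this can feed a regular review with the VM owners, so de-provisioning becomes a scheduled task rather than an afterthought.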

2. Not optimizing virtual systems

Because virtual systems share a common pool of resources, and those resources are not needed by every system at the same time, there is an opportunity to size virtual systems smaller than they would be on a dedicated physical server. The tendency in IT, however, is to take these systems as they were sized for the original physical server environment and redeploy them unchanged in the virtual environment. As a result, they consume far more resources than they need, and you lose some of the economy you originally gained with virtualization.
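A quick back-of-the-envelope calculation shows why right-sizing matters. The sketch below compares a "lift and shift" of six hypothetical workloads at their original physical sizing against sizing them to observed peak usage; all numbers are made up for illustration.

```python
# Illustrative numbers only: memory sized for dedicated physical servers
# versus memory sized to each workload's observed peak.
physical_allocation_gb = [16, 16, 8, 8, 8, 4]   # as sized for the old physical boxes
observed_peak_gb = [6, 5, 3, 4, 2, 2]           # what each workload actually peaks at

lift_and_shift = sum(physical_allocation_gb)    # redeployed unchanged
right_sized = sum(observed_peak_gb)             # sized to observed peaks

print(f"Unchanged sizing needs {lift_and_shift} GB of host RAM")
print(f"Right-sized VMs need about {right_sized} GB "
      f"({100 * (1 - right_sized / lift_and_shift):.0f}% less)")
```

Because a hypervisor lets you add memory and CPU later, starting small and growing on demand carries far less risk than it does on physical hardware.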

3. Failure to deploy virtualized systems where they are best suited

The gut instinct with virtualization is that you achieve the best results by placing virtual systems on x86 server hosts, but many sites have realized even greater economies of scale by moving virtual Linux or Windows systems to a mainframe. If a mainframe is part of your data center, it should also be part of your virtualization planning.

4. Not breaking down IT silos

You maximize virtualization in the data center by keeping all hosting options open for virtual systems, but if your host choices can range from x86 servers to mainframes, your IT staff must work together. In many data centers, staff cooperation is a bigger challenge than system integration, because mainframe and distributed x86 computing groups have often worked independently of each other. It’s difficult to break down these customary work “silos” after so many years of segregating systems and functions, but if you’re going to orchestrate an end-to-end virtualization strategy that fully exploits all of your hosting options, these groups must cooperate.

5. Lack of a strategic vision for virtualization beyond the data center

Many organizations focus their efforts on shrinking data center footprints and eliminating servers so they can realize immediate cost benefits from virtualization, but the effort shouldn’t stop there. Long term, do you plan to use virtualization only within your own data center, or will you peel off certain applications to run with an IaaS (infrastructure as a service) provider in the cloud? Managers responsible for virtualization should consider the end-to-end scope of everything they are virtualizing, whether virtualization occurs within the data center or through outsourcing. If they don’t consider the virtual architecture in its entirety, they will find it difficult to accurately assess total cost and application performance.

6. Manual provisioning scripts that introduce errors and threaten operating system support agreements

Most sites create and then reuse manual scripts for virtual system provisioning, modifying the scripts as needed for their particular IT environments. This reduces work because programmers can start from “skeleton” scripts that require only a few modifications. Unfortunately, manual modifications can also introduce errors. Worse, if a virtual operating system is modified too heavily, it can drift so far from the vendor’s original distribution that the vendor will refuse to support it. The solution is automated script generation for system provisioning that both checks for errors and ensures the resulting virtual OS remains compatible with the vendor’s version. Sites moving from manual scripting to automated script generation also report productivity gains.
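As a rough illustration of what automated generation buys you, the sketch below validates a provisioning request against an approved image list and resource limits before producing a definition. The image names, limits, and field names are hypothetical; real tooling would hand the validated definition to your hypervisor or configuration management system.

```python
# Sketch: generate a provisioning definition from a validated request instead of
# hand-editing skeleton scripts. Every request is checked before anything is built.
ALLOWED_OS_IMAGES = {"rhel-6.4-base", "sles-11sp2-base", "win2008r2-base"}  # vendor-supported images only
LIMITS = {"vcpus": (1, 8), "memory_gb": (1, 64), "disk_gb": (10, 500)}

def build_vm_definition(name, os_image, vcpus, memory_gb, disk_gb):
    """Validate a provisioning request and return a definition dict, or raise ValueError."""
    errors = []
    if os_image not in ALLOWED_OS_IMAGES:
        errors.append(f"unsupported OS image '{os_image}'")
    for field, value in (("vcpus", vcpus), ("memory_gb", memory_gb), ("disk_gb", disk_gb)):
        low, high = LIMITS[field]
        if not low <= value <= high:
            errors.append(f"{field}={value} outside allowed range {low}-{high}")
    if errors:
        raise ValueError("; ".join(errors))
    return {"name": name, "os_image": os_image, "vcpus": vcpus,
            "memory_gb": memory_gb, "disk_gb": disk_gb}

# Usage: a request for an unsupported image fails here, not in production
print(build_vm_definition("app-test01", "rhel-6.4-base", vcpus=2, memory_gb=4, disk_gb=40))
```

Because only approved, vendor-supported images can pass validation, the resulting systems stay within the OS vendor’s support agreement.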

7. Lack of ROI (return on investment) follow-through

The most active ROI monitoring for virtualization occurs after the first round of funding and installation of virtual solutions. One reason is that it’s relatively easy to show substantial initial gains in equipment and energy cost savings as you wheel servers out of the data center and reduce data center square footage. However, as virtual server sprawl grows, some of these initial gains are lost. IT should have a long-term, continuous way of monitoring its ROI from virtualization so it doesn’t lose the gains it initially achieved.
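A simple running tally makes the erosion visible. The sketch below assumes illustrative per-server and per-VM monthly costs and shows how net savings shrink when the VM count keeps climbing while the number of retired physical servers stays flat.

```python
# Sketch: track virtualization savings over time so sprawl-driven erosion shows up.
# All figures are illustrative assumptions, not data from the article.
monthly_cost_per_physical_server = 400   # power, cooling, floor space, maintenance
monthly_cost_per_vm = 60                 # host capacity, storage, licensing share

quarters = [
    # (quarter, physical servers retired, VMs running)
    ("Q1", 80, 100),
    ("Q2", 80, 140),
    ("Q3", 80, 190),   # sprawl: VM count grows, retired-server count does not
    ("Q4", 80, 260),
]

for label, retired, vms in quarters:
    savings = retired * monthly_cost_per_physical_server - vms * monthly_cost_per_vm
    print(f"{label}: net monthly savings ${savings:,}")
```

Reviewing a figure like this each quarter keeps the original business case honest long after the first servers leave the floor.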

8. Inflexible practices from application developers and vendors

Both application developers and software vendors are accustomed to having their own dedicated physical servers. Developers think of their servers as personal work resources, and third-party application providers often try to sell a turnkey solution that includes a dedicated physical server for their software. These proprietary tendencies create impasses for virtualization.

9. Forgetting to include virtual assets in your asset management

Software and data center practices for asset management tend to focus on physical resources, but virtual resources need lifecycle management as well.
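At a minimum, each virtual machine should carry the same lifecycle metadata a physical asset would. The sketch below is one hypothetical shape for such a record; the field names are illustrative, not drawn from any particular asset management product.

```python
# Sketch: treat a VM as a managed asset with lifecycle fields, just like hardware.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class VirtualAsset:
    name: str
    owner: str                      # person or team accountable for the VM
    purpose: str                    # why it exists, e.g. "UAT for payroll upgrade"
    host: str                       # hypervisor or cluster it runs on
    provisioned_on: date
    review_by: date                 # date the asset must be re-justified
    decommissioned_on: Optional[date] = None

    def is_overdue_for_review(self, today: date) -> bool:
        return self.decommissioned_on is None and today > self.review_by

# Usage: a record that should trigger a review
vm = VirtualAsset(
    name="qa-app-test01", owner="App QA team", purpose="UAT for payroll upgrade",
    host="cluster-02", provisioned_on=date(2013, 1, 15), review_by=date(2013, 7, 1),
)
print(vm.is_overdue_for_review(date(2013, 10, 1)))  # True
```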

10. Failure to understand the limits of virtualization

Virtualization isn’t a viable solution for every system. In some cases, a system requires a dedicated physical server (or even a cluster of servers). A good example is high-performance computing (HPC) used for big data analytics: these servers must process data in parallel and do not perform well in a virtualized deployment.



About

Mary E. Shacklett is president of Transworld Data, a technology research and market development firm. Prior to founding the company, Mary was Senior Vice President of Marketing and Technology at TCCU, Inc., a financial services firm; Vice President o...

5 comments
fremonty

Does anyone have additional details on #2?  I don't understand how a VM can run more efficiently than a physical server.  If I size a physical server with 4GB of ram because I know the OS likes 1GB and my service likes 3GB, how can a VM run with less than 4GB?  Or does #2 only apply to the processor itself?

timothy.retford

I suppose my #11 for the list could be loosely associated with #9 above, but it's so critical that it ought to be a top 10 in its own right:  failing to take into account virtualization licensing issues.  All too often, people without an eye for licensing deploy things because it's technically possible, but without consideration for the potential risks they may be opening the organization up to. 

Example: you want to set up an Oracle database on a virtual server? Great... no problems, technically. But from a licensing perspective, if you're using the wrong virtualization technology, which in Oracle terms means anything non-Oracle, you're going to be penalized by having to license the whole physical server rather than just the virtual server the database is set up on. And no one will tell you this until the Oracle license compliance team comes along and charges full list price plus two years of back maintenance. Ouch!

Licensing and Software Asset Management is where IT meets the business in financial terms:  if you're misusing virtualization from a licensing perspective, you're opening the organization up to risks that could far exceed any savings you've realized in hardware.

Phil.A

Don't forget "using under-powered hardware" - I've seen initial attempts at virtualisation on 5-year-old hardware with limited memory, trying to run each VM with 512MB or so and only a few GB of hard drive space. Unsurprisingly, it ran horribly and each environment kept producing issues.

mark

@fremonty 

On #2

Since resources can be added to a guest, you can start off with fewer resources and then add more as the application demands them (the database grows, more users arrive, etc., all of which require more resources). On a physical server you would size the hardware to support this future growth but might not use it for months or years (possibly never, as many servers are underutilized for their whole life cycle).

In a virtual environment I provision guests with low RAM and CPU, see what the impact on the application is, and adjust from there. Application owners often over-request resources because they know that on physical servers it is next to impossible to get more if an upgrade turns out not to be powerful enough.

I hope this helps.