Virtualization has transformed data centers and IT practices, but there is room for improvement. Below are ten common mistakes companies make that you should avoid in your virtualization strategy.
1. Lack of a clear de-provisioning process
Organizations are measured on how quickly they provision and deploy virtual systems, but few pay attention to what happens at the end of a virtualized system’s life cycle. This matters because many virtual systems are provisioned to meet temporary IT needs, such as testing a new application. Unfortunately, when the need for a virtual system ends and no one de-provisions it, the system keeps consuming resources that could be used elsewhere. Eventually, this practice escalates into full-blown virtual server sprawl that is just as damaging to data center efficiency as physical resource sprawl.
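One lightweight way to build a de-provisioning habit is to tag every virtual machine with an owner and an expiry date at provisioning time, then sweep the inventory periodically for candidates to reclaim. The sketch below is a minimal illustration under that assumption; the hard-coded inventory and field names are hypothetical stand-ins for whatever your hypervisor or CMDB actually exports.

```python
from datetime import date, datetime

# Hypothetical inventory export: in practice this would come from your
# hypervisor management console or CMDB, not a hard-coded list.
inventory = [
    {"name": "test-app-01", "owner": "qa-team", "expires": "2024-03-01"},
    {"name": "erp-prod-02", "owner": "ops", "expires": "2026-01-01"},
]

def expired_vms(vms, today=None):
    """Return VMs whose expiry date has passed and that should be
    reviewed for de-provisioning."""
    today = today or date.today()
    return [
        vm for vm in vms
        if datetime.strptime(vm["expires"], "%Y-%m-%d").date() < today
    ]

for vm in expired_vms(inventory):
    # In a real workflow this would open a ticket or notify the owner,
    # not just print a line.
    print(f"{vm['name']} (owner: {vm['owner']}) is past its expiry date")
```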
2. Not optimizing virtual systems
Because virtual systems share a common pool of resources, and not every system needs those resources at the same time, there is an opportunity to size virtual systems smaller than their dedicated physical servers. The tendency in IT, however, is to take these systems as they were sized for the original physical server environment and redeploy them unchanged in the virtual environment. As a result, they consume far more resources than they need, and you lose some of the economy you originally gained with virtualization.
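A simple way to approach right-sizing is to start from observed peak utilization rather than the original server's specification, and then add a modest headroom margin. The sketch below is purely illustrative: the peak figures and the 25 percent headroom are assumptions, not recommendations for any particular workload.

```python
import math

def right_size(peak_usage, headroom=0.25, minimum=1):
    """Size an allocation from observed peak usage plus headroom,
    rather than copying the original physical server's spec."""
    return max(minimum, math.ceil(peak_usage * (1 + headroom)))

# Hypothetical workload: originally deployed on an 8-vCPU / 32 GB server,
# but monitoring shows it peaks at 3 vCPUs and 9 GB of memory.
vcpus = right_size(peak_usage=3)      # -> 4 vCPUs instead of 8
memory_gb = right_size(peak_usage=9)  # -> 12 GB instead of 32

print(f"Suggested allocation: {vcpus} vCPUs, {memory_gb} GB RAM")
```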
3. Failure to deploy virtualized systems where they are best suited
The gut instinct with virtualization is that you achieve the best results by placing virtual systems on x86 server hosts, but many sites have realized even greater economies of scale by moving virtual Linux or Windows systems to a mainframe. If a mainframe is part of your data center, it should also be part of your virtualization planning.
4. Not breaking down IT silos
You maximize virtualization in the data center by leaving all hosting options open for virtual systems, but if your host choices range from x86 servers to mainframes, your IT staff must work together. In many data centers, staff cooperation is a bigger challenge than system integration, because mainframe and distributed x86 computing groups have often worked independently of each other. It’s difficult to break down these customary work “silos” after so many years of segregating systems and functions, but if you’re going to orchestrate an end-to-end virtualization strategy that fully exploits all of your hosting options, these groups must work together.
5. Lack of a strategic vision for virtualization beyond the data center
Many organizations focus their efforts on shrinking data center footprints and eliminating servers so they can realize immediate cost benefits from virtualization, but the effort shouldn’t stop there. Long term, do you plan to use virtualization only within your own data center, or will you peel off certain applications to run with an IaaS (infrastructure as a service) provider in the cloud? Managers responsible for virtualization should consider the end-to-end scope of everything they are virtualizing, whether it happens within the data center or through outsourcing. If they don’t consider the virtual architecture in its entirety, they will find it difficult to accurately assess total cost and application performance.
6. Manual provisioning scripts that introduce errors and threaten operating system support agreements
A majority of sites create and then reuse manual scripts for virtual system provisioning, modifying the scripts as needed for their particular IT environments. This reduces work because programmers can start from “skeleton” scripts that require only a few modifications. Unfortunately, manual modifications can also introduce errors. Worse yet, an over-modified virtual operating system can drift so far from the vendor’s original distribution that the vendor will refuse to support it. The solution is automated scripting for system provisioning that both checks for errors and ensures the resulting virtual OS remains compatible with the vendor’s version. Sites moving from manual scripting to automated script generation also report productivity gains.
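In practice, this usually means generating provisioning definitions from a template and validating them before deployment instead of hand-editing scripts. The sketch below is a simplified illustration of that idea: the "vendor baseline" and the checks are assumptions standing in for whatever your image-build or configuration-management tooling actually enforces.

```python
# Hypothetical vendor baseline for a supported OS image: the package set
# and kernel that the vendor's support agreement covers.
VENDOR_BASELINE = {
    "os": "ExampleLinux 9.2",
    "kernel": "5.14",
    "required_packages": {"openssh-server", "chrony", "auditd"},
}

def validate_spec(spec):
    """Check a generated provisioning spec against the vendor baseline
    and return a list of problems instead of silently deploying."""
    errors = []
    if spec.get("os") != VENDOR_BASELINE["os"]:
        errors.append(f"OS {spec.get('os')} differs from the supported baseline")
    if spec.get("kernel") != VENDOR_BASELINE["kernel"]:
        errors.append("custom kernel would void vendor support")
    missing = VENDOR_BASELINE["required_packages"] - set(spec.get("packages", []))
    if missing:
        errors.append(f"missing required packages: {sorted(missing)}")
    return errors

# Example generated spec with a deliberate drift from the baseline.
spec = {"os": "ExampleLinux 9.2", "kernel": "5.19-custom",
        "packages": ["openssh-server", "chrony"]}

for problem in validate_spec(spec):
    print("BLOCKED:", problem)
```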
7. Lack of ROI (return on investment) follow-through
The most active ROI monitoring for virtualization occurs right after the first round of funding and installation of virtual solutions. One reason is that it’s relatively easy to show substantial initial gains in equipment and energy cost savings as you wheel servers out of the data center and reduce data center square footage. However, as virtual server sprawl grows, some of those initial gains are lost. IT should have a long-term, continuous way of monitoring its ROI from virtualization so it doesn’t lose the gains it initially achieved.
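One way to keep ROI visible over time is to recompute it each reporting period from current numbers, including the cost of any sprawl, rather than reusing the original project's business case. The figures in the sketch below are purely illustrative assumptions; the point is that the calculation is repeated every period.

```python
def period_roi(savings, virtualization_costs, sprawl_cost):
    """Net virtualization ROI for one reporting period.
    Sprawl (idle but still-running VMs) erodes the savings."""
    net_gain = savings - virtualization_costs - sprawl_cost
    return net_gain / virtualization_costs

# Illustrative quarterly figures (all assumptions, in dollars): savings from
# retired hardware, power, and floor space versus the cost of licenses,
# shared storage, and admin time, plus the cost of idle virtual machines.
quarters = [
    {"savings": 120_000, "costs": 40_000, "sprawl": 0},
    {"savings": 120_000, "costs": 40_000, "sprawl": 15_000},
    {"savings": 120_000, "costs": 40_000, "sprawl": 35_000},
]

for i, q in enumerate(quarters, start=1):
    roi = period_roi(q["savings"], q["costs"], q["sprawl"])
    print(f"Q{i}: ROI = {roi:.0%}")  # the gain shrinks as sprawl grows
```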
8. Inflexible practices from application developers and vendors
Both application developers and software vendors are accustomed to having their own dedicated physical servers. Developers think of their servers as personal work resources, and third-party application providers often try to sell a turnkey solution that includes a dedicated physical server for their software. These proprietary tendencies create obstacles to virtualization.
9. Forgetting to include virtual assets in your asset management
Asset management software and data center asset management practices tend to focus on physical resources, but virtual resources need lifecycle management, too.
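One lightweight way to bring virtual resources into the same discipline is to track them with the same kind of lifecycle record used for physical assets. The sketch below is a minimal, hypothetical data model; the fields and states are assumptions, not the schema of any particular asset-management product.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleState(Enum):
    REQUESTED = "requested"
    PROVISIONED = "provisioned"
    IN_USE = "in_use"
    RETIRED = "retired"  # de-provisioned and resources reclaimed

@dataclass
class VirtualAsset:
    """A virtual machine tracked like any other asset: it has an owner,
    a cost, licensed software, and an end of life."""
    name: str
    owner: str
    host: str
    licensed_software: list[str]
    monthly_cost: float
    state: LifecycleState = LifecycleState.REQUESTED

vm = VirtualAsset(
    name="test-app-01",
    owner="qa-team",
    host="esx-cluster-03",
    licensed_software=["ExampleDB Standard"],
    monthly_cost=180.0,
    state=LifecycleState.IN_USE,
)
print(vm)
```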
10. Failure to understand the limits of virtualization
Virtualization isn’t a viable solution for every system. In some cases, a system requires a dedicated physical server (or even a cluster of servers). A good example is high-performance computing (HPC) used for big data analytics. These servers must process data in parallel and do not perform well in a virtual deployment.