Today's IT environment is a mix of technologies that are moving faster than the typical IT organization can keep up with. Virtualization expert Rick Vanover shares a few points for and against approaching virtualization broadly.
Frequently, I meet with people who are considering virtualizing servers in a number of different scenarios. Some are organizations that are just getting started with virtualization, which can be a big change in light of shrinking budgets and limited supporting infrastructure such as shared storage. Another situation is virtualizing the difficult servers. Early adopters picked the low-hanging fruit in the datacenter, but plenty of more challenging systems remain. These include servers with a large amount of local storage, servers with very limited downtime windows, or systems that are incredibly sensitive. The last situation is the organization that is on the brink of being 100% virtualized.
In all of these situations, the question comes up of whether everything should be virtualized. The short answer is, of course, “It depends.” In reality, though, only in a few situations is this really attainable. The enterprise, of course, will have a number of systems that should not be virtualized for any number of reasons. These can include vendor support requirements or an alternative arrangement, such as a clustering solution, that provides additional reliability.
While virtual machines are all fine and dandy for just about every workload, physical servers are still “okay.” I get stuck coming up with virtualization solutions for very small environments. In those situations, I would gravitate to a free virtualization solution such as the free edition of VMware ESXi (vSphere Hypervisor) or Microsoft Hyper-V. But even then, it may be easier to simply go with a physical server for a site that only needs one or two servers.
It is definitely nice to tout a statistic such as, “I am 100% virtualized.” But most organizations need some sort of additional qualifier. In my practice, I usually quantify server virtualization inventory in terms such as “this datacenter is 90% virtualized for all eligible workloads.” This means that systems that are not supported as virtual machines by the application vendor, or that are excluded for another justified reason, are not counted in the calculation.
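The eligible-workload math is worth making explicit, since it changes the headline number. Here is a minimal sketch of that calculation; the counts and the helper name `pct_virtualized` are invented for illustration:

```python
# Sketch: computing "percent virtualized for eligible workloads".
# All inventory counts below are hypothetical.

def pct_virtualized(virtual: int, physical_eligible: int) -> float:
    """Percentage of eligible workloads running as virtual machines.

    Servers the application vendor will not support as a VM (or that
    are excluded for another justified reason) are left out of both
    inputs entirely -- they never enter the denominator.
    """
    eligible = virtual + physical_eligible
    if eligible == 0:
        return 0.0
    return 100.0 * virtual / eligible

# Example datacenter: 45 VMs, 5 eligible physical servers, plus 8
# vendor-unsupported servers that are not counted at all.
print(pct_virtualized(virtual=45, physical_eligible=5))  # 90.0
```

Counting the 8 excluded servers in the denominator would drop the figure to roughly 78%, which is why the qualifier matters when comparing numbers between organizations.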
The issue I run into most is that our gatekeeping procedures seem to be slipping. I’m totally fine with the process changing, which is natural given the ever-increasing capabilities of virtual machines. But there are still solid reasons today why some servers should be installed natively on hardware, and we should ensure that this process is still followed.
Do you run into pressure to make everything virtual? What guidelines or boundaries do you put in place regarding virtual machine candidacy? Share your comments below.