Virtualization has probably affected every IT environment to one extent or another. Whether you do casual virtualization for test environments or complete virtualization for all systems, there's no one-size-fits-all solution. Here's a rundown of the things you need to know about the virtualization space.
#1: Virtualization is more than just VMware
Sure, VMware is the current leader, but it has company in the server virtualization space as well as in desktop virtualization. The newest player is Citrix XenServer. The XenServer Enterprise platform is quickly gaining features and management offerings that rival those of VMware Virtual Infrastructure 3 (VI3), based on ESX 3.5 and VirtualCenter 2.5. Microsoft's Hyper-V hypervisor will also be a player when Windows Server 2008 is released; it will provide an offering similar to VI3 from the Microsoft perspective. The Hyper-V virtualization platform on Windows will also offer some desktop virtualization options that supplement the server virtualization platform.
#2: Storage and networking will be your biggest pain points
A server virtualization implementation of any scale requires careful planning in the areas of storage and networking. In a server virtualization strategy, the migration from local storage to shared centralized storage takes adequate sizing and planning. Further, administrators will be challenged to rethink how virtual servers are provisioned. For example, in VMware ESX virtual server environments, the full virtual hard disk size is allocated when the virtual machine is created. Therefore, if a Windows virtual server has 50 GB assigned to the virtual hard drive in the virtual machine inventory yet uses only 15 GB on the guest file system, the other 35 GB is still claimed by that system on the storage available to ESX.
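The overallocation math above adds up quickly across an inventory. Here's a minimal sketch of that sizing exercise, using hypothetical server names and numbers (only the 50 GB / 15 GB example comes from the text; the rest are illustrative):

```python
# Hypothetical inventory: (allocated GB, used GB) per virtual disk.
# With full up-front allocation, the entire allocated size is claimed
# on the ESX storage regardless of what the guest actually uses.
virtual_disks = {
    "win-file-srv": (50, 15),   # the example from the article
    "win-print-srv": (40, 12),  # illustrative
    "linux-web-srv": (30, 20),  # illustrative
}

allocated = sum(alloc for alloc, used in virtual_disks.values())
used = sum(used for alloc, used in virtual_disks.values())
stranded = allocated - used

print(f"Allocated on storage: {allocated} GB")
print(f"Actually in use:      {used} GB")
print(f"Stranded space:       {stranded} GB ({stranded / allocated:.0%})")
```

Even this tiny three-server inventory strands more than half its allocated space, which is why the storage administrators in the next paragraph push back on generous sizing requests.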
In larger implementations, the virtualization administrator is often not in charge of the storage. Many storage administrators will identify the base requirement, add a small buffer (maybe 10% to 15%), and add more later only as required. This is an inconvenient shift for most administrators but an efficient use of the central storage systems. Storage area network (SAN) systems, such as the IBM SAN Volume Controller and EMC ControlCenter SAN Manager, are expensive, and storage administrators are challenged to use these resources as efficiently as possible.
Networking virtual environments poses another set of issues. When you move to a virtualized server environment, management strategies must adapt to reflect additional connectivity requirements, high availability, and virtual switching. Planning the cabling requirements, virtual LAN (VLAN) assignments, and redundancy is a step that, in my experience, can always use another pass to ensure all connectivity requirements are met in a redundant fashion.
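That "another pass" over the cabling plan can be partly mechanized. Here's a minimal sketch of such a redundancy check; the virtual switch names, NIC labels, and topology are all hypothetical, and a real plan would also verify that uplinks land on separate physical switches:

```python
# Hypothetical cabling plan: each virtual switch and the physical NICs
# (uplinks) assigned to it. A redundant design needs at least two
# uplinks per virtual switch.
uplink_plan = {
    "vSwitch0 (management)": ["vmnic0", "vmnic2"],
    "vSwitch1 (VM traffic)": ["vmnic1", "vmnic3"],
    "vSwitch2 (storage)": ["vmnic4"],  # single point of failure
}

# Flag any virtual switch with fewer than two uplinks.
non_redundant = [name for name, nics in uplink_plan.items() if len(nics) < 2]
for name in non_redundant:
    print(f"WARNING: {name} has only one uplink -- no NIC redundancy")
```

Running a check like this against the planned (not as-built) configuration catches the single-uplink oversights before the cables are pulled.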
#3: Don’t underestimate the value of the free tools
Free virtualization products, like VMware Server, Citrix XenServer Express, and Microsoft Virtual Server 2005, provide a great way to get exposure to virtualized environments for basic testing and performance benchmarking. Another popular technique is to use free tools for remote systems that can’t be run centrally. Having a single physical server with a free virtualization product running a small number of virtual machines is a solid strategy for situations where a robust virtualization solution would be impractical.
The free products generally lack the management tools that accompany the full enterprise suites; however, tools can be purchased to provide additional management options for the free products. For example, consider VirtualCenter for managing the free VMware Server virtualization engine.
#4: Management tools are key
Basic virtualization technology, in my opinion, is becoming a commodity that will eventually be more dependent on hardware resources than on hypervisor technology. The management tools will be the driving force in virtualization technology decisions. The packages that offer the most options in storage and networking management, machine migration, high availability, and efficiency configuration will be the ones that win out.
#5: The operating system may go away
Virtualization platforms may not even have a general-purpose operating system in the foreseeable future; in fact, this is already here. VMware's ESX 3i offers the same functionality as the fully installed ESX 3 but within a 32-MB footprint, and it will soon be available as an integrated option within server systems. This reduces the risk of the installed operating system introducing security issues and channels all configuration of the host system through the management package.
#6: Virtual appliances rock!
Virtual appliances (VAs) occupy a new space that has emerged as virtualization has become more popular. The VA model is simply a purpose-built virtual machine that provides a canned set of functionality from the start. VAs are available to provide DHCP roles, handle chargeback for virtual environments, act as wiki servers for intranets, and fulfill many other purposes. VMware's Virtual Appliance Marketplace will have some company, as current VAs are adding support for Citrix XenServer and other virtualization platforms.
Many VAs are available for free, built on open source applications and free operating systems. The VA model can be a big aid in bringing specific functionality to your infrastructure without additional licensing or hardware costs. Many VAs also run on the free virtualization products, so if you wish to conserve capacity, you don't have to tie up expensive hardware resources on your enterprise virtualization system.
#7: Virtualization can benefit the desktop
Do you have a large number of like-configured desktops? If so, you may want to consider a desktop virtualization solution. These solutions allow administrators a new level of granular control over the installed inventory, permitted hardware accessibility, and network connectivity. Desktop virtualization also makes reverting to the base image a snap. No longer will a desk-side visit be required to re-image and re-personalize a system.
Some of the desktop virtualization packages also manage storage very efficiently. Imagine providing virtual desktops to 1,000 computers: instead of hosting a full image of the base install for each of those computers, the virtualization package stores only the changes. For most situations, that is simply the profile and current usage data. In this scenario, the back-end storage requirement for 1,000 virtualized desktops is very small, considering the number of systems being hosted.
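The savings from storing only the changes are easy to quantify. Here's a minimal back-of-the-envelope sketch; the 1,000-desktop count comes from the text, while the base image and per-user delta sizes are hypothetical assumptions:

```python
desktops = 1000
base_image_gb = 10        # assumed size of the shared base install
delta_per_user_gb = 0.5   # assumed profile + current usage data per desktop

# Full clones: every desktop carries its own copy of the base image.
full_clones = desktops * base_image_gb

# Delta model: one shared base image plus a small delta per desktop.
shared_base = base_image_gb + desktops * delta_per_user_gb

print(f"Full clone per desktop: {full_clones:,.0f} GB")
print(f"Shared base + deltas:   {shared_base:,.0f} GB")
```

Under these assumptions the delta model needs roughly a twentieth of the storage of full per-desktop images, which is the efficiency the paragraph above describes.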
#8: Take advantage of application virtualization
Application virtualization isn't new to you if you've used products like Citrix MetaFrame and Presentation Server. But additional technologies are now available that virtualize applications beyond the simple presentation mode. The key difference between application virtualization and other virtualization strategies is that all processing for the encapsulated application happens on the client; there's no back-end server providing processor resources for the virtualized application. Instead, policies define which applications run on the clients; the application package is delivered to the client, and that environment is virtualized locally. In this fashion, no central collection of hardware resources is needed to deliver the application.
#9: Beware of virtual machine sprawl
The growing popularity of virtualization may introduce a new phenomenon: virtual machine sprawl. In a way, it's accelerated by the wonderful tools available to help organizations migrate to virtual environments. Physical-to-virtual (P2V) conversion tools let administrators move servers to the virtual environment easily, and it may become tempting to skip the decision process of which systems need to go and which need to stay. The other half of this situation is that if reviewing which physical systems need operating system improvements is deferred until after the migration, those tasks may never be completed.
#10: Many things will require rethinking
Depending on the scale of your virtualization implementation, some elements of your infrastructure will need to be revisited. Backup and restore, storage management, network connectivity, and the server build process will all need addressing before the move to the virtual world. All hassles aside, virtualization is clearly a positive direction for many situations: it uses hardware efficiently, helps meet disaster recovery requirements, saves on server hardware, and increases the level of central management.
For small shops, the approach is different than for large enterprises. How has your organization approached virtualization? What have you learned in taking the virtualization plunge?