
10 MORE things you should know about virtualization


Virtualization has probably affected every IT environment to one extent or another. Whether you do casual virtualization for test environments or complete virtualization for all systems, there's no one-size-fits-all solution. Here's a rundown of the things you need to know about the whole virtualization space.


#1: Virtualization is more than just VMware

Sure, VMware is the current leader, but it has company in the server virtualization space as well as in desktop virtualization. The newest player is Citrix XenServer. The XenServer Enterprise platform is quickly gaining features and management offerings rivaling those of VMware Virtual Infrastructure 3 (VI3), based on ESX 3.5 and VirtualCenter 2.5. The Hyper-V hypervisor will also be a player when Windows Server 2008 is released, providing a similar offering to VI3 from the Microsoft perspective. The Hyper-V platform on Windows will also offer some desktop virtualization options that supplement the server virtualization platform.

#2: Storage and networking will be your biggest pain points

A server virtualization implementation of any scale will require careful planning in the areas of storage and networking. In a server virtualization strategy, the migration from local storage to shared centralized storage takes adequate sizing and planning. Further, administrators will be challenged to rethink the provisioning of virtual servers. For example, in VMware ESX virtual server environments, the virtual hard disk is allocated in full when the virtual machine is created. Therefore, if a Windows virtual server has 50 GB assigned to its virtual hard drive in the virtual machine inventory yet uses only 15 GB on the virtual file system, the other 35 GB is still claimed by that system on the storage available to ESX.
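
To put a number on that effect, here is a minimal sketch in Python, assuming a hypothetical inventory of thick-provisioned virtual disks (the VM names and sizes are made up for illustration, not pulled from any ESX API):

```python
# Hypothetical inventory of thick-provisioned virtual disks (GB).
# All names and figures are illustrative; nothing here queries ESX.
virtual_disks = [
    {"vm": "winsrv01", "allocated_gb": 50, "used_gb": 15},
    {"vm": "winsrv02", "allocated_gb": 80, "used_gb": 22},
    {"vm": "winsrv03", "allocated_gb": 40, "used_gb": 31},
]

allocated = sum(d["allocated_gb"] for d in virtual_disks)
used = sum(d["used_gb"] for d in virtual_disks)

# With thick provisioning, the full allocation is claimed on shared
# storage at creation time, regardless of what the guest actually uses.
print(f"Claimed on shared storage: {allocated} GB")
print(f"Used inside the guests:    {used} GB")
print(f"Claimed but idle:          {allocated - used} GB")
```

Run against a real inventory, a tally like this shows how much shared storage is claimed but sitting idle -- exactly the gap the storage team will ask you to justify.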

For larger implementations, the virtualization administrator is typically not in charge of the storage. Many storage administrators will identify the base requirement, add a small amount of headroom (maybe 10% to 15%), and then add more later only as required. This is an inconvenient shift for most administrators but an efficient use of the central storage systems. Storage area network (SAN) systems and their management tools, such as the IBM SAN Volume Controller and EMC ControlCenter SAN Manager, are expensive, and storage administrators are challenged to use these resources in the most efficient manner possible.

Networking virtual environments poses another set of issues. When you move to a virtualized server environment, management strategies must adapt to additional connectivity requirements, high availability, and virtual switching. Planning cabling requirements, virtual LAN (VLAN) assignments, and redundancy is a step that, in my experience, can always use another pass to ensure every connection is provided in a redundant fashion.
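
That extra pass can be as simple as checking that no port group depends on a single physical uplink. Here is a small sketch, again in Python, with a made-up virtual switch layout (the port group and vmnic names are assumptions, not read from any host):

```python
# Hypothetical virtual switch layout: each port group (VLAN) and the
# physical uplinks assigned to it. Names are illustrative only.
port_groups = {
    "Production (VLAN 10)": ["vmnic0", "vmnic2"],
    "vMotion (VLAN 20)":    ["vmnic1"],
    "Management (VLAN 30)": ["vmnic1", "vmnic3"],
}

# Flag any port group that would lose connectivity if a single
# uplink, cable, or upstream switch port failed.
for name, uplinks in port_groups.items():
    if len(set(uplinks)) < 2:
        print(f"WARNING: {name} has no redundant uplink {uplinks}")
    else:
        print(f"OK:      {name} is carried by {len(set(uplinks))} uplinks")
```

Anything flagged here is one cable, NIC, or switch port away from an outage for every VM on that network.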

#3: Don't underestimate the value of the free tools

Free virtualization products, like VMware Server, Citrix XenServer Express, and Microsoft Virtual Server 2005, provide a great way to get exposure to virtualized environments for basic testing and performance benchmarking. Another popular technique is to use free tools for remote systems that can't be run centrally. Having a single physical server with a free virtualization product running a small number of virtual machines is a solid strategy for situations where a robust virtualization solution would be impractical.

The free products generally lack the management tools that accompany the full enterprise suites; however, tools can be purchased to provide additional management options for them. For example, consider VirtualCenter for VMware Server to manage the free virtualization engine.

#4: Management tools are key

Basic virtualization technology, in my opinion, is becoming a commodity that will eventually depend more on hardware resources than on hypervisor technology. The management tools will be the driving force in virtualization technology decisions. The packages that offer the most options for storage and networking management, machine migration, high availability, and efficient configuration will be the ones that win out.

#5: The operating system may go away

Virtualization platforms may not even have an operating system in the foreseeable future; in fact, this is already happening. VMware's ESX 3i offers the same functionality as the fully installed ESX 3 but within a 32-MB footprint, and it will soon be available as an integrated option within server systems. This reduces the risk of the installed operating system introducing security issues and channels all configuration of the host system through the management package.

#6: Virtual appliances rock!

Virtual appliances (VAs) make up a new space that has emerged as virtualization has become more popular. The VA model is simply a purpose-built virtual machine that provides a canned set of functionality from the start. VAs are available to provide DHCP roles, deliver chargeback for virtual environments, act as wiki servers for intranets, and fulfill many other purposes. VMware's Virtual Appliance Marketplace will have some company, as current VAs are adding support for Citrix XenServer and other virtualization platforms.

Many virtual appliances are available for free, built on open source applications and free operating systems. The VA model can be a big aid in bringing specific functionality to your infrastructure without additional licensing or hardware costs. Many VAs also work on the free virtualization products, so you don't have to tie up expensive hardware resources on your enterprise virtualization system if you'd rather conserve that capacity.

#7: Virtualization can benefit the desktop

Do you have a large number of like-configured desktops? If so, you may want to consider a desktop virtualization solution. These solutions give administrators a new level of granular control over the installed inventory, permitted hardware access, and network connectivity. Desktop virtualization also makes reversion to the base image a snap. No longer is a trip to the desk required to re-image and re-personalize a system.

Some of the desktop virtualization packages also manage storage very efficiently. Imagine providing a virtual desktop to 1,000 computers: instead of hosting a full image of the base install for each of those computers, the virtualization package stores only the changes. For most situations, that is simply the profile and current usage data. In this scenario, the back-end storage requirement for 1,000 virtualized desktops is very small, considering the number of systems being hosted.
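
The arithmetic makes the point. Here is a rough Python sketch with assumed figures for the base image and per-desktop delta (treat every number as an illustration, not a vendor specification):

```python
# Back-of-the-envelope comparison: a full image per desktop versus one
# shared base image plus per-desktop deltas. All figures are assumptions.
desktops = 1000
full_image_gb = 20    # assumed size of a complete base install
delta_gb = 0.5        # assumed profile and current usage data per desktop

full_copies = desktops * full_image_gb
shared_base = full_image_gb + desktops * delta_gb

print(f"One full image per desktop: {full_copies:,.0f} GB")
print(f"Shared base plus deltas:    {shared_base:,.0f} GB")
print(f"Back-end storage avoided:   {full_copies - shared_base:,.0f} GB")
```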

#8: Take advantage of application virtualization

Application virtualization isn't new to you if you've used products like Citrix MetaFrame and Presentation Server before. But additional technologies are now available that virtualize applications outside of the simple presentation mode. The key difference between application virtualization and other virtualization strategies is that the encapsulated application runs entirely on the client, from a processing standpoint. There is no back-end server providing the processor resources for the virtualized application. Instead, policies define which applications are to run on which clients; the application package is delivered to the client, and that environment is virtualized locally. In this fashion, no central collection of hardware resources is needed to deliver the application.
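
Conceptually, the policy layer is just a mapping from user groups to the packages a client is entitled to run locally. A minimal sketch of that idea follows, with hypothetical group and package names (no real application virtualization product API is used here):

```python
# Hypothetical policy: which virtualized application packages each user
# group may run. The packages execute on the client's own CPU; the
# server only decides what to deliver.
policies = {
    "accounting": ["office-2007.pkg", "erp-client.pkg"],
    "engineering": ["cad-suite.pkg", "office-2007.pkg"],
}

def packages_for(user_groups):
    """Return the sorted set of packages to deliver to a client."""
    entitled = set()
    for group in user_groups:
        entitled.update(policies.get(group, []))
    return sorted(entitled)

print(packages_for(["accounting"]))           # runs locally on that client
print(packages_for(["engineering", "hr"]))    # unknown groups add nothing
```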

#9: Beware of virtual machine sprawl

The growing popularity of virtualization may introduce a new phenomenon -- virtual machine sprawl. In a way, this is accelerated by the wonderful tools available to help organizations migrate to virtual environments. Physical-to-virtual (P2V) conversion tools allow administrators to take servers to the virtual environment easily, and it may become tempting to skip the decision process of which systems need to go and which need to stay. The other half of this situation is that if we are expected to carefully review which physical systems need improvements in their operating system environment before migrating them, those tasks may never be completed.
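
One low-tech defense is to keep an inventory that records an owner and a review date for every VM, and to report on anything that falls through the cracks. A minimal sketch, assuming a hand-maintained inventory (the VM names, owners, and dates are invented for illustration):

```python
from datetime import date

# Hypothetical VM inventory; nothing here queries a real P2V or
# management tool. Fields: owner and the date the VM was last reviewed.
inventory = [
    {"vm": "legacy-app01", "owner": "finance", "last_reviewed": date(2007, 3, 1)},
    {"vm": "test-sql05",   "owner": None,      "last_reviewed": None},
    {"vm": "fileprint02",  "owner": "ops",     "last_reviewed": date(2008, 1, 15)},
]

# Simple sprawl check: every VM should have an owner and a recent review.
stale_before = date(2007, 12, 31)
for vm in inventory:
    if vm["owner"] is None:
        print(f"{vm['vm']}: no owner on record -- candidate for retirement review")
    elif vm["last_reviewed"] is None or vm["last_reviewed"] < stale_before:
        print(f"{vm['vm']}: not reviewed since {vm['last_reviewed']}")
```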

#10: Many things will require rethinking

Depending on the scale of your virtualization implementation, some elements of your infrastructure will need to be revisited. Topics such as backup and restore, storage management, network connectivity, and the server build process will all need addressing before moving to the virtual world. All hassles aside, virtualization is clearly a positive direction for many situations: it uses hardware efficiently, helps meet disaster recovery requirements, saves on server hardware, and increases the level of central management.


For small shops, the approach differs from that of large enterprises. How has your organization approached virtualization? What have you learned in taking the virtualization plunge?

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

16 comments
patrick

"With all these options, taking the plunge into virtualization can be a big and confusing step." You forgot costly too!! SHEESH!

horationelsongreat

One of the real advantages of virtualization that is not emphasized as much as it could be in this article is avoiding simply virtualizing big, bloated, over-provisioned hardware servers with a generational accretion of software bits of unknown provenance. Rather, you can attain true server agility because you are doing "Zero to Virtual": building from component libraries straight to virtual servers in any VM format, with no initial physical footprint. The big win here is you are able to do "lean" provisioning. You use a small-footprint OS (one you have configured, or Red Hat AOS, or Ubuntu JEOS, or one of the other small-footprint OSes for use in virtualized servers) as the base of the server. This means a smaller attack surface, streamlined administration, no over-provisioning of then-unused commercial licenses, and more virtual servers per physical host. A knock-on effect of these lean machines is greater mobility, more easily allowing leverage of utility or cloud infrastructures. Companies such as CohesiveFT (see http://blogs.zdnet.com/SAAS/?p=461) allow you to build this lean, component-sourced "application stack" for free and deploy it to a virtual environment of your choice (VMware, Xen, Parallels, or clouds like EC2).

giorgio.grillini

What do you think about Virtual Iron as an emerging alternative to VMware and Citrix? It's based on the Xen hypervisor and has a nice feature/price ratio. Moreover, it has a simplified storage and management architecture and a Java API to manage all the VMs in your datacenter.

abeeber

Virtualization encompasses more than OS/server-type applications. NAS virtualization by Acopia/F5 Networks can streamline CIFS/NFS presentation to the network and aggregate files across a diverse share farm. Compellent SANs offer a cheap way to provision disk to a VMware ESX system and the ability to dynamically allocate more capacity to a VM as needed.

ndekwe

Tuning patches in a virtual environment will be the most cumbersome.

joey

What about products like Virtuozzo? This product seems to have the Virtual Environment already wrapped up. I've been looking at testing the setup. Has this product been reviewed?

gusvaz

I don't agree with this: "Free virtualization products, like VMware Server, Citrix XenServer Express, and Microsoft Virtual Server 2005, provide a great way to get exposure to virtualized environments for basic testing and performance benchmarking." How can you do a benchmark over a virtual infrastructure? Can you measure something that can't be measured? That's the overhead of the virtual infrastructure. I like virtualization as an option for functional testing, but I'm not sure about using it for performance testing.

TJ111

I know the focus of these two posts is on using virtualization in a server environment, but it does brush on using it on the desktop as well. I think the VM software 'VirtualBox' deserves a mention, as it is one of the best (free) virtualization products out there for the desktop. It can run on any host OS and allows you to easily set up and run guest OSes (setting up a new virtual machine usually takes about 3 minutes and about 8 clicks). That said, good post on virtualization; it's good to have a reference on these sorts of things. I find it amazing, when you think about it, that we have reached a point in computing where our computers can run multiple emulated computers on top of them (yes, I'm aware virtualization has been around a while; it's just cool when you stop and think about it).

bill.friday

One of the better top ten lists I've seen from TechRepublic. I can't stress enough the complexity (disk space allocation, backups) and issues associated with NAS file stores for VMs. We use NetApps and NFS, which we prefer over SAN or iSCSI. Keep in mind that NetApps are highly optimized for NFS. I've been told that NFS also outperforms FC and iSCSI when handling more than five VMs. The reason is that SAN and iSCSI are built on SCSI protocols, which switch to a sequential access mode when exceeding five concurrent sessions. SAN outperforms NFS if only accessing one to five VMs.

stewagd

OpenSuSE, Debian, Fedora, Mandrake, etc. -- all of the open source Linux server environments support VM hosting using the open source distributions of Xen. And yes, they do come with a hypervisor. Geo

b4real

Really? ESX and Xen maintenance mode make that a non-issue in my opinion.

giorgio.grillini

That means you couldn't install different operating system types on the same machine. This kind of virtualization is called single-OS; it has interesting performance but is not as flexible as VMware ESX or Citrix XenServer.

b4real

The free products let you go for the 'proof of concept'. Too many times, non-IT people are timid about making the jump, so proving the functionality on the free platforms is a great introductory point.

michael

Gusvaz (and others), interesting comment, but I'm a bit confused by what you are saying (perhaps you can clarify/educate me). I guess that I, too, am a bit confused by point #3 as well. Last year I did some consulting work (for ServePath, where I now work) where I tried to benchmark the Grid Series (virtualized hosting) product from the standpoint of how it compares with a like-configured dedicated server. The results were, after I analyzed them, pretty much what I expected. Everything with the exception of I/O levels was on par; I/O and R/W levels were a bit lower in the virtualized environment. But I agree with the statement that virtualization is a great area for functional testing, especially if you need to constantly tear down and rebuild environments. This "10 things" list is really great! Virtualized hosting will be hot as well, as companies really need to look at their carbon offsets and costs. That is part of the reason why we now offer two virtualized server products for hosting (Grid Series and GoGrid).

ITsteve13

This was an excellent article.

bgray

I think they're talking about guest OS patching, not the host OS. With "VM sprawl" being an issue (definitely here where I work), patching becomes more cumbersome unless you have a system like Patchlink, MS SMS, et al.
