10 things you shouldn't virtualize

Virtualization delivers a host of benefits -- but that doesn't mean that everything is a good fit for a virtual environment. Here are 10 things that should probably stay physical.

Virtualization provides a solid core of benefits -- cost savings, system consolidation, better use of resources, and improved administrative capabilities -- but it's important to remember that supporting the goals of the business is the reason IT departments exist in the first place. Virtualizing everything as far as the eye can see without analyzing the consequences is like what comedian Chris Rock said about driving a car with your feet: You can do it, but that doesn't make it a good idea.

The first step in any virtualization strategy should involve envisioning disaster recovery once you've put all your eggs in one proverbial basket. Picture how you would need to proceed if your entire environment were down -- network devices, Active Directory domain controllers, email servers, etc. What if you've set up circular dependencies that will lock you out of your own systems? For instance, if you configure VMware's vCenter management server to depend on Active Directory for authentication, it will work fine so long as a domain controller is available. But if your virtualized domain controller is powered off, that could be a problem. Of course, you can set up a local logon account for vCenter or split your domain controllers between virtual and physical systems, but the above situation is a good example of how easy it can be to paint yourself into a corner.
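
A circular dependency like the vCenter/Active Directory example above can be caught before it bites by treating your infrastructure as a dependency graph and checking it for cycles. Here's a minimal sketch in Python; the component names and the dependency map are purely hypothetical examples, not any vendor's API:

```python
# Minimal sketch: detect circular dependencies among infrastructure services.
# Keys depend on the services in their value lists. Names are hypothetical.

def find_cycle(deps):
    """Return one dependency cycle as a list of names, or None if acyclic."""
    visiting, visited = set(), set()

    def dfs(node, path):
        if node in visiting:                      # back edge: cycle found
            return path[path.index(node):] + [node]
        if node in visited:
            return None
        visiting.add(node)
        for dep in deps.get(node, []):
            cycle = dfs(dep, path + [node])
            if cycle:
                return cycle
        visiting.discard(node)
        visited.add(node)
        return None

    for start in deps:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

# vCenter authenticates against AD; the DC is a VM on a host managed by vCenter.
deps = {
    "vcenter": ["active_directory"],
    "active_directory": ["esxi_host"],
    "esxi_host": ["vcenter"],   # you need vCenter to power the DC back on
}
print(find_cycle(deps))
# → ['vcenter', 'active_directory', 'esxi_host', 'vcenter']
```

Anything the check flags needs an out-of-band fallback -- a local logon account, a physical domain controller -- so the loop can be broken during a real outage.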

In my experience, some things just aren't a good fit for a virtual environment. Here is my list of 10 things that should remain physical entities.

1: Anything with a dongle/required physical hardware

This one is a no-brainer, and it's been repeated countless times elsewhere, but -- like fire safety tips -- a well-known mantra is no less significant for being familiar. Believe it or not, some programs out there still require an attached piece of hardware, such as a dongle, to work: The license ties the software to that physical device (to prevent piracy, for instance).

Case in point: An HVAC system for a client of mine ran on a creaking old desktop. The heating-and-cooling program required a serial-attached dongle to administer the temperature, fans, etc. We tried valiantly to virtualize this system in a VMware ESXi 4.0 environment, using serial port pass-through and even a USB adapter, but no luck. (I have heard this may work in ESXi 5.) Ironically, VMware Workstation, which did allow the pass-through functionality, would have handled this better than the ESX environment. But there was little point in hosting a VM on a PC, so we rebuilt the physical system and moved on.

This rule also applies to network devices like firewalls that use ASICs (application-specific integrated circuits) and switches that use GBICs (Gigabit interface converters). I have not found relevant information as to how these can be converted to a virtual environment. Even if you think you might cobble something together to get it to work, is it really worth the risk of downtime and administrative headaches, having a one-off setup like that?

2: Systems that require extreme performance

A computer or application that gobbles up RAM, disk I/O, and CPU (or requires multiple CPUs) may not be a good candidate for virtualization. Examples include video streaming, backup, database, and transaction processing systems -- all of which remain physical boxes at my day job for this reason. Because a virtual machine runs in a "layer" on its host system, there will always be some performance sacrificed to that overhead, and for these workloads the sacrifice likely tips the balance in favor of staying physical.

You might mitigate the issue by using a dedicated host with just the one program or server, but that detracts from the main advantage of virtualization: running many images on a single physical server.

3: Applications/operating systems with license/support agreements that don't permit virtualization

This one is fairly self-explanatory. Check the license and support contract for anything before you virtualize it. You may find that you can't do that per the agreement, or if you proceed you'll be out of luck when it comes time to call support.

If it's a minor program that just prints out cubicle nameplates and the support agreement doesn't cover (or mention) virtualized versions, you might weigh the risk and proceed. If it's something mission critical, however, pay heed and leave it physical.

Which brings me to my next item…

4: Anything mission critical that hasn't been tested

You probably wouldn't take your mortgage payment to Las Vegas, put it down at the roulette table, and bet on black. For that matter, you definitely wouldn't gamble it all on number 7. The same goes for systems or services your company needs to stay afloat that you haven't tested on a virtual platform. Test first, even if it takes time. Get a copy of the source (use Symantec Ghost or Acronis True Image to clone it if you can). Then develop a testing plan and ensure that all aspects of the program or server work as expected. Do this during off-hours if needed. Believe me, finding problems at 11 PM on a Wednesday night is far preferable to 9 AM Thursday. Always leave the original source as is (merely shut it off; don't disconnect, remove, or uninstall it) until you're sure the new destination works as you and your company anticipate. There's never a hurry when it comes to tying up loose ends.

5: Anything on which your physical environment depends

There are two points of failure for any virtual machine -- itself and its host. If you have software running on a VM that unlocks your office door when employees swipe their badges against a reader, that's going to allow them in only if both the VM and its parent system are healthy.

Picture arriving to work at 8 AM Monday to find a cluster of people outside the office door. "The badge reader isn't accepting our IDs!" they tell you. You deduce a system somewhere in the chain is down. Now what? Hope your master key isn't stored in a lockbox inside the data center or you'll have to call your security software vendor. Meanwhile, as employees depart for Dunkin' Donuts to let you sort out the mess, that lost labor will quickly pile up.

It may not just be security software and devices at stake here. I have a client with a highly evolved VMware environment utilizing clustering and SAN storage. And yet if they clone four virtual machines simultaneously, their virtualized Exchange 2010 Client Access Server starts jittering, even though it runs on another server with a separate disk (datastore). That server is being converted to a physical system to resolve the issue. Yes, there is probably further tweaking and analysis that could be done to fix this, but in my client's view, solid Exchange connectivity is too valuable to experiment with behind the scenes and hope for the best.

6: Anything on which your virtual environment depends

As I mentioned in the introduction, a circular dependency (such as a virtual domain controller being required to log into the virtual environment) puts you at great risk once the inevitable downtime arrives -- and yes, even in clustered, redundant environments that day will come. Power is the big wildcard here, and if you live in the Northeast like me, I'll bet you've seen more than your share of power outages over the past five years.

I grouped this separately from the previous item because it requires a different way of thinking. Whereas you need to figure out the layout of your physical environment to keep the video cameras up and running, you need to map out your virtual environment, including the host systems, virtual images, authentication, network, storage, and even electrical connectivity. Take each item out of the mix and then figure out what the impact will be. Set up physically redundant systems (another domain controller, for instance) to cover your bases.
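
The "take each item out of the mix" exercise can be automated once you've mapped the environment. This sketch (with hypothetical component names, not a real inventory) computes everything that stops working when a single component fails:

```python
# Minimal sketch: given which components each service depends on, compute
# everything that goes down when one component fails. Names are hypothetical.

def impact_of_failure(deps, failed):
    """Return the set of components that stop working if `failed` dies."""
    down = {failed}
    changed = True
    while changed:                       # propagate until nothing new fails
        changed = False
        for svc, needs in deps.items():
            if svc not in down and any(n in down for n in needs):
                down.add(svc)
                changed = True
    return down

deps = {
    "esxi_host_a": ["san", "power"],
    "domain_controller_vm": ["esxi_host_a"],
    "badge_reader_app": ["domain_controller_vm"],
    "physical_dc": ["power"],            # physical fallback survives a SAN loss
}

print(sorted(impact_of_failure(deps, "san")))
# → ['badge_reader_app', 'domain_controller_vm', 'esxi_host_a', 'san']
```

Running the check for each component in turn shows exactly where a physically redundant counterpart (like the second domain controller above) earns its keep.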

7: Anything that must be secured

This is slightly different from rule #5. Any system containing secure information that you do not want other staff to access may be a security risk if virtualized. You can set permissions on virtual machines to restrict others from controlling them, but if those staff members can control the host systems, your controls might be circumvented. They might still be able to copy the VMware files elsewhere, shut down the server, and so on.

The point of this is not to say you should be suspicious of your IT staff, but there may be compliance guidelines or regulations that prohibit anyone other than your group from maintaining control of the programs/data/operating system involved.

8: Anything on which time sync is critical

Time synchronization works in a virtual environment -- for instance, VMware can sync time on a virtual machine with the host ESX server via the VMware Tools application, and of course the operating systems themselves can be configured for time sync. But what if the guest settings are lost or the host ESX server time is wrong? I observed the latter issue just a few weeks back. A set of virtual images had to run on GMT for their processing software to work, but the ESX host time was incorrect, leading to a frustrating ordeal trying to figure out why the time on the virtual systems wouldn't stick.

This problem can be reined in by ensuring all physical hosts use NTP to standardize their clocks, but mistakes can still occur, and settings can be lost or forgotten upon reboot. I've seen this happen on several other occasions in the VMware ESX realm, such as after patching. If a system absolutely has to have the correct time, it may be better to keep it off the virtual stage.
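
A periodic sanity check can catch a host that silently lost its NTP settings after a reboot or patch. Here's a minimal sketch that assumes you've already gathered each host's clock offset from a trusted reference (the hostnames and offsets below are invented examples; in practice the readings would come from NTP queries against each host):

```python
# Minimal sketch: flag hosts whose clocks drift beyond a tolerance from a
# reference clock. Hostnames and sampled offsets are hypothetical examples.

DRIFT_TOLERANCE_S = 1.0   # alert threshold in seconds

def drifting_hosts(offsets, tolerance=DRIFT_TOLERANCE_S):
    """Return hosts whose absolute offset from the reference exceeds tolerance."""
    return sorted(h for h, off in offsets.items() if abs(off) > tolerance)

# Offsets (host clock minus reference clock), e.g. gathered after patching.
sampled = {"esx01": 0.02, "esx02": -0.4, "esx03": 37.5}
print(drifting_hosts(sampled))
# → ['esx03']  (the host that lost its NTP configuration)
```

Run from a monitoring box on a schedule, a check like this turns "the time wouldn't stick" from a frustrating ordeal into a routine alert.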

9: Desktops that are running just fine

In the push for VDI (virtual desktop infrastructure), some companies may get a bit overzealous, defining "what should be virtualized" as "anything that CAN be virtualized." If you've got a fleet of PCs two or three years old, don't waste time converting them into VDI systems and replacing them with thin clients. There's no benefit or cost savings in that plan; in fact, it's a misuse of the benefits of virtualization.

It's a different story with older PCs that are sputtering along, or systems that are maxed out and need more juice under the hood. But otherwise, if it ain't broke, don't fix it.

10: Anything that is already a mess… or something sentimental

On more than one occasion, I've seen a physical box transformed into a virtual machine so it could be duplicated and preserved. In some situations this has been helpful, but in others it has led to keeping an old, cluttered operating system around far longer than it should have been. For example, a Windows XP machine already several years old was turned into a virtual image. As is, it had gone through numerous software updates, removals, and reinstallations. Fast forward a few more years (and MORE OS changes) and it's no surprise that this XP system is now experiencing strange CPU overload issues and horrible response times. A new one is being built from scratch to replace it entirely. The better bet would have been to create a brand new image from the start and install the necessary software in an orderly fashion, rather than bringing that banged-up OS online as a virtual system with all its warts and blemishes.

The same goes for what I call "sentimental" systems. That label printing software that sits on an NT server and has been in your company for 15 years? Put it on an ice floe and wave good-bye. Don't be tempted to turn it into a virtual machine to keep it around just in case (I've found "just in case" can be the three most helpful and most detrimental words in IT) unless there is absolutely 0% chance of replacing it. However, if this is the case, don't forget to check rule #3!

Bonus: The physical machines hosting the virtual systems

I added this one tongue-in-cheek, of course. It's intended to serve as a reminder that you must still plan to buy physical hardware and know your server specs, performance and storage needs, network connectivity, and other details to keep the servers -- and subsequently the virtual systems -- in tiptop shape. Make sure you're aware of the ramifications and differences between what the hosts need and what the images need, and keep researching and reviewing the latest updates from your virtualization providers.


As times change, these rules might change as well. Good documentation, training, and an in-depth understanding of your environment are crucial to planning the best balance of physical and virtual computing. Virtualization is a thing of beauty. But if a physical host goes down, the impact can be harsh -- and might even make you long for the days of "one physical server per function." As is always the case with any shiny new technology (cloud computing, for instance), figure out what makes sense for your company and its users and decide how you can best approach problems that can and will crop up.

Also read

  1. Virtualizing the enterprise (ZDNet special report page)
  2. Executive's guide to virtualization in the enterprise (free ebook)


Scott Matteson is a senior systems administrator and freelance technical writer who also performs consulting work for small organizations. He resides in the Greater Boston area with his wife and three children.


Great advice Scott, I'm taking my time at the moment myself and avoiding the VM rush.


As the DBA for our company I am strongly opposed to virtualizing our SQL Server instances that handle mission critical databases, such as our primary accounting DB -- the DB that holds the data that is the reason for our company to exist. In 2013 I lost that battle to the IT admin, who convinced the CIO that all valid concerns with virtualizing the SQL Server instance would be negated by setting it up on a single dedicated host serving as the system for the SQL Server instance. After reading reason #2 above, I wonder if I should revisit this fight and once again insist that our SQL Server not be virtualized.

Any thoughts from those with experience in the virtualizing of SQL Server?



Thanks for the article.  I do have some counter points that I would like to share from my own experience.  

We are a fairly large company and have virtualized systems with dongles for years with no issues whatsoever. We have been running production systems with IP-based USB dongles since ESX 3.0. It can be done, and done well, if you have the proper resources. Another thought on extreme performance: Virtualization has come a long way. Extreme performance machines can operate flawlessly in a virtualized environment from what I have seen. Maybe I haven't seen enough, but from what I have seen, virtualization is not an issue for high performance.

Probably should change the name of the article to: "10 Things that can be a Challenge to virtualize"



Even in the absence of pass-through, network-based USB hubs were around before pass-through was available. It was interesting because the first server I virtualized using an AnywhereUSB hub was an HVAC server with a USB dongle running Windows 2003. Digi's product line for USB hubs has matured quite a bit, and I have found them to be reliable products.

Playing Devil's advocate, let's say a USB hub would not have worked in your scenario. Fine. That does not provide you the justification to say you should not virtualize ANYTHING with a dongle. In my own personal experience, I have performed P2Vs on dozens of servers with USB dongles without issue. If one has concerns about this process, do it during a maintenance window and test the dongle on the virtualized server. If it doesn't work, power your old server back on and blow away the VM.

"Even if you think you might cobble something together to get it to work, is it really worth the risk of downtime and administrative headaches, having a one-off setup like that?"  Assume you are an Engineer in a Fortune 500 company that manages 1000's of servers, of which, 100's might need USB dongles.  In that scenario, yea dude, it absolutely is worth the risk because it will likely save the company up to 7 figures in unnecessary hardware costs.

If you had just suggested, "Hey, be aware you may have some challenges with USB dongles", that's cool, but to suggest that you shouldn't virtualize ANYTHING with a dongle is just plain wrong. 


 @lkarnis Thanks for the comments.  

With regards to #1, we attempted pass-through as I indicated, but this did not work, perhaps due to the age of the component.  I agree this is a solution in other scenarios, but for us it was a no-go and I included it to show that the pass-through solution may not be 100% feasible.  Naynay21, this was not factually wrong.  This was direct experience.

Concerning #2, every server we've virtualized that has a physical counterpart (Exchange 2010, for instance) has run in an inferior fashion performance-wise to its real-life parallel. Yes, these can be further tweaked to boost available resources, but it's been my observation that an OS with 4 GB of RAM and quad-core processing still runs slower in a virtual environment than a metal box with the same specs. Our virtual Exchange servers are therefore used as secondary systems to provide fault tolerance; when we test services by making these primary, we generally see complaints from users about slow Outlook response times.

#5 was included to show that physical systems such as those controlling door access can render a space inaccessible if the virtual host is unavailable.  Yes, a power outage would bring the whole site down so there is no difference between physical/virtual in that scenario but my point is that these can be susceptible to virtual host problems if not designed carefully with disaster recovery in mind.  My goal here was to emphasize logical design.  I agree virtual systems can protect you against the physical failure of hosts but this refers to office access itself as an example.

#6 was included as an example of "circular dependency" to avoid.  You are correct that VMs can be configured to autostart and so forth however the point here is to prevent building a configuration with a point of failure which then, as in #5, renders you locked out in an unforeseen circumstance.  Having a physical counterpart to each virtual system, of course, solves the issue.

#7 and #8 are my recommendations based on the best practices in my organization.  I work in a PCI compliant business and #7 is a particular concern.  Drive level encryption protects the data, but if someone has access to the underlying virtual host the services it provides can be impacted.  My security officer may be more overzealous than most, but the video camera server remains a physical box due to his requirement that there be no possibility of tampering, even theoretical.  I would concur that leaving the decision to security personnel may be the best outcome for #7.

I included #9 from a cost savings perspective to show that removing perfectly operable workstations to replace them with thin clients can be a fallacy. Some vendors will push for EVERYTHING to be virtualized, but the point here is that if there's no financial advantage -- or there is even a financial liability, such as replacing functioning hardware with new hardware that adds overhead -- this can be a bad idea. I see it as a lot of work for little gain in the scenario of a relatively new "regular" desktop.

For #10, this appeared because of the tendency I have seen in some organizations to just convert a clunky physical OS to a virtual machine and then keep it around, rather than transferring the function or being able to recreate it elsewhere. My argument here is against blindly keeping something around in one form or another because "we need it but we don't understand how it works." Case in point: a "build system" I've seen cloned multiple times from one source, with a lot of code set up by former employees. When the clones of this OS started having problems, we realized we'd have to rebuild the image from scratch and were at a disadvantage since it had so many cobbled-together parts. It would have been preferable to recreate and document those moving parts in advance of a problem like this so we could always create a reliable source image.


Just a note that Norton Ghost is not very well supported now.  I find the Paragon suite very useful and so far very effective for Windows 8 desktop PCs.  Support appears to be excellent.  Ref:  Google "symantec doc6337".  That's for the industry version.  The consumer version appears to have hit the buffers at Windows 7.


Agree that these are dated and I disagree strongly with just about every one of them.  Virtualize everything and just make sure that you've paid attention to the security and performance aspects while doing so.  In my book, there is only one thing that can't be virtualized ... the hypervisor.  


Was this article written 5 years ago?  If so, all points are valid.  However, in 2013, it is not in tune with reality for most IT shops.  It isn't just a matter of a difference of opinion.  Many points listed here (especially the dongle issue), are just factually wrong. 


Item 1 - Digi Anywhere USB


Point one is spot on. I used to work with an application that requires a dongle for license control. The device matured from a parallel port type to USB and even to 64-bit drivers, but trying to get it to play nice in a virtual environment was terrible. I could get it to work in many cases, but guess what? We bought those from a third party, and they didn't support the use of the device in a virtual environment, so we couldn't support clients doing this. Their typical response was that virtualization was where things were headed, and while I agreed, it didn't matter because this sort of implementation was simply not supported. Fortunately the app was moved to .NET, and that removed the issue.


Sorry - Respectfully disagree with many points made in this article:

Item 1 - Anything with a USB/Parallel port dongle? This is simple, use USB/Parallel port passthru. Move the dongle to your VMware ESXi server, update virtual hardware to point at the physical port and - done!

Item 2 - Performance. VMs can now get 64 cores of cycles and 1TB+ memory so CPU/RAM is not an issue. VMs can benefit from bonded 10Gb NIC Teams (Etherchannel or LACP), so network performance is not an issue. And with 8/16Gb fibre or bonded 10GB iSCSI storage performance is not an issue. Actually, in my experience, Windows VMs tend to run faster (network/storage) wise after I convert them to VMs. My guess is that VMware benefits from Open Source drivers which typically outperform closed source drivers (don't agree? Then why do all closed source vendors including MS incorporate open source in their operating systems (remember WinSock which was lifted from BSD?))

Item 5 - Physical Failure. IMHO this is the perfect reason to virtualize. You can clone VMs, copy them to other machines and use them as backups. You can use real time replication (VMware Replication) to hot clone running VMs to machines (even in different physical locations). You can build HA clusters to recover VMs (in 5 minutes) that die when the underlying server dies. Much rather do that than try to do a bare metal recovery of an Exchange or MS SQL box that failed due to hardware.

Item 6 - Virtual Failure. See item 5. Just as applicable. And, VMware HA clusters can place and restart anything - even vCenter. So there is no problem with AD/DC, DNS, DHCP, etc. If properly crafted, you can reliably recover in minutes without the need for complex/expensive MS Cluster Services.

Item 7 - Securing Access. Learn/Use the permission model. Build a perimeter around your production, backup and test systems. USE DRIVE LEVEL ENCRYPTION to protect against VM theft.

Item 8 - Time syncing. Using NTP on ESXi is a best practice - so just do it. VMware tools is a best practice in VMs and auto time syncs VMs to the ESXi host - so just do it. Point is valid for real time operating systems (not Windows/Linux) because ESXi does not guarantee CPU service unless you use a full CPU core(s) reservations - which doesn't scale if you do it on many VMs.

Item 9 - Desktops. Absolutely, virtualize these. I have moved most of my Windows 7 desktops to VMs. They run great, are easy to back up, are faster than running on aging desktops, are easier to administer, etc. The BIG challenge in virtualizing desktops is complying with Microsoft's punitive license terms for running WXP or W7 in a VM (don't believe me, find/read their license advisor documents - you *must* have one of Software Assurance, InTune or Virtual Desktop Access licensing on top of W7 Pro volume licenses)

Item 10 - Old Systems. Virtualize these. That is the *only* way to keep these workloads running once support for the OS from a hardware perspective dries up (such as support for Windows Server 2000, OS/2, etc.). There is a reason why VMware supports so many legacy OS' - it's because people still (have to) care about these.

Bob Germanovich

girlfriends, social life, ability to get ahold of someone in HR


@BlueCollarCritic  I don't know what the I/O performance is like in your setup, but I find my virtualized SQL server over iSCSI with dedicated redundant GB switches to be fantastic. We don't have a high transaction load though, so YMMV. I feel so much better letting the SAN run snapshots of the db's and logs, and being able to migrate the SQL server to another host to maximize uptime. My only real downtime is updates to the SQL server itself. The SAN could even be mirrored to a 2nd SAN for an extra level.


@naynay421 I'll admit I'm more conservative than most when it comes to one-offs. I've been bitten by them in many scenarios. My perspective is that adding another potentially problematic level of complexity may be dangerous for a system that absolutely has to maintain five-nines uptime and be as vanilla and predictable as possible. My take comes more from the smaller company, which has to be picture perfect in terms of reliability. You make good points about the USB hubs, and I will agree that saving money in hardware costs can be a very compelling goal. So, on #1, as long as the configuration works, is documented and laid out effectively, and there are more advantages than disadvantages to virtualizing the system, I'd say you are absolutely correct that "try it and see if it works, but be aware of any challenges" makes sense.



Are you the admin of the virtualized systems in your network? I mean no disrespect when I say this, but every pro-virtualization tech I've spoken to or corresponded with has been the one in charge of managing the systems, and pushes virtualization out of self-interest and not necessarily because it's the right choice overall.

In our case we use SQL Server to store our primary accounting data. The system is virtualized; however, it's the only guest on the hardware (or so I'm told), so the benefits of balancing/sharing resources between multiple virtualized systems do not apply. As best as I can tell, the users of that database gain nothing from the virtualization, but the admin does. This raises the question of whether a system should be virtualized if the only benefits are the admin's, such as being able to restore the system after a crash. While virtualization makes the admin's job easier, it provides no benefit to the users -- so should the system still be virtualized?


@lkarnis Yes!  Thanks!  This saved me a ton of time disputing almost every point brought up in this article.  Cheers!


@lkarnis Dead on. Saved me a lot of typing. My only other comment for #2 is that it can pay to virtualize even with just one virtual server on a physical box. The snapshots (I use XenServer, don't know what VMWare calls it), backups and ability to migrate a virtual server to another physical box are all important as well. It isn't just about putting a bunch of servers on one box.
