
Virtualize everything (except in these four scenarios)

Virtualization should be the de facto standard for deploying workloads today. However, there are valid exceptions, and Rick Vanover lists four of them in this blog post.

Don't get me wrong, I'm 100% pro virtualization. But I do realize that there are some scenarios where a virtual machine isn't the best solution. I find the use cases fewer and farther between, but they are still out there. Here are a few scenarios where I see virtual machines not fitting the bill:

#1 Highest performance

As a general rule, native hardware versus virtual machine performance is no contest. You "can" make a virtual machine outperform a physical machine running on native hardware, but not in a like-for-like configuration. If you need the utmost performance, a physical machine may still be the way to go.

#2 Application requirements

Again, I'm a VMware and Hyper-V guy, so I want to see everything end up as a virtual machine. But really, do we still have application requirements today that don't support virtual machine configurations? If so, what really needs to change is the application. We can do so much more with an application as a virtual machine. I know applications that don't support virtualization are still out there; let's work on removing them.

#3 Application maturity

Just as I critique the application in the previous point, a mature, fully stateless application is a welcome change from the normal application profile. There are a number of private cloud solutions whose nodes simply boot up (usually via PXE) and take on a role in a protected arrangement for a larger application. These larger applications tend to be very resource-intensive and dynamic in nature, and their data is usually distributed and self-protected at the level of the equipment in play, so they gain little from a hypervisor layer.

#4 Full separation

One of the critical design elements of virtualization (and any infrastructure technology, for that matter) is to introduce as many layers of separation as possible for redundancy. While I'm not a fan of holding on to a single physical domain controller, there are valid physical machine configurations for full separation from the virtual infrastructure. These can include security tools or backups of the virtual machines. In my opinion, the reasons are getting weaker, but solutions that really don't fit a virtualized infrastructure may be better kept off virtual machines.

Do you have situations where you can’t use a virtual machine for new deployments? Share your exceptions to the norm below.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

42 comments
ataranto

Six of us run peer-to-peer just fine today (Windows 7) - why should I consider virtualization? Al

harikadithyan

Implementing VMs with AutoCAD and AutoCAD 3D is mission critical and has a lot of issues. We recently implemented a VM environment with many machines running AutoCAD and the 3D application, and we are still in the process of clearing the issues. The main one is that the system hangs while saving a 50 or 100 MB drawing file.

jasonpreyes

I would add one more scenario... if you cannot afford to make further investments in your virtual environment, then keep your implementations limited in scope (this includes investments of both time and fiscal resources). Yes, the point of virtualization is to increase physical server density; however, every environment whose "virtual awareness" I've had the privilege of advancing has had to deal with the mentality that once servers/services are virtualized, there are no more (or at least significantly reduced) investments to be made. Bottom line, virtualization does not warrant a "set it and forget it" mentality. Virtual infrastructure is still just that... infrastructure. As we all should know by now, virtual infrastructure, like physical infrastructure, requires care, love and attention. If you can't afford it, don't invest too heavily in it, or, just as with physical infrastructure, you will get burned!

tonys3kur3

Another crucial aspect of virtualization is for IT admins / organizations to realize that each virtual machine is itself a separate and unique environment. That means that each needs its own security mechanisms in place. Another thing I hadn't considered until I read this paper (http://www.pcworld.com/article/247273/three_steps_to_protect_your_virtual_systems.html) is the fact that each also contains unique data and that organizations need tools in place to ensure that backup and recovery is available for the VMs as well.

StevenDDeacon

Enterprise mission-critical applications requiring performance and availability should be isolated as much as possible from other applications' maintenance, including hypervisor and OS patch updates, as well as security requirements. Such mission-critical applications should be run as close to the machine as possible and isolated from firmware, software, networking, and security layers that require regular configuration changes, including updates and patches. Granted, with the advent of hot-swappable hardware, dynamic firmware upgrades, and hot-patching of OSs without outages, the probability of a scheduled outage is reduced. Nevertheless, more levels of virtualization complicate troubleshooting and increase the potential for unplanned outages. Virtualization also makes clustering more complex and adds still another layer of potential failure and complexity for memory and disk storage management, I/O subsystems, networking, and security. Many application services may be virtualized and clustered, but enterprise mission-critical applications should be isolated from as many layers of complexity as possible for reliability, serviceability, and availability.

ictsaint

Hello guys, I have really enjoyed the thoughts and ideas shared, so I am throwing in my two cents. As a quick summary, we are in the middle of virtualising the majority of our ICT environment and will be keeping some servers physical in order to avoid problems we experienced with a prototype virtual network, some of which have been mentioned above (physical DC/DHCP/DNS, SCSI/SAS tape library, security systems). We are a small organisation with 80 users and a two- or three-person ICT business unit. The new infrastructure was designed from business requirements gathered with business leaders and business app owners, which then formed the technical requirements and, in turn, the technical specifications for the required hardware. A key ICT goal is to make it easy to support while delivering better performance than at present. Below is a high-level overview of the hardware architecture, without getting into the specifics of the configurations: 2x identical blade-based racks configured with:

Hypervisor: Hyper-V based. For our organisation there was no right or wrong answer here; either could do the job we required (we are not really pushing the advanced features provided by either technology, just the basics). What is more important is for it to be stable, responsive, and easy to support. Our prototype environment had both Hyper-V and VMware. Even though VMware is currently the more mature technology, we were happy with the capability provided by Hyper-V and have the option to add VMware later in a mixed environment if required.

Networking: VLANs will be put in place to separate prod and test/dev environments and also to split up management, data, Internet, and out-of-band management traffic.

Physical servers:
- Primary domain controller (AD, DHCP, DNS) (1 rack only) - allows authentication even when the virtual hosts are down, as experienced in the prototype virtual environment.
- Management server - tools box (SAS interface for the tape library, SCVMM, AV, SAN tools, backup server, WSUS) - supports the ICT admin goal of minimising the number of servers you have to log in to, by centralising as many management tools as possible on a few management boxes.
- Database server (SQL) - we had advice from MS and an SME to go either physical or virtual based on our environment, but decided to stay physical for the time being. Again, no right or wrong answer for us here.
- Terminal server (Windows Remote Desktop Services with Terminal Server Gateway) - supports the flexible work arrangement initiative from management (part time, job sharing, remote employees).
- Security systems PCs - will stay physical until we confirm pass-through to VMs of the proprietary cable connections.

Virtual host servers:
- 1x primary VM host server hosting Exchange, Office Communicator, SharePoint, Project Server, IT job tracking, and backup VMs (DCs, AV, SCVMM, backup software).
- 1x identical VM host server as redundancy for the primary VM host server (there is also a second rack that provides total redundancy for the primary rack). For both VM host servers the licensing model stacks well and positions the organisation to easily and dynamically grow VMs with minimal licence overhead in the future.

Storage: iSCSI SAN (will store business apps, VM disks, and backups for the new backup strategy); NAS (some backups, software library).

Peripherals: tape library (still maintaining tapes for off-site and archive purposes and to complement the new backup strategy).

Again, this approach suits our size of organisation and environment, and there are many variations for achieving a similar goal. Good luck to everyone planning or performing a virtualisation project.

mkilpatric

Look, guys, in reading everyone's thoughts above, I am going to point out the obvious, as I posted above: no solution is final, and no infrastructure implementation has a perfect solution. You have to take into account the factors of your environment that will affect virtualization (note, you can plug in any current industry buzzword here: cloud, mobility, etc.), from TCO, to operations support, to implementation time, to physical limitations of some hardware, to virtual appliance needs and uses, to DR and business continuity plans, and so on. As was mentioned above, many of the threads here seem to be about why you CAN'T virtualize. Make a list of what you CAN, compare it with what you can't, and then, to be really fun, find a way to make the can'ts a can! There are no limits to what we as IT people can design, can find, and can make happen. You might remember the big push in the early 2000s was to virtualize to save money! (Note, this was also the mantra of Linux adoption.) Well, the big companies got wind of this too; they need to make their money, so they will always find a way to squeeze costs out of licenses, and that's what they are expected to do! So your job as the IT admin, manager, architect, or consultant is to find a solution that may cost more in one area but save more in another, whether that be licensing costs, long-term depreciation, operational costs (more or fewer support engineers), etc. The discussion thread was supposed to be around virtualizing everything (except in four scenarios)... show how you CAN do it and what the benefit is!

richard.artes

SCVMM for monitoring the virtual environment shouldn't be virtual! We used to have System Centre Virtual Machine Manager on a VM, but we discovered we couldn't manage the virtual environment when there were problems with the SAN. So it's now on a physical box. Much safer!

paul.bainbridge

You need to consider the application licences, as these can make a big impact on TCO depending on the virtualisation vendor. Not every application is supported on every type of virtualisation, and some require more expensive licences to work on virtual hardware.

BALTHOR

I'm thinking that these buffers are the CPU's caches, the L1 and L2 cache. If you had access to the CPU in the BIOS you would make these caches as big and as fast as you could. You would even raise the voltage while you're there. I see real high voltages.

BALTHOR

I see that the computer's speed, the speed at which you can operate a program, is related to the hard drive's read/write speed. With all these RAM specs and so on, I see that it's at kilocycles at best. Try striped drives and you'll see. My example: I have a drive with two partitions, C: and D:. I copy a two-gig file from the C: drive to the D: drive and it does it relatively fast. Granted, the drive unit might have multiple disks, but I can make my drives any size that I want. The drive's head reads the file from the C: drive, then writes the file to the D: drive. It really does this in the real world! The copy/paste bar graph goes smoothly from beginning to end. What I suspect here are chip buffers: the drive copies to the buffer, then reads the buffer to paste. So the whole computer works like this, in a buffer state. I suspect that at boot up the drive's entire contents are read into one of these buffers. In the early Walkman you could shake the unit and the disc wouldn't skip. That is a buffer: the disc is read into a buffer, then played.

louhou

What we are doing is recreating the mainframe, where everything runs inside a large dependent system. And that is the main issue: unless you can ensure that you can start this system in an orderly fashion, manage the separate parts easily, and stop and restart areas of it while understanding the impact on other areas, you are not ready to virtualise. So for all the organisations running small little SQL clusters and Exchange servers: virtualise away. For the rest of us in the real world, where we need to ensure we can manage an ecosystem of dependent virtualised parts which can go up and down, be moved around, patched, updated, and have to be isolated and contained, we need better management software before we can jump in.

SaintGeorge

Another system to manage. Another link that can fail. I don't need reasons not to do it. I need reasons to do it. And all I hear sounds like a sales pitch.

rmerchberger

Virtual environments make it extremely hard to access the actual 3D hardware for massively parallel programming; and even where possible it is extremely limited. Keeping it on physical hardware is much less nervewracking... ;-)

wrfinn

VMs are great, but not for serial-attached devices such as data acquisition devices in an OPC environment.

POCOCity

Definitely have at least one Primary Domain Controller not in your Virtual infrastructure. If you ever have an issue with the virtual world you may find you cannot log into anything if all your Domain Controllers are virtual. We do have one virtual domain controller but we also keep one physical one. This was sort of alluded to with having one machine with diagnostic tools not in the virtual stack, but we actually had a situation where the virtual controller was having a problem, and only the physical domain controller would authenticate any of the login requests.

chuckmba@adelphia.net

MS SQL should never be on a virtual server. MS SQL is known for being a memory hog. It will use all 16 Gig on a stand alone server. No one could give MS SQL that much memory on a virtual server.

michel

We have some cases in which specialised PCI hardware has to be used. Short of using particularly complex Xen configurations, I think this is a place where virtualisation is not doable, right?

tommy

As a guy who's getting more and more involved with VMs in the environment, with more of them becoming mission critical, this is exactly the kind of insight I'm keen on seeing. I had already come to the same conclusions about a couple of the aspects of machine virtualisation, specifically highest performance and what's being referred to here as full separation. I can't see the point in trying to virtualise an application that, sitting on its own, will max out DAS data pipelines or consume huge amounts of processing power on a regular basis. I also like having infrastructure like firewalls completely separated from the vagaries of virtualisation technologies. If I've got a network-related issue that's likely to be firewall related, I don't want to have to start figuring out whether it's the virtual switching that's got its knickers in a twist before I start looking at the firewall proper. I like the comment from Jim Wilson with regard to the requirement for user-level interaction with a particular machine too. I hadn't thought about the need for staff to simply reboot the machine when I'm not here, but it makes perfect sense to be aware of that. Thanks, Jim. Having thought of that, though, I would extend that premise to any machine that requires regular access by not-so-technical contractors or maintenance staff too. I don't want these people to have access to VM-level infrastructure management, nor do they need the excuse that it's the VM infrastructure at fault, not their software/systems.

Joe2001

Within the past year, we needed to upgrade our Oracle servers. Since Oracle claimed to support virtualization, it seemed like the way to go, since we'd had good success with our other virtual servers. When we were ready to install on the new system using our existing licenses, we found out that regardless of how many processors we configured in the VM guest, we were required to license the total processors/cores in the VM host server. We ended up purchasing two physical servers with the same count of processors/cores as our existing licenses for about a fifth of the cost of buying more licenses. I'm not sure how their licensing is now, but until it changes, we will never be able to virtualize our Oracle servers.

jim_wilson_

Hey Rick, excellent points. One more that I could add (and again, I'm 100% pro-virtualization, so this should be the exception) is the case where an organization has a crash-prone but important application where the physical machine may need to be powered off/on by non-IT staff. I have one such machine running critical lookup software on a read-only CD array, and if it goes down, say when I'm away or not available, I can give a non-IT staff member access to that area of the data center where the machine sits to come in and hard power the server off and then on. Again, not ideal, but a lot easier than trying to train non-IT staff in the use of the vSphere Client (where they may end up doing REAL damage). Thoughts?

Dereckonline

Would love to make the jump, but:
- Gateway proxy/anti-spam application: still physical as not supported virtually (hello!!)
- Backup server: PCI-based server faxing
Still waiting!

bikingbill

Our ISP had a major power outage last month, and it happens that I've just read their final report on the circumstances and remedial actions. Two of the remedial actions they are putting in place are to move their DHCP service back from a virtual machine onto a dedicated physical server, and to put their DNS on two dedicated physical servers in two geographically separate data centres. The DHCP move is to maximise the speed of getting DHCP running again from a cold start. The DNS move is for the same reason and also to load-balance. Of course, this is a business decision based on risk versus cost and the business damage to them of failing to adequately serve their customers.

dswope79

Engineering configurations remain where I am shaky on leveraging virtualization. An engineering-based configuration for applications like CAD/Inventor/PDMLink/Pro-E generally requires a front-end and a back-end server. I think I could have leveraged virtualization for this, but due to timing I went with two physical servers.

jazdad96

This goes along with your statement about backups of vm's. We have a tape library that requires a physical scsi connection to a server. For that reason, we are keeping a physical backup server. We have virtualized all of our dc's. A lot of the security tools now offer virtual appliances, but some things, like security badge devices, require separate physical connections to a server/workstation for the device readers on the doors.

alistairc

Realtime-critical applications previously struggled in virtualised environments. For example, certain TTS engines running on Linux SMP VMs on VMware Server resulted in 'jittery' audio being produced; the same timing issues caused certain ASR engines running on similar versions of VMware to struggle to recognise voice input correctly. That was only a couple of years ago, the situation seems much improved in the ESXi 4/5 realm (in conjunction with more recent Linux releases such as CentOS 5/6)... http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf

tommy

Makes perfect sense, Lou, but are you saying that the tools you describe don't exist yet, or they're not up to the task? I can see the analogy you draw with respect to the mainframe environment, but surely a collection of servers, be they on separate machines or virtual ones, need to be managed in much the same way? There's bound to be a level of interdependence regardless of their physical, or virtual attributes?

Charles Bundy

I like the fact that I can have VM's on network arrays with multi physical VM hosts ready to take over if any one fails. Short of someone cutting the cable both storage & CPU are redundant to a ridiculous degree...

tommy

Here are a few:
- Backup: a file (OK, or a number of them in a directory) = the entire machine. End result? Recovery in minutes, not hours or days.
- Update problems? Another gift from the beta testing department of MS? Simply recover a snapshot you made before you applied the "fix".
- Experimental systems development: need another server for trials, or simply to add resilience and load balancing to infrastructure? Make a copy of a VM, rename it, and go.
What I find quite amazing is what you can do with this technology for free with what's available out there. Try before you buy ;-) P.S. I typed out the above on my phone while heading home. More reasons I thought about last night...
- Efficient utilisation of hardware: many servers spend much of their time in a fairly quiet state. One box running many virtual servers better uses the resources available, leading to...
- Reduction in carbon emissions, and, more important to the bottom line, reduced energy costs.
- Increased flexibility: server virtualisation allows you to set up a virtual server very quickly from templates, or from existing machines that you've spent time building previously.
- Better availability: with a larger infrastructure, VMs can sit on several physical boxes. If one fails, another can automatically take over.
- Scope change: as the resource requirements for a specific virtual machine change, a virtual server can be upgraded very quickly (more processors introduced, more storage added, etc.) without much more than a few quick configuration changes.
- Lower total cost of ownership: in a virtual server environment, your initial investment is less because you don't need as many physical boxes. If you've already got existing infrastructure, then savings can be made by virtualising existing machines, rather than replacing them, before they are retired.

Jeff Adams

There's nothing wrong with a 100% virtualized Active Directory environment, if you've planned for a robust infrastructure. I manage a multi-child domain forest, with seven domains backed by 24 virtualized DCs in five geographic locations. I've had two Microsoft ADRAP's in the past three years and being 100% virtualized, which is a fully supported configuration, has only been a topic of note to make sure we are aware of the potential issues of virtualization (e.g. be sure to properly size VMs, be aware of storage I/O performance, don't snapshot & revert a DC, etc.).

tommy

I'd agree with that too. If it's all virtual and the virtual environment goes south, then having a simple standalone box to allow authentication to work would seem sensible.

Jeff Adams

...if you don't know what you're doing. SQL Server's default configuration is to use all available memory, leaving only about 300 MB free. You can install SQL Server on a server with 16 GB of memory, put a 100 MB database on that server, and SQL Server will use 15+ GB of memory. Bump the server memory up to 32 GB with the same database and SQL Server will use 31+ GB. Bump the server memory up to 64 GB with the same database and SQL Server will use 63+ GB of memory. For a 100 MB database. If someone doesn't know how to size and manage SQL Server, he/she probably shouldn't be running SQL Server. SQL DBAs should know the performance counters to measure and monitor to determine how much SQL Server actually *needs* and limit SQL Server's memory consumption appropriately. Regarding memory allocation to a VM, we've got almost 90 virtualized SQL Server instances on almost as many VMs, and those SQL VMs have anywhere from 4 - 39 GB allocated to them. Outside of SQL Server, we are looking at VMs with up to 192 GB allocated. If your hypervisor (hosted or bare metal) cannot meet your needs (# virtual CPUs, GB of memory, etc.), move to a hypervisor that can.
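For anyone wanting to act on that advice, capping the instance is a one-time configuration change. Below is a minimal sketch, assuming Python with pyodbc, a SQL Server ODBC driver, and sysadmin rights on the instance; the connection string and the 8 GB cap are placeholders to illustrate the idea, not recommendations.

    import pyodbc

    # Connection string is a placeholder; RECONFIGURE cannot run inside an
    # implicit transaction, so autocommit is required.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;Trusted_Connection=yes",
        autocommit=True,
    )
    cur = conn.cursor()

    # 'max server memory (MB)' is an advanced option, so expose it first.
    cur.execute("EXEC sp_configure 'show advanced options', 1")
    cur.execute("RECONFIGURE")

    # Cap the instance at 8 GB (placeholder value; size it from the measured
    # counters described above, not from a guess).
    cur.execute("EXEC sp_configure 'max server memory (MB)', 8192")
    cur.execute("RECONFIGURE")

    conn.close()

The same two sp_configure calls can just as easily be run from SSMS or sqlcmd; the point is simply that "uses all available memory" is a default setting, not a law of nature.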

travis.duffy

MS SQL performs just fine in a virtual environment providing that you have designed it properly to match the resources and disk I/O needed.

b4real

I've done well with it, provisioning up to 32 GB of RAM for large datawarehouse DB servers. The key is to have application people watching the SQL performance, and also storage. LOL. It's always storage.

gjacknow

It says your post was two hours ago, but it sounds like something one would have said five years ago. Virtualizing apps like SQL and Exchange is very common now. With an off-the-shelf 2U server with 12 cores and 144 GB of RAM you can virtualize SQL quite nicely. I have many 4-core/16 GB SQL servers virtualized per VMware host. It fits in nicely with the way SQL CPU/core licensing works to save you a lot of money if you run SQL Enterprise.

rmerchberger

Sure, Microsoft says their virtualization product has issues with their SQL product (another case where Microsoft isn't compatible with itself... ;-) ) but I have 2 virtual MSSQL servers that run great... one under VirtualBox and one under VMWare. And if you're serious about virtualization; I would think that one would spec a server with *lots* more than 16G RAM - then there would be no issues with allocating 16G to a virtual machine...

Jeff Adams

I work in an Oracle shop, with multiple tiers of their software deployed. Oracle's EULA requires that all cores that could possibly execute Oracle code must be licensed. So, a two-vCPU VM running on a 24-core host requires 24 cores of licensing. If you have 12 such hosts in a cluster, then you have to license 288 cores of the product. They do not accept host affinity rules or core pinning as acceptable measures of limiting where a VM may run. Even deploying a pair of moderately sized hypervisors is a very cost-prohibitive option, unless you are able to heavily load a host with Oracle database/application instances. Oracle's philosophy is pretty much "We know we've got you by the sack. If you don't like it, don't use our products, or shut your pie hole." That's all well and good, but it's not always easy to migrate to competitive products when the cost of the move is millions and millions of dollars of purchasing new products, recoding, retesting, recertifying, etc. It's cheaper to just keep buying small dual-core and quad-core standalone servers to plop Oracle's anti-customer software on.
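To make the arithmetic above concrete, here is a rough back-of-the-envelope comparison in Python; the host and core counts come from the scenario described, and the standalone figure is a hypothetical quad-core box for comparison, not a quote.

    # Back-of-the-envelope core-licensing comparison for the scenario above.
    # All figures are illustrative placeholders.
    hosts_in_cluster = 12    # hypervisor hosts the VM could land on
    cores_per_host = 24
    standalone_cores = 4     # hypothetical dedicated quad-core box

    cluster_cores = hosts_in_cluster * cores_per_host   # 288 licensable cores
    print(f"Virtualized on the cluster: {cluster_cores} cores to license")
    print(f"Dedicated standalone box:   {standalone_cores} cores to license")
    print(f"That is {cluster_cores // standalone_cores}x the license count")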

Neon Samurai

With multi-core machines being standard now, it's insane to still use core-count-based licensing. I know I've seen some folks go through a huge amount of gymnastics just to ensure that a VM only touched a single core no matter where it migrated in the cluster, all to appease idiot licensing price structures. That kind of effort should be focused on things that benefit the end user, not appeasing some arbitrary administrative tripe. Boo Oracle! (Of course, they are known for squeezing customers through licensing terms.)

Mark Johnson

Or you could provide a workstation just outside the secure room with desktop shortcuts to reboot the VMs via locked-down scripts. A distinct workstation makes it relatively easy to secure, and means that users only reboot the app when it is really dead, because they have to get off their backsides to walk to the machine. And you can use the script to gather an audit trail of who requested the restart and when (you already know the where).
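As an illustration of what such a locked-down shortcut could run, here is a minimal sketch assuming a VMware environment with Python and pyVmomi installed on the kiosk workstation; the vCenter host, service account, VM name, and log file are placeholders, and a real deployment would pull credentials from a vault and grant the account reset rights on that one VM only.

    import getpass
    import ssl
    from datetime import datetime, timezone

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    VCENTER = "vcenter.example.com"    # placeholder vCenter host
    TARGET_VM = "lookup-app-01"        # placeholder: the crash-prone VM
    AUDIT_LOG = "restart-audit.log"    # placeholder audit-trail file

    def audit(message: str) -> None:
        # Record who requested the restart and when; the "where" is the kiosk itself.
        stamp = datetime.now(timezone.utc).isoformat()
        with open(AUDIT_LOG, "a") as fh:
            fh.write(f"{stamp} {getpass.getuser()} {message}\n")

    def main() -> None:
        # Lab-only shortcut: skip certificate checks; use a proper sslContext in production.
        ctx = ssl._create_unverified_context()
        si = SmartConnect(host=VCENTER, user="svc_restart", pwd="***", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            vm = next((v for v in view.view if v.name == TARGET_VM), None)
            if vm is None:
                audit(f"FAILED: VM {TARGET_VM} not found")
                return
            vm.ResetVM_Task()   # hard reset, the equivalent of a power cycle
            audit(f"reset requested for {TARGET_VM}")
        finally:
            Disconnect(si)

    if __name__ == "__main__":
        main()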

mkilpatric

The thing to understand is that they are most likely making that decision as a managerial reaction to an outage. If the proper planning had been in place, they could have had a backup DHCP/DNS server on a physical box just for an emergency situation. Most companies strive to virtualize to save space, to cut cost, or to deploy a new technology (not really new anymore) for their infrastructure. There will always be exceptions to virtualization, and it is always going to be important to plan from the beginning for the major scenarios, capacity sizing, migration strategy, and post-implementation administration. I have been consulting on these topics for years now, and my bottom line is that anyone can deploy virtualization, but the implementations that work will be those that deploy an infrastructure solution to meet operational needs, VMs or not. As to the primary topic, it's most important to break an environment up into sections (network, storage, systems/infrastructure, application systems, databases, mail systems) and then look at each layer's opportunity to virtualize or stay as is. I won't put too many details out here, but analyze before you just P2V something.

Neon Samurai

Stick all the secondary DCs you want in VMs, but when you're starting from a cold boot, you want the primary DC up and running with no waiting for the VM host OS to boot.

tom.marsh

Mainly because there are no resources up for that DC to grant you access to--so it doesn't really matter that there's a DC available--so what? With all other hosts virtualized, what are you logging into (besides the DC itself?) In a 100% virtual environment (or 100% virtual minus your DC, anyway) a physical DC is a feel-good-safety-blanket rather than a necessity.
