One IT pro's predictions for virtualization in 2012

Virtualization continues to increase our potential as infrastructure administrators. In this post, Rickatron makes some predictions for virtualization in 2012.

I don’t know about you, but virtualization has been on an exponential curve for me and my career. I was first exposed to virtualization in 1999, with VMware Workstation. From there, I’ve worked with almost every VMware product and expanded my virtualization prowess to Hyper-V and other technologies. Virtualization really took off for me in 2007, when ESX 3 came into my responsibility set. Since then, life hasn’t been the same.

Looking forward to 2012, I think we’ll see a number of things happen in virtualization. Primarily, I think the leadership gap between VMware and Microsoft will narrow with the new features coming in Windows Server 8 and Hyper-V 3. Make no mistake, VMware will continue to push the envelope and drive innovation in the industry, but it will also function as a target. To quote many IT pros with more experience in this game than me: long-term bets against Microsoft are not a good idea.

Should the gap between Microsoft and VMware lessen, my next prediction is set up nicely: the specific technologies in play will become less important in favor of sophisticated management. The ability to fully support a virtualization environment will be a newfound priority as we go forward. Oddly, this somewhat restates one of the original benefits of virtualization: abstracting hardware from operating systems. Full management (application-to-metal, for virtualized environments) will come to light in 2012, I predict. In the end, it’s not really about which specific technologies are used to get our jobs done, but which applications are in place to satisfy our stakeholders. The framework to get it all connected is the key.

Lastly, I think that the goal of being 100% virtual will take a back seat to reality. I’m convinced that we will always have “bricks in the datacenter” that we can’t virtualize. Sure, 100% virtualization would be nice, but I don’t think it will be attainable for everyone, and we shouldn’t stress ourselves trying to get there.

What do you see for 2012 in virtualization? Share your comments below.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

8 comments
SmartyParts

My crystal ball... New ideas and products for backing up data are making old-school tape less and less financially sound. Using virtualization to place backups offsite will eliminate tape backups for most companies in the 5-10 year time frame, and 2012 will be the turning point for this. Using virtualization to manage backups gives point-in-time recovery for files and allows for geographic separation without having to shuffle tapes around. Tapes will fall back to something like yearly backups or other less frequent intervals and will eventually go away.

Storage virtualization will lessen the vendor lock-in we have today and simply make storage a commodity that we provide based on performance instead of hard cabling. We can group high-speed SAN storage from multiple vendors, group NAS-type storage from multiple sources, and group cloud storage. When a new project needs storage, it gets allocated by performance needs, and the virtual layer decides where it actually lives. I'm still new to the concept, but I certainly see myself getting my mind wrapped around it in 2012. My Nerdly-Arts senses are tingling for the 2012 year.

hydroment

Not all, but a great deal of the discussion in these forums concerns larger companies with dozens or hundreds of computers and users. I see virtualization moving to the smaller shops with only a few or a dozen required or desired machines. These shops don't have the budget to upgrade several machines to current standards, to hire an IT guy, or to build the expertise themselves to maintain their infrastructure. Earlier last year I set up a print shop for a couple of brothers in the sign-making business. They were several years outdated in hardware and software; clients would send them files they couldn't even open to view, let alone manipulate. What we did was install a custom-built computer capable of serving 3 or 4 virtual desktops, as well as a couple of other supplementary VMs. We utilized the existing 3 older machines as dumb terminals, leaving the workload on the server. The older machines were still well capable of handling background tasks such as print, network storage, email, web, and FTP. By purchasing only one computer with an OS and reusing existing OS licenses along with Linux and open source software, the upgrade was much more affordable. The existing desktops were virtualized in the background while normal day-to-day operations still occurred, and the virtualized desktops on the new hardware ran 2 to 3 times faster than on the older hardware. The backup is centralized and redundant. They can log in from their smartphones to check mail or retrieve files while on location, and IT support can now be handled from anywhere. Long story short, this is what I see to come. Farther down the road, I see Windows being steered in this direction: a Type 1 hypervisor loading the virtualized installation of Windows right off the setup disk.

sysxadmin

I have been at companies where infrastructure was implemented in a 'DIY' fashion, and I had a negative mindset toward VMware. It was only when I went to another company and helped implement the NEW infrastructure correctly from the ground up that this changed. VMware is GREAT if set up properly; they have detailed documentation and best practices for how to do it. The problem is when someone thinks they are more intelligent and tries to make themselves irreplaceable. They create a nightmare for someone else, basically a hacked-up mess. I cleaned up a fiasco of Linux distros with custom-compiled packages; why would anyone put something in place that CANNOT be updated without breaking? People need to get out of the mindset that they cannot be gotten rid of because they implemented the infrastructure at a company. No one is irreplaceable; I have seen people with 100 times more skills get let go. No company wants a one-man show, and it is a job to eat, not to live by. I take the time to read the white papers and best practices and evaluate the product before purchasing. Try before you buy, because not all cutters are the same. In the end, upper management, the owners, or whoever will find out if something was put in place by the seat of someone's pants. IT is also changing; if you have a welded-in, non-flexible solution, it will bring about your own demise.

tom.marsh

For example, high-volume environments probably won't be able to virtualize their load-balancing anytime soon; the bottlenecking at the network layer would mostly negate the benefits they get from load-balancing in the first place. But most "servers" will be virtualized, except in the absolute largest environments where individual OS instances exceed the limits of current VM technology, for example >1 TB RAM or more than 32 vCPUs per guest. Exceptions? The only ones I can think of are appliances not offered virtually (fewer and fewer), the capacity of VM tech per instance, and superstition (i.e., "having to have" a physical Windows DC lying around somewhere...).

bkindle

I was always told that it's not a best practice to rely solely on a virtual primary DC, and that it was always good to have a physical DC to complement the virtual DC for redundancy, both physically and virtually. Am I wrong?

tom.marsh

It depends largely on how well your virtual infrastructure is constructed. If you're in an environment where you've dealt with your single points of failure as they pertain to your VM solution, a physical DC is really optional, based on the admin's preference. Of course, if you have an extra box with a Windows Server license handy anyway, there certainly isn't any harm in having a physical DC.

If your VM environment is properly configured and outfitted for true high availability, your physical DC is really just a feel-good, because you'll array your virtual DCs on different hosts such that you never end up with ZERO DCs running in a failure scenario, with the exception of a sitewide power failure, a fire, or the loss of your entire SAN. However, in those scenarios your physical DC won't help you anyway, since it won't have power or will have been melted by the heat from the fire. In the SAN-loss scenario, your physical DC is still of minimal utility, because the resources it grants you access to aren't available either.

If you have a crappy VM environment (i.e., only one host, no high availability, and/or no "real" shared storage), then a physical DC remains a good idea, since you're far more likely to suffer a catastrophic outage of your environment. But "keep a physical DC running" hasn't been a universal, required best practice since the advent of high availability and automated failover solutions.
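To make the "array your DCs on different hosts" part concrete: on vSphere, for instance, that separation can be expressed as a DRS anti-affinity rule (XenServer and Hyper-V have their own equivalents). Below is only a rough pyVmomi sketch of that idea; the vCenter address, credentials, cluster name, and VM names are all placeholders, not anything from this article.

```python
# Rough illustration only: add a DRS anti-affinity rule so two virtual
# domain controllers are never scheduled onto the same host.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the inventory with a container view and return the first object
    # whose name matches.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

cluster = find_by_name(vim.ClusterComputeResource, "Prod-Cluster")          # placeholder
dc_vms = [find_by_name(vim.VirtualMachine, n) for n in ("DC01", "DC02")]     # placeholders

# Anti-affinity rule: DRS keeps these guests on different hosts, so a single
# host failure can never take out every domain controller at once.
rule = vim.cluster.AntiAffinityRuleSpec(name="separate-DCs", enabled=True, vm=dc_vms)
spec = vim.cluster.ConfigSpecEx(rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```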

dswope79

And that is exactly how I have my DCs structured across three XenServer hosts with HA enabled; there is no need to keep a physical DC in my situation. I also agree that while we may want 100% virtualization, it isn't always realistic. In the 5 years I have been working with virtualization technologies, I have never run a 100% virtual environment; it's more like 95%, with a few physicals sitting around.

bkindle

Excellent reply, thank you for the explanation. I'm still in the trenches at places that don't want to do things the right way and prefer the crappy way since it's cheaper. Oh well, pay me now or pay me later!
