
It's 2013: Are you still virtualizing like it's 2006?

Rick Vanover is convinced that the practice of using virtualization technology in the data center needs to be modernized.

I don’t know about you, but I am frequently a creature of habit. I find a problem, solve it with something, and then stick with that solution for what feels like forever. In a way, the same goes for our virtualization practices.

When I first started virtualizing my server infrastructure in the data center, I thought it was a great way to solve the core problems I had at the time: space, power, cooling, and time to market for new systems. That was 2006, and I’m sure many of you became heroes in your organizations for introducing new ways of delivering data center infrastructure.

Now it's 2013. While the speeds and feeds have changed, along with newer editions of the products, have we changed the core of what we do? I don’t think so. Let’s take a look at the basics: sure, we’re using templates and newer operating systems. We may also be using newer deployment models, like the virtual appliance. This model is popular in VMware environments and is the direction going forward -- delivering services like vCenter as an appliance rather than as an application installed on Windows. I like that approach for core infrastructure services like vCenter Server.

But what about the bigger picture? Sure, the buzzwords today are the VMware Software-Defined Data Center, Software-Defined Storage, Software-Defined Networking… and the list goes on. But there is something here worth thinking about in detail.

Let’s look at the core virtualization practice. Whether or not we are going down next-generation (a simpler term than software-defined) storage or networking paths, we still have the core infrastructure to deal with. Are we using management techniques like VMware’s vCloud Director? Are we letting departments deploy their own virtual machines? Are we pooling resources into larger constructs and setting precise consumption, performance, and availability guidelines for them? Chances are we’re not doing this yet.

I’m convinced we are overdue to modernize our approach to delivering infrastructure. In the vCloud Director example, it’s important to note that you don’t have to be a large enterprise or service provider to use it. I’ve been evangelizing vCloud Director a bit recently in my professional virtualization practice, and I’ve realized that anyone who deals with these challenges is a candidate for it:

  • Departmental data centers

  • Departments with “hybrid” infrastructure and application people

  • Development teams that need temporary test systems

  • “Pockets of infrastructure” with unclear ownership

There are, of course, scores of additional use cases, but I’ve identified these because I’m sure you’ve dealt with some of them in your own environment. Modern infrastructure management techniques like vCloud Director can aid your practice today. The same goes for the Hyper-V realm: System Center Virtual Machine Manager and Microsoft's Cloud OS vision are shaping up to address this challenge as well.

Is your virtualization practice as up to date as the technology? I’m convinced many of us are missing out. Management techniques like vCloud Director are part of the story, but components such as networking and storage are also ripe for a fresh look today. What’s your take? Is it worth the training curve and the added complexity for administrators? Chime in below.



About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.
