Server huggers — you can spot them from a mile away. They will make any excuse to not move workloads into virtual farms, even when the farms are designed and proven to run tier 1 workloads that include Oracle, SAP, and Exchange. And, when the workloads are successfully migrated, the application owner can be reluctant to acknowledge the success and power of the virtual platform.
This year I asked myself, "Am I a server hugger?"
Every few years, some of my peers and I explore the options for a home virtualization lab. The challenges remain the same every time: How do you meet power, CPU, memory, and storage requirements at a price that your significant other won't veto?
Some of my home lab setups have been ambitious. The larger ones have included a three-node vSphere cluster with full high availability (HA) enabled, a full Virtual Desktop Infrastructure (VDI) deployment, and a complete OpenStack deployment. The truth, however, is that I only need this kind of power a couple of times a year, while the hardware required to run it is a significant investment. If presented with this use case in the enterprise, I'd recommend a cloud-based solution, which allows more efficient use of cash and provides the elasticity needed to support the large lab scenarios.
If you are a data center engineer looking for server lab options, here are several that are simple to deploy. These options give you all the power needed to run the most complex labs imaginable while keeping you under budget.
Bare metal power
We'll address the elephant in the room first: vSphere. The obvious question is: How do you test labs that require the actual hypervisor as the target? The answer is to find a cloud provider that rents bare-metal hardware.
One of the more home-lab-friendly options is baremetalcloud; the team will rent you physical servers by the hour. This service is ideal for those looking to run complex vSphere labs. One scenario is traditional nested virtualization labs, for which baremetalcloud has AutoLab templates ready to deploy.
Bare metal is also the right choice if you want to run large labs across physical hosts, or even an SAP HANA database that requires a large amount of physical memory.
AWS, Google, and OpenStack
If you are looking just to deploy OS-level labs, public cloud providers are a great solution — you can go to AWS, Google, or Rackspace directly. There is a learning curve associated with using these services, and it's especially steep if you want to run traditional applications in your lab.
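To give a sense of how little ceremony an OS-level cloud lab involves, here is a minimal sketch using the AWS CLI. The AMI ID, key pair name, and instance type below are placeholders I chose for illustration, not values from this article; the script assembles and prints the launch command rather than executing it, so it is safe to adapt.

```shell
# Hedged sketch: spinning up a single throwaway lab VM on AWS.
# All identifiers below are placeholders -- substitute your own.
AMI_ID="ami-xxxxxxxx"        # placeholder: the image for your lab OS
INSTANCE_TYPE="t2.medium"    # assumed size; small enough for a hobby budget
KEY_NAME="home-lab-key"      # placeholder: an SSH key pair you created

# Print the command instead of running it, so nothing is billed by accident.
echo aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type "$INSTANCE_TYPE" \
  --key-name "$KEY_NAME" \
  --count 1
```

When the lab is done, terminating the instance stops the meter — the elasticity argument made above in a single command pair.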
Another option is to go through a cloud broker such as Ravello Systems, which offers a nested hypervisor solution that's vSphere compatible. The solution lets you run vSphere virtual machine (VM) images directly on several public cloud providers without making changes to the VMs. Unlike with Amazon's VMware import utility, you can upload a VMDK file to Ravello Systems and run it directly on the AWS cloud. Based on the profile of the uploaded system, Ravello configures all of the networking needed to run the application in the public cloud.
It's time for data center engineers to start practicing what they preach by adopting cloud technologies in their home labs.
If you are a networking professional, you get a pass for now. Until network virtualization and software-defined networking (SDN) become the default level at which enterprise engineers work, there will still be a need for physical routers and switches in a home lab.
Keith Townsend is a technology management consultant with more than 15 years of experience designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University.