I am not a fan of virtual desktop infrastructure (VDI) for replacing the full desktop experience for all enterprise users; I believe the technology is a stopgap until applications are web based and light enough to run on any device. I've seen too many VDI deployments fail because of the wrong use case or overpromising, resulting in a poor user experience. Still, with the number of legacy apps running in enterprises, the death of VDI may be a pipe dream.
I am a fan of VDI when the use case matches the technology. One of the most useful tools in my toolbox is the administration virtual machine (VM) I keep with a cloud service.
I run all kinds of labs as part of my job investigating new technologies. I may test a new hybrid cloud service or install software that turns a cluster of x86 hardware into a SAN. I'm finding that these services run as a cloud-based solution, or I can run them on VMs in an infrastructure as a service (IaaS) provider.
The distributed nature of today's solutions frees up resources in my home lab; it also lets me test solutions that scale orders of magnitude beyond my home setup. With this new computing model, though, come nuanced challenges.
Bandwidth to the cloud
Lack of bandwidth is one of the challenges. Not every consumer of cloud services has 50 Mbps or more of bandwidth to the internet — many cloud consumers still chug along on DSL connectivity. Slower connections can make consuming IaaS untenable. For example, if you need to upload a 4 GB OS image to your IaaS provider, it could take hours over a slow connection, if it's possible at all. Getting install media up to the cloud can also be a challenge.
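A back-of-envelope calculation makes the pain concrete. The 1 Mbps uplink figure below is an assumption for illustration — typical DSL upload speeds are in that ballpark:

```shell
# Rough upload-time estimate for a 4 GB image over a DSL uplink.
# uplink_mbps=1 is an illustrative assumption; substitute your own speed.
size_gb=4
uplink_mbps=1
# 4 GB is roughly 4 * 8 * 1024 megabits; divide by link speed for seconds
seconds=$(( size_gb * 8 * 1024 / uplink_mbps ))
hours=$(( seconds / 3600 ))
echo "~${hours} hours"   # about 9 hours at 1 Mbps
```

At that rate, a single image upload consumes most of a working day — before accounting for protocol overhead or dropped connections.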
To alleviate the bandwidth issue, I leverage an administration VM, or what we'd commonly refer to as a jump server, to administer all of my cloud-based infrastructure. I connect to the jump server with management tools such as SSH or a secure VNC application, and from there I administer the systems residing in the cloud. That lets me leverage the service provider's bandwidth to store and upload installation media and server images. The challenge is getting the jump server to your service provider.
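The hop pattern above can be captured in a few lines of OpenSSH client configuration; the host names and IP address here are placeholders for illustration:

```
# ~/.ssh/config — names and addresses are placeholders
Host jump
    HostName 203.0.113.10   # jump server's public address at the provider
    User admin

Host lab-*
    ProxyJump jump          # reach internal lab VMs through the jump server
```

With that in place, a single command like `ssh lab-db01` opens a session to an internal VM by way of the jump server, with no manual two-step hop.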
There are two options for deploying your jump server. The first option is to upload an existing image file to your IaaS provider. While it is convenient to use an image you are accustomed to using, you may run into the same bandwidth challenge you are trying to solve. If you have an extremely slow connection, uploading the jump server image may be too painful. Another option is to select a pre-built image from your IaaS provider's service catalog, and customize it to your needs.
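The second option usually comes down to one provider CLI call plus a customization script. This is a sketch using the AWS CLI; the AMI ID, key name, and bootstrap script are hypothetical placeholders, and other IaaS providers have equivalent commands:

```shell
# Launch a jump server from the provider's catalog instead of uploading
# an image. All identifiers below are placeholders for illustration.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.small \
    --key-name my-admin-key \
    --user-data file://bootstrap.sh   # script that installs your admin tools
```

The `--user-data` script is where the customization happens — package installs, user accounts, and the management tools you'd otherwise have baked into your own image.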
What to watch: Licensing and usage overages
- Licensing can be a challenge if you are using Windows. Windows client OS licenses generally don't cover use at IaaS providers, while most Windows Server licenses do; check with your Microsoft licensing specialist to confirm. Amazon offers a Desktop as a Service (DaaS) solution that may save you time and the licensing headache.
- Usage overages: You don't have to keep your jump server running at all times; I power mine on as needed. I host a Windows Server VM in a Ravello Systems service that costs about $0.50/hour, and I only use it for several hours a month. If I left it running for an entire month, it could cost as much as $360 a month.
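The arithmetic behind the always-on versus as-needed gap is simple; the six hours of monthly use below is an assumption based on my own pattern:

```shell
# Monthly cost: always-on vs. powered on as needed.
# Rate is $0.50/hour, tracked in cents to stay in integer math.
rate_cents=50
always_on=$(( rate_cents * 24 * 30 / 100 ))   # 720 hours/month
as_needed=$(( rate_cents * 6 / 100 ))         # assumed ~6 hours of actual use
echo "always-on: \$${always_on}/month, as-needed: \$${as_needed}/month"
```

That is the difference between a $360 bill and pocket change — reason enough to script the power-off step into your workflow.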
While tailored to my home lab use case, the solution isn't limited to lab use; I've deployed similar solutions for accessing the tools I need to support clients while on vacation. Since it's cloud-based, I could use almost any machine that has internet access.
I'd love to hear your thoughts. What hacks have you come up with to make remote system administration easier? Add your tips to the comments section.
Keith Townsend is a technology management consultant with more than 15 years of related experience designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University.