Virtualization

Use virtualization to fill niche needs


The company I work for uses virtual machines extensively.  I use virtualization software in some capacity for nearly every project I undertake.  Virtualization technologies have been around since the ‘60s, but their resurgence in the last several years has made them an easily accessible and indispensable tool for IT shops of all sizes.  And while I don’t get caught up in too much of the hype, I do believe that virtualization has many applicable niche uses.

Virtualization can refer to many things, but here I am referring to OS- and application-level simulation – OS virtualization in that it simulates an operating system environment without altering the host system, and application virtualization in that it encapsulates all of the core components needed to run an application within a virtual shell.

If you believe the hype coming from the main camps, companies like VMware, Microsoft, Citrix and Altiris will tell you there are not many systems that cannot be virtualized.  To a certain extent that’s true, but I have had one too many negative experiences hosting a production application on Citrix MetaFrame running from a virtual machine on VMware’s ESX Server.  Maybe I was asking too much anyway, but I’ll wait a while before attempting that scenario again.  It can be done, just not as reliably and smoothly as I prefer in a production environment.  The biggest issue with more complex virtual deployments is pinpointing the source of a problem when one arises.  Is the operating system, the application, the hardware or the virtual platform at fault?  It could be any combination.

Without question, the configuration of test environments is the biggest benefit most companies realize from platform virtualization.  With the products available at the prices offered (in some cases free), there is no justifiable reason for not having test servers configured before deploying or upgrading applications in a production environment.  A parallel test environment may not have been feasible for smaller companies before now because of the capital required, but that simply should not be used as an excuse today.
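The snapshot features that make virtual test servers so cheap to reuse can be driven from the command line.  A minimal sketch of a test cycle using VMware's `vmrun` tool (the tool and subcommands are real; the VM path and snapshot name are hypothetical):

```shell
# Snapshot-based test cycle for an application upgrade
# (assumes VMware Workstation/Server is installed; paths are illustrative)
VMX="/vmware/test-sql01/test-sql01.vmx"

vmrun -T ws snapshot "$VMX" pre-upgrade           # checkpoint the clean state
vmrun -T ws start "$VMX"                          # boot the test server
# ... install the upgrade and run your tests against the guest ...
vmrun -T ws stop "$VMX" soft                      # shut the guest down cleanly
vmrun -T ws revertToSnapshot "$VMX" pre-upgrade   # roll back for the next run
```

Reverting to the snapshot gives every test run an identical starting point, which is exactly what a physical test server struggles to provide.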

Using virtualization software for testing purposes is a no-brainer.  Where I’ve found the most tangible use is with smaller scale applications.  For instance, a hospital director forwarded a project to me last week for review.  His assumption was that we would spend roughly twelve thousand dollars on a new server to host a small SQL Server database accessed by fewer than twenty users.  But after reviewing the modest system requirements, I recommended using a virtual server instead.  The move will save more than ten thousand dollars and speed deployment considerably, since we will not have to wait for equipment to arrive.  It is a positive outcome for the end-users, the IT staff and the company’s bottom line.
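For a modest guest like this, the virtual hardware can be dialed down to match the workload.  A minimal sketch of what the VM's `.vmx` configuration might look like (every name and size here is an illustrative assumption, not the actual hospital deployment):

```ini
# Hypothetical .vmx sketch for a small departmental database guest
displayName = "sql-small01"
guestOS = "winnetstandard"
memsize = "1024"
numvcpus = "1"
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "sql-small01.vmdk"
ethernet0.present = "TRUE"
```

One virtual CPU and a gigabyte of RAM go a long way for a database serving twenty users, and unlike a physical purchase, the allocation can be raised later without new hardware.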

How many dedicated workstations do you have taking up valuable real estate in your data center?  Many systems deployed in a healthcare environment, for example, specify that a workstation or small server be used as a gateway or interface server.  Workstations serving this role rarely reach more than ten percent of their resource utilization.  Virtual machines are perfect for this function, and using them in this fashion has drastically reduced the number of individual workstations housed in our space-constrained data center.

Disaster recovery is another area driving a surge in virtualization use.  Hopefully, restoring enterprise servers to dissimilar hardware in remote DR locations will soon be a thing of the past.  Companies like VMware offer physical-to-virtual (P2V) migration tools that can convert your physical servers into virtual replicas.  These replicas can be stored off-site and brought online far more easily than with traditional recovery methods.
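VMware's P2V tooling is largely wizard-driven, but the core idea, turning a captured physical disk image into a virtual disk format, can be sketched with the open-source `qemu-img` utility (my substitution for illustration; it is not part of VMware's product):

```shell
# Convert a raw disk image captured from a physical server into VMDK format
# (file names are hypothetical; capture the image with your imaging tool of choice)
qemu-img convert -f raw -O vmdk server01.img server01.vmdk

# Inspect the resulting virtual disk
qemu-img info server01.vmdk
```

The resulting virtual disk can be attached to a VM at the DR site, sidestepping the driver headaches of restoring onto dissimilar physical hardware.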

There are many more benefits to using virtualization software.  The market is maturing rapidly as bigger players such as Microsoft gobble up talented smaller companies such as Softricity.  The product offerings will continue to evolve and become more stable and robust.  Third-party management suites are already being released by companies attempting to get in on this growing market.

Chime in and let me know how you’re utilizing virtual software in your environment.

8 comments
Jaqui

Let's think back a bit, to the days when Windows 3.1.1 was first released. Every company that was using information technology in any way was running mainframes or minicomputers with dumb terminals. With Windows 3.1.1, the talk became how the PC would replace the big iron. It has not replaced big iron; there are still new big iron units being built and sold. But it has replaced big iron in most companies, and gotten IT into more locations than big iron could have.

Virtualisation is another form of stepping back to the style of networking that big iron represents. [The other new way is the THIN CLIENT workstation.] For production use, I say forget virtualisation: since the virtual system is still depending on the hardware of the host OS, you are not really gaining squat. For testing new software, it has a use. As a backup failover server, it may be useful. [If the host hardware fails at the same time as the production server, you are still screwed.] Save space in the server room through virtualisation? Sure, at the cost of performance and reliability.

12 grand for a server box with an SQL server on it? That is insane; it can be done in the 2 to 4 grand range. [PostgreSQL, an open source enterprise-class database engine, is FREE and far more reliable, with fewer exploits in the last couple of years than any other database engine (including MySQL). PostgreSQL has a pro-active security policy very similar to OpenBSD, the only OS with a 10-year record of just one exploit in an out-of-box production release. Both use the same open source license, the BSD license, a modified MIT license that makes it clear commercial use of the software is encouraged, not prohibited.] I can put an AMD Athlon AM2 dual core 64-bit system @ 2800MHz on the counter at $369.00 + taxes CDN:

* AMD Athlon 64-bit (AM2) Dual Core 2800+
* 80G SATA 7200rpm (8M) Hard Drive
* 512MB DDR-II PC4200 RAM
* FOXCONN K8M890M2MA-RS2 / 6-Channel Audio / LAN
* 16X DVD-RW +/- Dual Layer
* ASUS Nvidia 6200TC/64256/ 256MB PCI-E
* ATX Tower Case with 400W Power Supply
* Windows Keyboard & Wheel Mouse
* Multimedia Speaker

With a free OS and server software, you make a tidy profit at 2 grand for the system. For a 20-user database, that is more than enough hardware.

Editing to add: the shop selling the above system: http://www.vastechcomputer.com/

mkirton

Yeah, but while you rebuild your $369 SQL server when it crashes, mine will simply fail over to another host in my VM cluster, allowing me to repair the downed server without users losing any productivity. You are correct that it would be insane to put a single SQL server on a $12k box. You would want to run more VMs on that box.

LocoLobo

I've only seen Virtual PC at the local community colleges so far. As a teaching tool it seems pretty cool, but it does use up resources on the machine it runs on; everything runs a bit slower than it would on a high-performance machine. My question is about the hospital example. You will have to run your virtual server on some kind of hardware. Granted, you can load your virtual server on a workstation, but won't that cause a performance hit? In the next paragraph you talk about doing away with the workstations altogether. How does that work? I'm visualizing a high-performance machine (server) running several "virtual" threads on dumb terminals or something like it. Thanks

DanLM

OK, question. There are many levels of disaster recovery: 1) the place burns down; 2) the server has a serious hardware crash. I'm contemplating setting up a VM to run some in-house SQL DBs off the primary OS. Production. The virtual OS would be MS, which would contain Crystal Reports only (at this point, anyway). We would throttle down that MS OS because the only time it would be used is in a time of disaster, i.e., the production Crystal server just blew chunks. At that time it would become a production server until we could get back up and running. I guess what I'm asking is that you qualify further your problems using VMs in a production environment. What problems did you have? Dan

LocoLobo

About the hospital example in the article: I don't quite see it. First the author talks about not purchasing a server, then he talks about doing away with the workstations. I realize this is out of context a little, but I would like clarification on how he set up the hospital. What hardware and software did he use? What is the base OS on the "bare metal"? How is performance affected? I've never used VMs. Taking a SQL Server course at the local community college, we used Virtual PC. It is a good teaching tool, and if you back up your virtual file you can easily restore it in case of problems. Where I work, my boss turned down virtualization for now.

Bill Elmore

The physical workstations and servers are replaced/rebuilt as virtual machines running on the host ESX server farm... Hope that helps clear it up!

LocoLobo

that makes it easier to visualize. I'll look up the ESX Server, sounds cool. What do you replace the workstations with?

Bill Elmore

Think bigger. When I talk about replacing workstations and smaller-scale servers with virtual machines, I mean hosting them on an enterprise-class server with 32GB RAM, eight-way multiprocessing and a SAN for storage. VMware's ESX Server is perfect for this scenario and can easily support 20+ production VMs from one host. We currently have more than 15 ESX Servers and will continue to purchase more as needed. Virtual PC is better for simple testing purposes, but not for hosting production applications/databases.
