It’s amazing how things come full circle. In the early days of computing, mainframes offered centralized storage, partitions that isolated different workloads, and a number of other features that are now making their way back into mainstream environments. In the 1980s and 1990s, distributed client/server computing made its way into just about every company. As computers multiplied, servers proliferated as well: organizations bought new server hardware each time a new service was required.

While buying new server hardware for each solution keeps services separated and prevents conflicts, this “server creep” quickly gets expensive, both in initial purchase costs and in ongoing maintenance. The end result is frequently hardware that runs at a fraction of its capacity. Further, this scenario complicates related matters, including disaster recovery (DR). While it would be nice for a DR plan to cover every service in full, that kind of one-to-one backup strategy isn’t feasible or affordable for most organizations, and many choose instead to cover only critical services.

New storage technologies and server virtualization can improve your DR strategy

It’s well known that server virtualization products like VMware and Microsoft Virtual Server can help reduce server creep and make better use of server processing resources. But when coupled with newer, relatively inexpensive storage, these products become a formidable combination: one that keeps costs low, provides a robust environment your users can rely on, and can have an incredibly positive impact on your DR plan.

Consider this: Suppose you purchase an iSCSI-based storage device and build out the appropriate storage infrastructure, complete with full redundancy. Compared to Fibre Channel solutions of similar capacity, iSCSI is significantly less expensive and much easier to deploy. While iSCSI offers less raw bandwidth than Fibre Channel, its ability to use multiple data paths through MPIO (multipath I/O) can bring its throughput on par with Fibre Channel in small- to medium-size deployments, potentially making it a suitable target for hosting virtual machines. So, under this scenario, you could conceivably begin to virtualize your servers, running them directly from your SAN.
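
To make the multipath idea a bit more concrete, here is a minimal sketch, assuming a Linux host running the open-iscsi initiator and dm-multipath, of how you might script the discovery and login against an array over two separate network paths. The portal addresses and target IQN are placeholders I made up for illustration, and your array vendor's documentation should drive the real MPIO configuration.

# A minimal sketch (not vendor-specific): log in to one iSCSI target through
# two portals so that dm-multipath can combine the paths. The IP addresses
# and IQN below are illustrative placeholders, not values from this article.
import subprocess

PORTALS = ["192.168.10.50", "192.168.20.50"]            # one per NIC/subnet
TARGET_IQN = "iqn.2006-05.com.example:storage.vm-lun0"  # placeholder target name

def run(cmd):
    # Echo each command before running it; stop immediately on any failure.
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

for portal in PORTALS:
    # Discover targets advertised on this portal, then log in through it,
    # giving the host a separate data path to the same LUN.
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal])
    run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", portal, "--login"])

# dm-multipath should now present the duplicate block devices as a single
# multipath device; print the path map so the result can be verified.
run(["multipath", "-ll"])

With both paths logged in, the multipath device, rather than either individual disk, is what you would hand to your virtualization hosts as a datastore.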

One benefit of many iSCSI SANs is that they include, at no additional charge, software that replicates the contents of one array to a second array. That means you can easily, and relatively inexpensively, replicate all of your data to a different location, possibly in a different building. As long as it’s physically separate from the main data center, your data should be safe from most disasters. Now, assume that you are replicating the array on which you host virtual machines. If, in that second data center, you create a smaller server farm with your virtualization software of choice (generally VMware or Microsoft, although there are others out there, such as the open-source Xen), you could run your entire operation from this more limited set of servers. Why a limited set of servers? Your DR center doesn’t necessarily need to run at 100 percent capacity, although you could opt for one-to-one redundancy. I make this distinction only as a way to keep the cost of a DR center low enough that it’s easier to justify to upper management.

On the DR center servers, load your virtualization software and keep those systems available so that you can quickly attach your SAN-based virtual machines to the virtual hosts in the event of a disaster in the primary data center. Voila! Instant DR! (OK, maybe not instant.)
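
As an illustration of what “attach your SAN-based virtual machines” could look like in practice, here is a rough Python sketch, assuming VMware ESX hosts at the DR site and the replicated LUN already presented as a VMFS datastore. It simply walks the datastore for .vmx files and hands each one to the ESX service console’s vmware-cmd utility to register and power on. The datastore path is a placeholder, and Microsoft Virtual Server or Xen would need their own equivalent steps.

# Assumptions: the replicated LUN is visible to the ESX host as a VMFS
# datastore, and the vmware-cmd utility is available in the service console.
# The datastore name below is a placeholder for illustration only.
import glob
import os
import subprocess

DATASTORE = "/vmfs/volumes/replicated-vm-lun"

# Each virtual machine lives in its own folder with a .vmx configuration file.
for vmx in sorted(glob.glob(os.path.join(DATASTORE, "*", "*.vmx"))):
    # Register the VM with this DR host, then power it on.
    subprocess.run(["vmware-cmd", "-s", "register", vmx], check=True)
    subprocess.run(["vmware-cmd", vmx, "start"], check=True)
    print("Brought up " + vmx)

In a real failover you would typically also have to break or promote the replication relationship so the DR copy becomes writable; details like that are vendor-specific.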

I want to be clear that I have not yet implemented this scenario in my organization. I will be setting it up in my lab for full testing, and I’ll report back on exactly how I achieve the redundancy and what pitfalls I run into.