Traditional disaster recovery (DR) options range from off-site data replication to a duplicate private data center. Hybrid DR lies somewhere in between, using a public cloud to make DR easier to manage and quicker to implement.
The myth of the duplicate data center
No organization wants to leave the data center as a single point of failure. An organization is dependent on its IT, so the leaders make sure some kind of disaster recovery process is in place. Ideally, an organization's entire data center is duplicated: one live center and one backup. Here's the way the technical work of the DR process is expected to happen.
The organization runs two data centers: one live and one duplicate. All the hardware and networking in that second data center is identical to the first. The racks are all powered, the VLANs are all in place, and the server specs are identical. All the operating systems in that second data center are the same versions, at the same patch levels; all the business software is updated just as frequently as the first. No configuration tweaks have been forgotten, no accounts are missing, and no third-party integration is blocked. It is all tested regularly and works flawlessly. In the event of a disaster, a flick of the big BGP switch instantly reroutes all traffic to the duplicate data center.
No, I've never seen that either. A duplicate data center is outrageously expensive to create and practically impossible to maintain. Customers don't notice whether an organization has a DR site, so the work is often pushed behind the jobs that add obvious business value. The vague threat of revenue lost to an outage is not enough to motivate a DR program.
That means the duplicate data center -- assuming there is one -- doesn't work. The hardware is old and underpowered -- the standby systems are last year's decommissioned kit. The software is starved of the regular automated deployments and the sysadmin love they need to stay current. That one fateful day, when the cable company apologizes for cutting all the fibre, or a volcano pops up in the car park, or a chemical tanker drives through the lobby, the backup data center will fall over instead of stepping up.
Hybrid DR solutions
In the real world, companies realized decades ago that running a duplicate data center was going to be as much fun as stocking the toilet paper dispensers with dollar bills. Many alternatives are used, ranging from off-site data replication to complex DR programs.
For those who have made the jump to public cloud computing, this DR headache has largely cleared up, because the DR infrastructure work has been offloaded to their supplier. Organizations renting private hosting from a cloud supplier like Rackspace also get this benefit.
The rest of us -- the organizations with on-premises virtualized networks handling regular workloads -- still need off-site DR solutions and can use cloud vendors to provide them. One of the promises of hybrid computing is a variation on multi-site network duplication. On-site machines are imaged and backed up to public cloud storage. If a disaster happens, these images are spun up in the public cloud.
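The mechanics look roughly like this sketch. It assumes a VMware shop using the real ovftool and AWS CLI utilities; the vCenter host, VM name, and S3 bucket are hypothetical placeholders, and the commands need live vCenter and AWS credentials to run, so treat this as an illustration of the flow rather than a working script.

```shell
# Routine job: export a VM to a portable OVA image and copy it
# to public cloud storage. (vcenter.example.com, vm-web01, and
# the dr-images bucket are made-up names for illustration.)
ovftool vi://admin@vcenter.example.com/datacenter/vm/vm-web01 \
    /backups/vm-web01.ova
aws s3 cp /backups/vm-web01.ova s3://dr-images/vm-web01.ova

# Disaster day: turn the stored image into a cloud machine image,
# ready to launch in the public cloud.
aws ec2 import-image \
    --description "DR copy of vm-web01" \
    --disk-containers "Format=ova,UserBucket={S3Bucket=dr-images,S3Key=vm-web01.ova}"
```

The first half is the cheap, repeatable part that keeps the cloud copy current; the second half is only ever run when the primary site is gone.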
These types of hybrid DR solutions are getting easier to use. VMware offers the vCloud Hybrid Service, allowing its customers to use the VMware cloud for DR. HotLink, the company with the Hybrid Express product for cloud bursting, also provides DR Express; this allows VMware customers to use AWS for their DR program.
How ready is hybrid DR?
Duplicating the traditional data center was never going to work well. Even if a perfect copy could be made, business-critical data is not in one central location -- it's spread all around the network. Data lives on mobile phones, in branch offices, and in partner networks.
Better choices for DR have been needed for a long time. DR is like insurance -- an organization should be able to buy the solution that matches its risk tolerance. The hybrid DR options open up the market.
If hybrid DR is done right, there should be a current set of images from the organization's network ready to spin up. It should be fast and easy. Hybrid DR will probably be cheaper than running a second data center, though it's never going to be a bargain.
Nick Hardiman builds and maintains the infrastructure required to run Internet services. Nick deals with the lower layers of the Internet - the machines, networks, operating systems, and applications. Nick's job stops there, and he hands over to the designers and developers who build the top layer that customers use.