Today’s fast-paced business environment demands high-availability systems, minimal downtime, and 24/7 access and technical support. Recovery time from a hardware or software failure needs to be short. “In order to recover in that short period of time with very large files and databases, you have to either mirror or shadow your data,” explained Gartner analyst Donna Scott. (TechRepublic is an independent subsidiary of Gartner.)
If you advise clients on their disaster planning efforts, they must understand that data mirroring is a central component of such a plan. In this article, we will address:
- What is data mirroring?
- Why is it so expensive?
- How to overcome client objections to the cost of data mirroring.
- Why data mirroring should be part of an overall redundancy plan.
How data mirroring works
Newton’s Telecom Dictionary defines “mirroring” as “a fault tolerance in which a backup data device maintains data identical to that on the primary device and can replace the primary device if it fails.” Mirroring is a synchronous replication process, meaning that when information is written to the primary disk, it is also simultaneously written to a secondary disk so that you won’t lose any completed transactions in the event of disk failure, Scott said. (Shadowing, on the other hand, is an asynchronous process—the data is duplicated, but not at the same time, and usually with some sort of lag time built into the data transfer, she added.)
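The distinction Scott draws can be sketched in a few lines of Python. This is a toy model, not a real storage driver; the class and function names are illustrative. The key point is where the acknowledgment happens: mirroring commits to both disks before acknowledging, while shadowing acknowledges after the primary write and queues the copy for later.

```python
class Disk:
    """Toy disk: an append-only list of committed writes."""
    def __init__(self):
        self.blocks = []

    def write(self, data):
        self.blocks.append(data)

def mirrored_write(primary, secondary, data):
    """Synchronous mirroring: acknowledge only after BOTH disks have
    committed the write, so no completed transaction can exist on the
    primary alone."""
    primary.write(data)
    secondary.write(data)   # happens before we acknowledge
    return "ack"

def shadowed_write(primary, shadow_queue, data):
    """Asynchronous shadowing: acknowledge after the primary write;
    the copy sits in a queue and reaches the shadow with some lag."""
    primary.write(data)
    shadow_queue.append(data)  # applied to the shadow by a later job
    return "ack"
```

After a mirrored write, the two disks are identical by construction; after a shadowed write, the primary is ahead until the queue drains, which is exactly the window in which a failure can lose transactions.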
The main benefits of mirroring your Web site are redundancy and failover, both of which are important if you’re doing a high volume of transactions on your Web site, said Charles Hankle, manager of consulting services at Pittsburgh-based Liekar Strategic Solutions.
There are multiple ways to mirror a site, ranging from disk clustering to server farms. Hankle noted that regardless of the technology you use, high availability is essential, so you need to have a system that you can either access immediately or within a short period of time after a disaster.
Disk mirroring = RAID 1
Disk mirroring is a form of Redundant Array of Inexpensive Disks (RAID), Level 1. Data is duplicated by writing it to two or more drives, providing redundancy and allowing recovery upon disk failure. Writes tend to be slower (since each block is actually written twice or more), but reads can be faster, because any drive in the mirror can serve them.
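A minimal sketch of that behavior, with a hypothetical `Raid1Array` class: every write goes to all member drives, so losing any single drive leaves a complete copy behind. Real RAID 1 operates on disk blocks in the controller or OS; this is only an illustration of the redundancy property.

```python
class Raid1Array:
    """Toy RAID 1 mirror: every write is duplicated across all drives."""
    def __init__(self, n_drives=2):
        self.drives = [dict() for _ in range(n_drives)]

    def write(self, block, data):
        # Slower writes: the same block is written once per drive.
        for drive in self.drives:
            drive[block] = data

    def read(self, block):
        # Any surviving drive can serve the read (real arrays use this
        # to spread reads across drives for speed).
        for drive in self.drives:
            if block in drive:
                return drive[block]
        raise IOError("block lost on all drives")

    def fail_drive(self, i):
        # Simulate the total loss of one drive.
        self.drives[i] = {}
```

After `fail_drive(0)`, reads still succeed from the surviving mirror, which is the recovery-upon-failure property the RAID 1 definition describes.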
An expensive investment
The biggest objection to data mirroring is the cost, said Gartner’s Scott. The shorter the time period in which you want to recover data, the more it costs. “Most everybody wants to recover in short periods of time, but most businesses’ managers don’t want to pay for it,” said Scott.
But industries that process large numbers of financial transactions—such as financial services companies or trading companies—tend to spend money on mirroring because even a few minutes of downtime is costly, she said.
The high price tag does, however, frighten some companies away, said Hankle. “I see a lot of clients that look into data mirroring initially and then steer to another direction that is a step down from that, like having a backup machine or a server on standby that’s used for something else in the meantime,” he said.
These clients choose to accept a couple of hours of downtime rather than spend the money for high availability. Hankle said companies that do 50 percent or less of their business online tend not to want to spend money on this kind of Web redundancy.
Hankle added that another reason some companies choose not to mirror their data is that maintaining such redundancy requires special training and skills on the part of the database administrator. “These databases need to be maintained and monitored differently. Even the initial installation and setup needs to be done differently than traditional databases,” he said.
Additional disaster planning resources
- Sun StorEdge Network Data Replicator Overview
- Network Storage Solutions white paper: “Planning Your Backup Architecture”
Overcoming cost objections
Hankle said that choosing to spend the money on mirroring a Web site comes down to a client’s business analysis decision. Clients must ask themselves, “What is the risk potential of possible downtime vs. the cost of the hardware?”
This is particularly painful, he said, when a client declines to mirror their Web site only to experience the cost on the back end when the site is down for two days. “Not only do they lose money but also customers who won’t come back,” he said. “The big mistake is not doing the full cost analysis.”
Consulting firms should go through a business-impact assessment for their clients, said Gartner’s Scott. In addition to direct hard dollars lost, Scott said, companies must also look at other, more intangible costs, like compensatory damage and legal issues.
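The cost analysis Hankle and Scott describe can be made concrete with a rough exposure model. All figures and the churn assumption below are hypothetical, and real assessments would also price the intangibles Scott mentions (compensatory damages, legal exposure); this sketch only compares direct downtime losses plus lost customers against the price of a mirroring solution.

```python
def downtime_exposure(revenue_per_hour, expected_outage_hours,
                      customer_loss_fraction, annual_online_revenue):
    """Rough exposure estimate: revenue lost directly during the outage,
    plus annual revenue from customers who never come back."""
    direct_loss = revenue_per_hour * expected_outage_hours
    churn_loss = customer_loss_fraction * annual_online_revenue
    return direct_loss + churn_loss

# Hypothetical client: $10,000/hour online revenue, a two-day outage,
# and 2% of $5M annual online revenue walking away afterward.
exposure = downtime_exposure(10_000, 48, 0.02, 5_000_000)
mirroring_cost = 250_000  # assumed price of the mirroring solution
worth_it = exposure > mirroring_cost
```

With these assumed numbers the exposure ($580,000) exceeds the solution cost, which is the kind of “no-brainer” comparison Simon describes; with a smaller online business, the same arithmetic supports Hankle’s step-down clients instead.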
Nik Simon, product marketing manager of Fort Lauderdale, FL-based DataCore Software, said that for many consulting clients, it’s a question of pointing out the cost of downtime vs. the cost of a solution. “If it’s not clear to them from the numbers, either you’re not presenting it correctly or they don’t need it,” he said. “It should be so compelling that it’s a no-brainer sale for them.”
Disaster recovery and server architecture are among the many IT issues analysts and participants will discuss at Gartner's Spring Symposium/ITxpo, in Denver, May 7-10, 2001. Workshops on these topics include:
- Managing end-to-end IT services
- Enterprise server selection
- Sun Microsystems: Gartner’s view
- Surviving in a 24-hour world
- Windows 2000 and Intel servers
- Best practices in business continuity planning
Don’t wait for the unthinkable
One common mistake that Matthew Barnes, director of network development and security at Atlantic Beach, FL-based bgfx.com, sees clients make is only committing to their redundancy plans halfway. “You have to go all the way through the process,” said Barnes. “You may not decide to implement all the ideas or fixes, but you darn sure better know where your single points of failure are.”
If you know where your weaknesses are, you can systematically eliminate them as money and resources permit. But if you haven’t studied the issue, Barnes said, you may be throwing good money after bad.
Update your redundancy plan whenever you think you’ve found a weakness—and update it again once you’ve found the fix for that weakness, he explained. Of course, you’ll also need to update your plan whenever you add new equipment, software, people, or offices, or make other major changes within the company.