By Darrell Riddle, senior director of product marketing for FalconStor.

Disaster recovery (DR), business continuity (BC) and data protection continue to be IT priorities and are among the top areas for IT spending in 2012, according to industry analysts. Despite tight budgets, companies of all sizes are increasing their spending on data protection solutions. Data deduplication and virtual server protection, which allow companies to shrink their backup windows, achieve faster restore and recovery times, and integrate seamlessly with existing backup applications, are the key drivers of this spending. Why is this such a focus for IT right now? Four reasons are driving the move toward better DR and BC.

#1 Data is growing at an incredible rate. Industry analysts estimate that nearly 75 percent of existing data is redundant, which makes the task of performing daily backups unnecessarily difficult. Managing tape and restoring data can be cumbersome, unreliable, time-consuming and costly. IT managers want to accelerate backup times, minimize failed backups, improve recovery time objectives (RTO), and reduce the time, cost and complexity of doing this work.
#2 The business case is clear. Information is more than doubling every year. This has been the case for quite some time now and shows no sign of slowing down. The price tag for managing and protecting that data is rising in tandem. In an effort to keep up with data growth, industry analysts say about 51 percent of today's organizational budgets are allocated to storage. Furthermore, companies are retaining information for longer periods to meet compliance requirements. Yet IT budgets are limited. IT executives agree that their budgets and staff resources are lower than they were a few years ago. How are these departments expected to do more with less? The answer: they must get more efficient.
#3 Companies demand data assurance. Today’s technology helps IT meet availability requirements while reducing the backup window. By combining backup optimization and deduplication, IT can scale capacity to the backup target disk pool, conduct remote replication of deduplicated data to protect against disaster and build disk-to-disk-to-tape backup architectures around deduplication.
#4 The expectations for performance haven't changed. RTO expectations and service level agreements (SLAs) aren't getting looser just because the data challenge is getting greater; in fact, most would say they are growing stricter. For some, downtime can be so expensive that virtually any amount is unacceptable. Disk-based backup improves performance and reliability while reducing the administrative time required for managing backup and DR.

Defining the ideal disaster recovery and business continuity solution

Once IT succeeds in making its case for better DR and BC, the next issue that arises is execution — how to do it and which components are essential to rein in data sprawl. Having less data to store significantly reduces storage capacity requirements and related costs. Through a capacity-optimized global deduplication repository, organizations can retain several months of backup data on a deduplication appliance. A shared global deduplication repository should span multiple application sources, environments and storage protocols.

Other elements to look for include simplified implementation and deployment and the ability to leverage existing resources and processes. IT should insist on BC/DR options that improve SLAs in a measurable way. When the solution contributes five to 15 percent to corporate revenue every year, the return on investment (ROI) can be rapid and compelling. The improved uptime and availability of such a deployment also delivers measurable benefits in terms of total cost of ownership.

As IT evaluates its options, the potential savings should stay top of mind. For example, easier infrastructure management will save administrative time. A single shared global repository will reduce secondary storage requirements and save on additional costs. A more efficient backup environment and use of storage will save on recovery time and lost productivity. All of these metrics matter.

How business continuity can reduce risk

Too often, IT reacts to data growth by deploying multiple instances of deduplication storage, creating silos of underutilized capacity and destroying efficiency. These stand-alone appliances cannot extend the deduplication index across multiple physical nodes, so data deduplication occurs separately within each appliance. The result is more duplicated data, which can increase capacity requirements and costs by 25 to 100 percent.
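The arithmetic behind those silo costs is easy to demonstrate. The sketch below is an illustration only, not any vendor's implementation: it uses toy chunk sizes and data, and deduplicates the same two backup streams first against a shared (global) index and then against per-appliance (siloed) indexes.

```python
import hashlib

def chunk(data, size=4):
    """Split a byte stream into fixed-size chunks (toy size for illustration)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def stored_bytes(streams, shared_index):
    """Bytes physically stored when streams share (or don't share) a dedup index."""
    total = 0
    index = set()
    for stream in streams:
        if not shared_index:
            index = set()  # each appliance keeps its own index: a silo
        for c in chunk(stream):
            h = hashlib.sha256(c).digest()
            if h not in index:  # only chunks never seen before consume capacity
                index.add(h)
                total += len(c)
    return total

# Two backup streams that are largely identical (e.g. successive nightly fulls):
night1 = b"ABCDABCDEFGH"
night2 = b"ABCDEFGHIJKL"
print(stored_bytes([night1, night2], shared_index=True))   # global: 12 bytes
print(stored_bytes([night1, night2], shared_index=False))  # siloed: 20 bytes
```

With a shared index, the chunks common to both streams are stored once; with siloed indexes, each appliance stores its own copy of those chunks, which is exactly the extra capacity the article describes.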

Getting from 99 percent to 100 percent uptime can be taxing. But to meet today's demanding RTOs and SLAs, 99 percent uptime is not nearly enough. Once the industry standard for server uptime, 99 percent translates into 87.6 hours of downtime per server, per year. That reality is too expensive, in too many ways, for today's businesses. Enterprise environments now require at least 99.9 percent availability, which limits downtime to 8.76 hours per year, and many demand 99.99 percent, less than an hour per year.
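Those downtime figures follow directly from the availability percentages; a quick back-of-the-envelope calculation shows how each extra "nine" shrinks the annual downtime budget by a factor of ten:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def downtime_hours(availability_pct):
    """Annual downtime implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows {downtime_hours(pct):.2f} hours of downtime/year")
```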

These are the challenges IT faces. Executives can cut extraneous spending while meeting increased performance expectations by making the case for DR/BC solutions that optimize deduplication, scalability and efficiency. By investing in complete, fully integrated appliances, IT can meet the challenges of performance, capacity and cost that come with a growing data reality.

Darrell Riddle, senior director of product marketing for FalconStor, is a software professional with more than 20 years of experience in product management, marketing, field enablement, product integration activities, engineering, quality assurance and IT management. Darrell has an extensive understanding of technical and business aspects, including lifecycle, pricing, licensing and go-to-market strategies of software products. Prior to joining FalconStor, Darrell worked at Symantec.