Disaster Recovery

Setting up a remote hot site

Our disaster recovery plan has migrated from a remote backup service to a remote hot site. We decided against using hosted services because of the ongoing monthly costs. In this update I detail the discovery process involved in setting up a remote hot site. The goal is instant failover of e-mail and other services after a major disaster.

In my post last month on the pros and cons of remote data backup service, I wrote that our small business did not need continuous data protection, also known as high availability. I was wrong. After reviewing your comments on that post (thank you all very much) I met with the CEO to discuss our disaster recovery plan.

We discussed what will happen after the next earthquake strikes here in Southern California, and decided that at least two of our servers are mission-critical to the continuity of the business. The first is our Exchange Server and the second is our Flight Operations system (we are a small private charter jet business).

Remote Site Replication

One of the services provided by 4Service, the company that started me down this path of disaster recovery, is managed real-time replication for our critical servers to a secure remote data center. As I mentioned in the previous post, it is not inexpensive - about $3,300 a month. We decided to set up our own remote hot site.

In a comment from James, somewhere in London, I found the answer I needed: "For a full DR solution you need to be replicating at the application level to remote servers using SQL Log Shipping or for Exchange something like Neverfail or Double Take." We are evaluating these and this appliance from Data Domain.

Disaster Recovery is complex

This has turned into a major project. I can see how someone could easily specialize in advising businesses about disaster recovery. I have no desire to become a specialist. I just want to implement the best solution to provide my employer with instant failover to backup servers in a remote location after the earthquake.

I have yet to discover if the MX record redirection can be automated. What if I can't get to a computer on the Internet after the earthquake? I am also beginning to realize that we will probably need a Terminal Server or Citrix Server at the remote site to host desktop sessions for critical employees in flight control operations.

An ongoing education process

I'll probably have to upgrade our Internet connection. A standard T1 is just not going to cut it if we're going to be continually sending data to the remote site as the records in the databases are updated. And what about software licensing? Am I going to have to buy more Exchange Server and SQL Server licenses?
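To put numbers on that worry, here is a rough back-of-envelope sketch in Python. The change volumes and the 80% link-efficiency figure are assumptions for illustration, not measurements from our network:

```python
# Rough estimate: how long would it take to push a day's worth of
# database and mail changes to the remote site over a given uplink?
# All figures below are illustrative assumptions, not measurements.

def replication_hours(changed_gb: float, link_mbps: float,
                      efficiency: float = 0.8) -> float:
    """Hours needed to ship `changed_gb` of data over a `link_mbps` link,
    assuming only `efficiency` of the raw bandwidth is usable."""
    bits = changed_gb * 8 * 1000**3           # GB -> bits (decimal units)
    usable_bps = link_mbps * 1000**2 * efficiency
    return bits / usable_bps / 3600

# A T1 (1.544 Mbps) vs. a 10 Mbps circuit, for a hypothetical 5 GB/day of changes
t1 = replication_hours(5, 1.544)
fast = replication_hours(5, 10)
print(f"T1: {t1:.1f} h/day, 10 Mbps: {fast:.1f} h/day")
# -> T1: 9.0 h/day, 10 Mbps: 1.4 h/day
```

Even at a modest 5 GB of daily changes, a T1 would spend most of the workday saturated with replication traffic, which is why the circuit upgrade looks unavoidable.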

I'll keep you updated. The boss has already accepted that this is not an inexpensive project. He is expecting to buy at least two, maybe three new servers for the remote site. I'm thinking it will be at least $60,000 once we get all the components figured out. Being the Tech of all Trades makes for interesting work!

23 comments
danday3953

WOW! I'm still new to TechRepublic, and to IT for that matter. BUT, I've been talking with my school's IT guy to evaluate some virtualization ideas, and I'm a little stumped! There seems to be an endless sea of virtualization tech products that are "far superior" to their competition... Can someone explain VM in a nutshell?

drktech1

Don't forget the people factor. I don't know who you intend to have use this remote backup system in case of an earthquake, but don't forget the users and tech support. In an earthquake, if your servers are not accessible, there is a good chance that your workers will not have a building to work from. Chances are there is no internet access, phone line, or power in the area either. Who will be using the system? From where will they access the system? Using what equipment? For example, if you, as the main IT guy, have a family whose house has been condemned, are you going to work or provide for your family? I saw this situation with the Northridge earthquake, but it would apply to any large natural disaster. In the case that I know of, the building with the computers survived the earthquake, but the building that had the people did not. So a remote computer facility would not have helped, since there was no place for all the workers.

lesko

You can add a second MX record in DNS pointing somewhere else with a higher preference number (i.e., lower priority), so mail will automatically fail over there when your primary site fails.
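lesko's suggestion, sketched as it might appear in a BIND-style zone file (the hostnames and preference values are placeholders, not our real records). Remember that a lower MX preference number means higher priority, so the backup site gets the larger number:

```zone
; example.com zone -- MX records for the primary and hot-site mail servers
; (lower preference value = tried first)
example.com.    IN  MX  10  mail.example.com.         ; primary site
example.com.    IN  MX  20  mail.dr.example.com.      ; remote hot site
```

Because sending mail servers try MX hosts in preference order automatically, no manual DNS change is needed at failover time, which answers the question about automating the redirection.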

ron.e.wolf

I read Tech Republic looking for expert advice and here we have a total novice wondering about the very basics of multi-site availability? So what is going on here? Why waste our time with this amateur dabbling? Yes, you need an expert. And you need a realistic budget. Multi-site operations are not quick, easy, or cheap.

BBPellet

Go virtual. Yes, you can have multiple applications run on the same physical box; the VMs share resources. VMware ESX is the best option, and with VMotion you will have the ability to shift them from site to site with no user impact. I currently run a consulting firm that specializes in virtual environments and software-as-a-service environments. Everything you're trying to accomplish can be done in a VM and still handle 100,000s of emails and transactions. It all depends on how beefy the host servers are, or you can use server clusters that host the VMs if your current servers are underpowered.

tim

We are evaluating a couple of hot fail over products for our Exchange Server and a couple of other critical servers. Read the post: http://blogs.techrepublic.com.com/techofalltrades/?p=150 I'm beginning to wonder if creating a new hot site may be overkill for our small organization. Maybe I should look into hosting my applications in someone's co-location farm. What do you think?

tmalonemcse

Thanks Ron, I agree with your assessment. As a small business tech manager, setting up remote hot fail-over sites is not my area of expertise. I just thought it would be interesting to share my experience with others who might benefit from what I learn. I am employing the services of an outside network engineering firm, and even they are taxed a little bit on this one. Apparently it is rarer than I thought to set up hot fail-overs to remote locations. For those who are interested, we are adding dedicated high-speed data circuits at each location. It is not wise to share existing internet connections. In other words, the data replication over a VPN won't work. We have also decided on a new HP server at the remote site running DoubleTake and VMware. The server has two quad-core processors, 16GB of RAM, and 2TB of storage. It will be running three servers - Server 2008, Exchange 2007, and Terminal Services.

lesko

I thought this site was all about sharing ideas, mentoring rookie techs, and maybe learning new, more effective methods.

lesko

So is MS Virtual Server, if you just want to get your feet wet. Also look at PlateSpin for physical-to-virtual and virtual-to-physical conversion, etc.

mike

I also run a similar implementation at my shop. We did not want to step up to 64-bit architecture with Exchange 2007, so we hosted with Intermedia. Everything works like a charm: sync, BlackBerry, GoodLink, mail management, archiving too. Better than 99% uptime (that's what they tout). Inside we run a virtual environment and can move it in 10 minutes. That's a great failover!

me19562

Personally, I implemented a hot DR site using DoubleTake and it worked as expected. CA has the XOsoft software that does the same. The best will probably be the one with the best support and the cheapest price. All of them support more than MS Exchange; you can replicate SQL, SharePoint, Oracle, IIS, File and Print (including NAS), and more. Something you have to keep in mind is that if your network is a Windows AD network, you need a Domain Controller that is a Global Catalog and DNS server in the hot DR site, and after the failover you should move the FSMO roles to the DC in the DR site if the DR site will be operating for a long period of time due to the disaster. Something else to keep in mind is your DNS infrastructure. If you host your own external domain name, you'll need a slave DNS server either in the DR site or hosted somewhere else, so you don't lose your Internet presence. The MX record works flawlessly if DNS is working. Just add an MX record with a lower priority (higher preference number) pointing to the mail server in the DR site. Also, since the DR site will have Internet connectivity, you'll need a firewall to protect that infrastructure.

OZ_tony

We tried hosted Exchange and it was a disaster, as the selected service had very poor spam filtering compared to our XWALL with my MS Exchange setup. But as an alternative, MS has a Hosted Continuity Service for MS Exchange which few seem to know about. I spent ages looking for a supplier and found one in Canada (none in Australia). We have not implemented it for business reasons, but you may like to look at: http://www.epidirect.com/MS/EHS/HContinuity.htm

SKDTech

Please forgive me if this is a naive suggestion; I am new to the professional IT field, so there are a few things I don't have the knowledge base to make trustworthy suggestions on. Instead of buying multiple servers for a hot backup/fail-over site, from what I understand, if you are willing to lose some potential speed in favor of saving some money, virtualization could be the ticket. A backup site isn't meant to be permanent, only to keep everything running until the main site is back up and running. At least, that is my understanding of it.

me19562

I don't completely agree with "the data replication over a VPN won't work." Definitely the best practice is dedicated circuits, but whether replication over a VPN will work depends on different factors like your internet connection, distance to the DR site, etc. The DR implementation that I did used a VPN for the replication and worked without problems. Our connection to the cloud was 3 Mbps, the DR site was about 35 miles away, and we had a direct routing path (the DR site is at the NAP of the Americas and our connection goes through the NAP). BTW, if you haven't bought anything yet, go check out the replication software from Vizioncore: vReplicator - host-level image replication for virtualized environments. http://www.vizioncore.com/vReplicator.html
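me19562's point can be sanity-checked with simple arithmetic: whether VPN replication is viable mostly comes down to whether the link can absorb the daily change rate, plus how long the initial full copy takes. A rough Python sketch, where the 200 GB data set, 2 GB/day change rate, and 80% efficiency are all assumed numbers for illustration:

```python
# Feasibility check for replicating over a VPN instead of a dedicated circuit.
# All data-set sizes and the efficiency factor are illustrative assumptions.

def seed_days(total_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days needed for the initial full copy over the link."""
    bits = total_gb * 8 * 1000**3             # GB -> bits (decimal units)
    usable_bps = link_mbps * 1000**2 * efficiency
    return bits / usable_bps / 86400

def delta_fits(daily_change_gb: float, link_mbps: float,
               efficiency: float = 0.8) -> bool:
    """True if a day's worth of changes can be replicated in under a day."""
    bits = daily_change_gb * 8 * 1000**3
    usable_bps = link_mbps * 1000**2 * efficiency
    return bits / usable_bps < 86400

# A hypothetical 200 GB data set with ~2 GB of changes/day, over a 3 Mbps VPN
print(f"initial seed: {seed_days(200, 3):.1f} days")   # -> initial seed: 7.7 days
print(f"daily delta fits: {delta_fits(2, 3)}")         # -> daily delta fits: True
```

With these numbers the steady-state delta fits comfortably, but the week-long initial seed explains why many shops ship the first full copy on disk or do it over a dedicated circuit.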

ron.e.wolf

You're welcome, and I apologize if I came across too strong. I may have (likely) misunderstood the purpose, intent, and gestalt of Tech Republic. I thought it was a place where expert opinion was offered, but I can see how it is also quite useful as a place to share experience (expert or not) and learn from one another. Another way to satisfy your standby needs would be to utilize one of the cloud computing offerings (for instance AWS or one of the MS-specific clouds). The rapid scale-up and low standby costs of cloud computing lend themselves to hot/warm standby situations. I'd be happy to explain further.

The 'G-Man.'

I would love to have this kind of service available (and have highlighted the benefits), but the budget around here will not allow it. A dedicated high-speed circuit costs a fortune.

The 'G-Man.'

does not exist. Move in 10 mins including information stores?

The 'G-Man.'

With an added filter! Still I can see a use for it. Can you get longer (than 30 day) restore time?

tim

I have considered it and am still considering it. You are correct that a fail-over site is only meant to be temporary from a few hours to a few days. The big question is if the Terminal Server and the Exchange Server can exist on the same physical server. Will it support two dozen remote access sessions while still keeping up with the 100,000 pieces of email that we receive each day? And what about my spam filter and virus filter that I currently run on a separate gateway server? Ahiee...this is getting more and more complicated. Thanks for the excellent suggestion on using virtual servers.

S,David

Remember, you can strap together multiple T1 circuits to increase bandwidth, or go to a fractional T3. Those offerings can be used with a telco-provided failover to reroute your service to the hot site, with no DNS record changes needed. I don't know if cable can do that. They can't here, or they are not telling anyone if they can. Also, even if you replicate all your equipment, how will your people access it? If your main internet goes out from a natural disaster, it will most likely be out over a wide area. How far will your users have to travel before they can get remote access to the hot site? Or will they have to go to the hot site?

tmalonemcse

I just received an invite from LiveOffice.com and was reviewing their pricing. Most employee mailboxes would fit the $10.50-a-month model (under 1GB), but I wonder if they could accommodate some of my executives' mailboxes (5-7GB). It looks like they could handle ActiveSync, but I have just as many employees on the BlackBerry server and some on GoodLink. And yes, I agree it is time to look at alternatives to the T1. I just switched from 3 Mbps DSL to 10 Mbps cable at home. I asked the installer about business class, and he advised that it is twice the price for the same thing, just to get a few static IP addresses. Cost is not the issue; I'm concerned about service-level guarantees. Thanks for the suggestion.

mlansing

I am in SoCal too, and the prospect of "the big one" is daunting. Given that you are a small-to-medium-size organization, you may want to give hosted Exchange a serious look; that would solve half your problem, and the costs have come down a lot. Also, definitely look into alternatives to the venerable T1, which is getting long in the tooth -- business-class cable broadband is worth a look.
