As part of my operational readiness preparation, I want to make sure my Internet service is recoverable. If any component is damaged or lost, I want to be able to replace it.
This is not quite the same thing as resilience: resilience covers the ability to cope with damage, while recovery is about repairing that damage. If a critical component breaks, I want the customer-facing service to limp along as best it can. When that component is fixed, I want my service to return to normal instantly.
Since I committed my service to the cloud, I don’t need to worry about recovering from physical problems such as hardware or network failure. There is no data extraction from a failed disk, no reboot time, and no engineer on site at a disturbing hourly rate. I only have to worry about recovering the software: my server tuning configuration, my web applications, and most importantly, my business data.
So my ability to recover rests on regular backups and restore tests. Different levels of the technical stack, from the virtual hardware to the business applications, require different approaches to backup and recovery.
- Application-level — Copy the product part, such as database content
- OS-level — Copy the whole enterprise application – machine, configuration, product and report
- VM-level — Copy the whole image
Here, I look at application-level copying; later, I’ll look at the traditional OS-level and newer virtual-machine-level approaches.
Application-level backup
Most people focus on content. Anyone who has lost an important document knows the importance of this kind of backup.
The content of my fresh Drupal install is tiny. A dump of the database is about a megabyte. The site content at /var/www/html/sites/ is not even 100K. Since many of these files are text, they will compress to the size of a pea.
I can make a remote backup of my Drupal application’s content with these commands.
This is the shorthand version. It misses out what the prompt looks like, any messages the OS displays, and what to do when things go wrong. You have to know a little about what you’re doing before following this procedure. It is also a simple one-command-per-line, manual, one-box approach. If you know a lot about what you’re doing, you will see a better way to do it.
- Open a CLI on the EC2 machine.
- Create a compressed archive file:
mkdir backup
sudo cp -rp /var/www/html/sites/ backup/
mysqldump -u drupal7user -p drupal7db > backup/drupal7db-dump.sql
(password is written in /var/www/html/sites/default/settings.php)
tar cf backup.tar backup
gzip backup.tar
sudo rm -rf backup
(sudo because cp -p preserved root ownership on some of the copied files)
- Close the CLI. I now have a backup file stored locally on the EC2 machine. I’d better get it off there.
- Open a CLI on my local machine.
- Move the file from my remote EC2 machine to my local machine:
pscp ec2-user@ec2-1-2-3-4.eu-west-1.compute.amazonaws.com:backup.tar.gz .
ssh ec2-user@ec2-1-2-3-4.eu-west-1.compute.amazonaws.com rm backup.tar.gz
- Close the CLI.
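If you know what you’re doing, the one-command-per-line steps above collapse into a single script. Here is a minimal sketch as a shell function — `backup_site` is a hypothetical name, the mysqldump line is commented out because it needs the live database (credentials are in sites/default/settings.php, as noted above), and the archive name gains a timestamp so old backups aren’t overwritten:

```shell
# Sketch: the manual backup steps as one function, run on the EC2 machine.
backup_site() {
  site_dir=$1                       # directory to back up, e.g. /var/www/html/sites
  out_dir=$2                        # where the finished archive lands
  stamp=$(date +%Y%m%d-%H%M%S)      # timestamp so backups don't overwrite each other
  work=$(mktemp -d)                 # scratch directory, removed at the end

  mkdir "$work/backup"
  cp -rp "$site_dir" "$work/backup/"

  # Database dump -- commented out because it needs a live MySQL server.
  # mysqldump -u drupal7user -p drupal7db > "$work/backup/drupal7db-dump.sql"

  # tar and gzip in one step (tar's z flag) instead of two commands
  tar czf "$out_dir/backup-$stamp.tar.gz" -C "$work" backup
  rm -rf "$work"
  echo "$out_dir/backup-$stamp.tar.gz"  # print the archive path for the caller
}
```

Running it as root (or via sudo) avoids the permission wrinkles that the manual steps hit when copying and deleting root-owned site files.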
If this backup file contains sensitive information, I can’t leave it on a stealable USB stick. I could password-protect this file, copy it to a data vault, or maybe encrypt my entire disk.
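One option for the password-protection step is symmetric encryption with the openssl tool. A sketch as two functions — the hard-coded passphrase in the test below is a stand-in; in practice it would come from a prompt or a key file, never the command line:

```shell
# Encrypt a backup archive with AES-256-CBC, deriving the key from a
# passphrase with PBKDF2. Produces <file>.enc alongside the original.
encrypt_backup() {
  openssl enc -aes-256-cbc -pbkdf2 -salt -in "$1" -out "$1.enc" -pass "pass:$2"
}

# Reverse the encryption, writing the cleartext back without the .enc suffix.
decrypt_backup() {
  openssl enc -d -aes-256-cbc -pbkdf2 -in "$1" -out "${1%.enc}" -pass "pass:$2"
}
```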
Application-level recovery
If I have a disaster, I can recover the information like this.
- Stop anyone using the website while I overwrite files.
- Open a web browser.
- Browse to my Drupal site.
- Log in as the system administrator.
- Put my Drupal site into maintenance mode. Configuration | DEVELOPMENT | Maintenance mode | Put site into maintenance mode | Save configuration.
- Copy the file.
- Open a CLI on my local machine.
- Copy the file from my local machine to my remote EC2 machine:
pscp backup.tar.gz ec2-user@ec2-1-2-3-4.eu-west-1.compute.amazonaws.com:
- Close the CLI.
- Replace content.
- Open a CLI on the EC2 machine.
- Replace files.
gunzip backup.tar.gz
tar xf backup.tar
mysql -u drupal7user -p drupal7db < backup/drupal7db-dump.sql
sudo cp -rp backup/sites/ /var/www/html/
rm -rf backup backup.tar
- Close the CLI.
- Let customers back in.
- Remove my Drupal site from maintenance mode.
- Close the web browser.
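The replace-files steps can be scripted the same way as the backup. A minimal sketch — `restore_site` is a hypothetical name, the mysql restore line is commented out because it needs the live database, and it assumes the archive was made by the backup procedure above, so it unpacks to backup/sites/:

```shell
# Sketch: the restore steps as one function, run on the EC2 machine
# (with enough privilege to write into the document root).
restore_site() {
  archive=$1                        # the backup.tar.gz made earlier
  dest=$2                           # document root, e.g. /var/www/html
  work=$(mktemp -d)                 # scratch directory, removed at the end

  tar xzf "$archive" -C "$work"     # gunzip and untar in one step

  # Database restore -- commented out because it needs a live MySQL server.
  # mysql -u drupal7user -p drupal7db < "$work/backup/drupal7db-dump.sql"

  cp -rp "$work/backup/sites" "$dest/"
  rm -rf "$work"
}
```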