In many of my blog posts, one may get the impression that my primary goal is a very clear handoff from application owners to infrastructure administrators. While there should be a smooth transition between these two groups, one hand needs to know what the other is doing. If systems that don't need to be backed up are being backed up anyway, the result can be significant waste in licensing, bandwidth, storage, and precious time during backup windows. There are no hard-and-fast criteria that will tell every organization what does or does not need to be backed up, and there is an entirely separate discussion about what tools and processes make up backups today, or, as it is more appropriately called, data protection.
As a general starting point, I’ve collected a few criteria to determine if a system does not need to be backed up. This is only a springboard of ideas for you to consider against your requirements; each situation is unique, as we all know. Here are a few situations where a system may not need to be backed up:
- Transporting code and configuration
- Parallel working systems
- Development or test systems
- User data managed centrally
- Application data managed centrally
For some applications, a rebuild will always be quicker than a restore. In today's world of automated deployments, it may simply be faster to re-create a system than to perform a full recovery. A new build has the comforting sense of being a clean installation, whereas, depending on the failure, a recovery may carry system consistency risks. In my own experience, I had a situation where the only thing unique about one system (among a number of similar ones) was an entry in an .INI file. In that case, it may be better to keep the .INI file in a central location than to spend the time and effort backing up an entire operating system.
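The central-.INI idea above can be sketched in a few lines. This is a minimal sketch, not a definitive implementation: the central path, the `[hostname]` section layout, and the setting names are all hypothetical examples, standing in for wherever your build process keeps per-host configuration.

```python
"""Sketch: keep the one host-specific setting central so the rebuilt
OS image stays generic. All paths and names here are hypothetical."""
import configparser
import socket

# Hypothetical central store: one [hostname] section per system.
CENTRAL_INI = r"\\fileserver\configs\hosts.ini"  # example path, not real

def host_settings(central_ini_path: str, hostname: str) -> dict:
    """Look up the settings recorded centrally for one host."""
    parser = configparser.ConfigParser()
    parser.read(central_ini_path)  # read() silently skips missing files
    if hostname not in parser:
        return {}
    return dict(parser[hostname])

def write_local_ini(settings: dict, local_path: str) -> None:
    """Drop the central settings onto the freshly rebuilt system."""
    parser = configparser.ConfigParser()
    parser["app"] = settings
    with open(local_path, "w") as handle:
        parser.write(handle)

# Usage at rebuild time (hostname picks which settings to pull):
# write_local_ini(host_settings(CENTRAL_INI, socket.gethostname()), "app.ini")
```

With this in place, the backup target shrinks from an entire operating system to one small central file that is already protected elsewhere.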
If an application runs in a pool where many systems are configured identically, each individual system may not need to be backed up. A good example is a pool of web servers serving exactly the same content. Compensating controls such as central log management and central code repositories can make backing up every system in the pool unnecessary.
If you provide test systems to developers or application owners, do those systems need to be in the backup rotation? This may be a situation where an out-of-band data protection strategy can be used. For virtual machines, a scheduled snapshot once a week may be an adequate level of protection (be sure to script the removal of the snapshot as well). And if the developers and application owners keep their source code in a code repository, that is another reason the system may not need to be backed up.
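The "scripted removal" piece of the weekly-snapshot idea can be sketched as a small selection routine. This shows only the rotation logic; the actual create and delete calls would go through your hypervisor's tooling (PowerCLI, the vSphere API, or similar), and the function and snapshot names here are illustrative assumptions.

```python
"""Sketch of weekly-snapshot rotation: pick which snapshots are old
enough to delete so they never linger and bloat the datastore."""
from datetime import datetime, timedelta

def snapshots_to_remove(snapshots: dict, now: datetime,
                        keep_days: int = 7) -> list:
    """Given {snapshot_name: creation_time}, return the names older
    than keep_days; the cleanup script would delete these via the
    hypervisor API."""
    cutoff = now - timedelta(days=keep_days)
    return sorted(name for name, created in snapshots.items()
                  if created < cutoff)
```

Run from the same scheduled task that creates the new snapshot, this keeps the out-of-band protection from quietly consuming storage.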
If a solid document management strategy exists for user data locations such as My Documents, user data can be protected centrally across all systems. If a system functions only as a user interface (such as a Windows Terminal Server farm) and a redirected profile places all of a user's data centrally, the terminal servers themselves may not need to be backed up.
As with user data, if an application's relevant data lives on a centralized database server, it may be quicker and more comforting to rebuild the system than to perform a recovery. Each application behaves differently, so consider what needs to be protected beyond the data itself.
These are just a few situations I have come across over the years that can translate into dollars saved, less crowded backup windows, and less wasted storage.
What are some criteria you have for systems that do not require data protection? Share your comments below.
Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.