In many situations, moving to public cloud storage is a good way to gain abstraction and off-site storage, with use cases such as data protection, content delivery, and large file exchange. Simple cost savings, however, usually aren’t the leading point of that discussion.
The challenge in determining whether cloud storage is less expensive is first matching an internal price model to that of the public cloud storage providers. When I was an infrastructure manager, one of the biggest challenges in sorting out different costs was equating one price model to another. I used to have an expression for it: “Because this store doesn’t sell oranges, I need to make this apple look like an orange.”
The fact is that the comparison involves more than the acquisition cost of a storage system set against one month’s worth of cloud storage. There are a number of ways to tackle this problem; the first is to establish some metrics inside your own data centers. I have heard of a “cost per floor tile” facility metric being used. Another would be an operational cost per server or storage system per month. These costs then need to be extended into a monthly model, taking into account operational spikes around deployment or decommissioning. This is one example of what I mean by making an apple look like an orange; a rough sketch of that math follows.
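To make the apple-to-orange conversion concrete, here is a minimal sketch in Python that rolls an in-house array into a monthly, per-Terabyte figure. Every number in it is a hypothetical placeholder for illustration; substitute your own acquisition, facility, and operational costs.

```python
# Minimal sketch (not a formal TCO model): normalize in-house storage costs
# into a monthly per-TB figure so they can sit next to a cloud per-month price.
# All figures are hypothetical placeholders.

acquisition_cost = 40000.0        # storage array purchase price
useful_life_months = 36           # depreciation window
usable_capacity_tb = 20.0         # capacity left after RAID/protection overhead

facility_cost_per_month = 300.0   # "cost per floor tile" style facility charge
ops_cost_per_month = 500.0        # admin time, maintenance, power and cooling
deploy_decom_spike = 6000.0       # one-time deployment and decommissioning effort

monthly_cost = (
    acquisition_cost / useful_life_months
    + facility_cost_per_month
    + ops_cost_per_month
    + deploy_decom_spike / useful_life_months
)
cost_per_tb_month = monthly_cost / usable_capacity_tb

print(f"In-house storage: ${monthly_cost:,.2f} per month, "
      f"${cost_per_tb_month:,.2f} per TB per month")
```

With a comparable per-TB-per-month number in hand, the cloud price sheets stop looking like a different species of fruit.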
That being said, the takeaway is having a way of making your costs for in-house storage “look like” those of a cloud storage model. Once that is established, the comparison becomes clearer. There are still intangibles such as bandwidth, loss of control, and performance that may be harder to assign a cost to, so keep that in mind.
I still see data protection as the leading use case for public cloud storage. Here I like to apply the 1-2-3 approach: divide all data protection requirements into three categories, with the first category being the most critical. Ideally, that first category is communication and authentication services (email and Active Directory, for example); the second contains critical applications; and the third is everything else that may not really be needed in a true disaster. A simple sketch of the categorization follows.
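The categories themselves can be captured in something as simple as a lookup table. The sketch below is only illustrative; the example systems listed in categories two and three are hypothetical stand-ins, not a prescription.

```python
# Minimal sketch of the 1-2-3 approach: map each recovery category to the
# kinds of systems it protects. Example entries are hypothetical.

protection_tiers = {
    1: ("Communication and authentication", ["Email", "Active Directory"]),
    2: ("Critical applications", ["Line-of-business apps", "Primary databases"]),
    3: ("Everything else", ["Archives", "Test and dev systems"]),
}

for tier, (description, examples) in sorted(protection_tiers.items()):
    print(f"Category {tier}: {description} -> {', '.join(examples)}")
```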
Putting a cost on the absolute abstraction that cloud storage provides is difficult, but if you need it, it becomes a priceless recovery option. Take a 2 Terabyte example: it would cost $155.65 per month on Amazon S3 Reduced Redundancy Storage in the US-West and US-East regions. At that point, you may as well treat yourself to the standard storage option, which runs $194.56 per month for the same 2 Terabytes. Over three years, that is over $7,000 to keep 2 Terabytes in the public storage cloud. Most on-premises storage systems would cost less for the raw capacity, but in the disaster recovery use case the abstraction cloud storage brings is hard to put a price on; and how much power, cooling, and operational expense would be avoided (says the orange apple)?
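The arithmetic behind those numbers is simple enough to check. The sketch below uses the flat per-GB rates implied by the quoted monthly totals (roughly $0.076 per GB-month for Reduced Redundancy and $0.095 for standard); S3 pricing is tiered and changes over time, so treat this as arithmetic rather than a quote.

```python
# Back-of-the-envelope check of the 2 TB S3 figures, using the flat per-GB
# rates implied by the quoted monthly totals. Confirm current pricing before
# planning around these numbers.

capacity_gb = 2 * 1024             # 2 Terabytes expressed in GB
rrs_rate = 0.076                   # $/GB-month implied by $155.65 for 2 TB
standard_rate = 0.095              # $/GB-month implied by $194.56 for 2 TB
months = 36                        # three-year horizon

rrs_monthly = capacity_gb * rrs_rate             # ~$155.65
standard_monthly = capacity_gb * standard_rate   # ~$194.56
standard_three_year = standard_monthly * months  # ~$7,004

print(f"Reduced Redundancy: ${rrs_monthly:,.2f} per month")
print(f"Standard:           ${standard_monthly:,.2f} per month")
print(f"Standard, 3 years:  ${standard_three_year:,.2f}")
```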
With all of that being said, the hard part is still becoming familiar with the process. Details such as key management for encryption, knowing how to deploy onto a totally new infrastructure, and setting expectations for how a recovery will perform are critical, especially if Glacier storage is considered as part of the recovery plan; it has a lower cost but slower retrieval times, among other differences.
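Setting those recovery expectations is largely bandwidth math. A quick sketch, assuming a hypothetical 100 Mbps sustained download link, shows why restoring 2 Terabytes from the cloud is measured in days rather than minutes, before any Glacier retrieval delay is even added.

```python
# Rough recovery-time estimate: how long to pull data back over a given link.
# The bandwidth and efficiency figures are assumptions; use your own, and
# remember Glacier adds retrieval wait time on top of the transfer itself.

data_tb = 2.0
link_mbps = 100.0        # sustained download bandwidth (hypothetical)
efficiency = 0.8         # fudge factor for protocol overhead and contention

data_megabits = data_tb * 1024 * 1024 * 8
hours = data_megabits / (link_mbps * efficiency) / 3600

print(f"Restoring {data_tb} TB at {link_mbps} Mbps "
      f"(~{efficiency:.0%} efficient): about {hours:.1f} hours")
```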
While the 2 Terabyte example is small, its impact is huge. For a small business, that may be an ideal footprint for cloud storage used for disaster recovery. Does that make sense for your organization, or for part of your infrastructure? Share your comments below.