Each time we bring up cloud technologies, there are scores of discussions and concerns about moving data or other IT assets out of an organization's datacenter. If the data never leaves your datacenter, does that seem like a better approach? Even at the recent TechRepublic event, “Changing the Face of IT”, Editor in Chief Jason Hiner stated that cloud technology is one of the top trends in IT, yet IT professionals seem more comfortable with a private cloud. Ironically, the private cloud looks a lot like a traditional datacenter. So what is the difference between a traditional SAN or storage administration practice and a private cloud storage solution?
There are hybrid solutions, such as the Nasuni filer or Nirvanix hNode, that tier storage between your on-premises storage products and public clouds. But those still rely on some amount of public cloud storage. Now there is an alternative that keeps all of the data inside your datacenter while still delivering the scalability and ease of access associated with public cloud storage technologies.
We can now enjoy pay-per-use models for private cloud storage. One example is the Hitachi Cloud Service for File Tiering from Hitachi Data Systems (HDS), which was previewed to me recently at HDS Geek Day. It may appeal to IT administrators because all of the storage equipment resides in your own datacenter.
Cloud Service for File Tiering moves data off of primary network attached storage (NAS) resources to private cloud resources within your network and firewall. You might expect an upfront cost to put the initial equipment footprint in place. The HDS solution addresses this by not requiring an upfront purchase of the private cloud storage component; it remains owned by HDS. Instead, the service is billed based on the data that is offloaded from the customer’s primary storage resources. Make no mistake, HDS is a storage company. This solution is built on existing products such as the Hitachi Content Platform (HCP), a content-based storage product that can be accessed through the de facto cloud storage protocol, Representational State Transfer (REST).
Once the archive data is moved to the private cloud, primary storage consumption is reduced. The data isn't available as a drive letter or UNC path; instead, it lives in an object store and is checked in or retrieved through the storage management system.
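To make the check-in/retrieve model concrete, here is a minimal sketch of how object-store access differs from path-based file access. This is a toy illustration, not HCP's actual REST API: the class name, methods, and object keys are all hypothetical.

```python
import hashlib


class ObjectStore:
    """Toy object store: data is checked in under a key and later
    retrieved by that key -- no drive letters or UNC paths involved."""

    def __init__(self):
        self._objects = {}

    def check_in(self, key: str, data: bytes) -> str:
        """Store an object and return a content hash for integrity checks."""
        digest = hashlib.sha256(data).hexdigest()
        self._objects[key] = (data, digest)
        return digest

    def retrieve(self, key: str) -> bytes:
        """Fetch an object by key, verifying it still matches its hash."""
        data, digest = self._objects[key]
        if hashlib.sha256(data).hexdigest() != digest:
            raise ValueError("stored object failed integrity check")
        return data


# Check in an archived file, then retrieve it by key.
store = ObjectStore()
etag = store.check_in("archives/2011/report.doc", b"quarterly report")
print(store.retrieve("archives/2011/report.doc"))
```

In a real deployment, `check_in` and `retrieve` would be HTTP PUT and GET calls against the storage platform's REST endpoint, which is why this style of access scales so readily.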
Unstructured data is a natural fit for migration to the cloud, whether to a private or a public cloud storage solution. It can be anything from file servers that are out of control, to application archives that are not in database form, to SharePoint sites, and more. Unstructured data is the perfect private cloud candidate because it is a large collection that can logically be moved quite easily through automation. File servers, for example, are the best candidates: most data on file servers is rarely accessed, yet it is tough to get the buy-in to delete it.
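As a rough illustration of that selection policy (not the actual HDS tiering engine, which handles this transparently), a script could walk a file share and flag files that haven't been accessed in months as candidates to move off primary storage. The function name and threshold here are my own assumptions.

```python
import os
import time


def tiering_candidates(root, days=180):
    """Yield files under `root` not accessed in `days` days --
    candidates to tier off primary NAS storage."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it
```

Note that access times are only a heuristic: some volumes are mounted with atime updates disabled, so a production tiering policy would also weigh modification times, file type, and business rules.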
If the data does not leave your four walls, does that make pay-as-you-go models with cloud technologies more attractive? Share your comments below.
Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.