I think we are finally beyond the conversation about the utility of public cloud in the enterprise. Public cloud providers have demonstrated that their platforms are just as reliable as, if not more reliable than, on-premises infrastructure. Security isn't the concern it once was, either; Google Compute Engine networking, for example, takes a zero-trust approach to instance security. It makes sense to build new applications for cloud infrastructure. The next logical question is which existing applications are candidates for migration and which are not.
A wide variety of applications are hosted in the cloud, so it's difficult to say, in general, that one type of application is suited to the cloud while another isn't. A better starting point is to weigh the differences between cloud infrastructure and your on-premises solution across a handful of key considerations.
1. Virtualized infrastructure
A fundamental question to ask is how a workload gets to the cloud in the first place. Taking Google Compute Engine as an example, Google allows the import of RAW disk images and VirtualBox images, and third-party tools and services are available to assist with the migration. Another option is to select a pre-built OS image, reinstall the application from binaries, and replicate the data set.
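As a rough sketch of the RAW-image import path described above (the bucket, file, and image names are hypothetical placeholders, and exact flags may differ across Cloud SDK versions):

```shell
# Compute Engine expects the raw disk to be named disk.raw inside a
# gzipped tarball (sparse-aware tar keeps the archive small).
tar --format=oldgnu -Sczf my-app-image.tar.gz disk.raw

# Upload the archive to a Cloud Storage bucket you control.
gsutil cp my-app-image.tar.gz gs://my-migration-bucket/

# Register it as a custom image, then boot an instance from it.
gcloud compute images create my-app-image \
    --source-uri gs://my-migration-bucket/my-app-image.tar.gz
gcloud compute instances create my-app-vm --image my-app-image
```

This covers only the mechanics of getting the disk into the cloud; the considerations below determine whether the workload belongs there at all.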
From a practical perspective, we are talking about basic x86 virtualization; the migration is essentially a virtual-to-virtual (V2V) move. So, from a pure technology standpoint, if your application already runs on a virtual infrastructure, it's a good candidate for the cloud. The physical mechanics of moving the application, however, are a minor consideration compared to the ones that follow.
2. Application availability
Applications refactored for the cloud take into account the availability of the underlying infrastructure: cloud-native application design assumes various components of the infrastructure will fail. It's important to consider an application's availability requirements before migrating it. If the application requires five 9s (99.999%) of uptime, and you've built a redundant infrastructure to support that requirement, you should think carefully before migrating it to the cloud. Google Compute Engine's SLA is 99.95%; meeting the original uptime target would require refactoring the application for the less reliable infrastructure before migration.
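The uptime gap is easy to quantify. A minimal sketch (the SLA figures come from the discussion above; treating redundant deployments as failing independently is a simplification):

```python
# Translate availability percentages into allowed downtime per year,
# and estimate composite availability of redundant deployments.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability):
    """Allowed downtime per year for a given availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

def redundant(availability, copies=2):
    """Availability of N independent copies (all must fail at once)."""
    return 1 - (1 - availability) ** copies

print(f"{downtime_minutes(0.99999):.1f} min/yr at five 9s")   # ~5.3 min
print(f"{downtime_minutes(0.9995):.1f} min/yr at 99.95%")     # ~262.8 min
print(f"{redundant(0.9995):.6%} with two independent deployments")
```

The arithmetic shows why refactoring matters: a single 99.95% deployment allows roughly fifty times more downtime than five 9s, while an application rearchitected to run across two independent deployments can exceed the original target.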
3. Software licensing
Software licensing is a potential showstopper. It took some software providers years to come to terms with the ubiquity of virtualized environments, and some vendors still have punitive licensing models for them.
Cloud infrastructures are going through a similar transition. Some specialty OSs or software packages may require additional licensing for the cloud, and some solutions may not offer a cloud option at all. The database layer is a good place to start: check whether the database engine you're using is licensed for your target cloud provider. Another option is to move to the managed database service offered by the provider.
4. Vendor support
Similar to licensing, software vendor support is another critical, non-technical consideration. Many high-performance or business-critical applications have strict requirements around the supported infrastructure. For example, some in-memory analytics platforms place strict requirements on the memory and CPU combinations, as well as the storage configurations, they support. While the solution may technically work in the public cloud, the software provider may offer limited support, or none at all, when issues occur.
It's worth looking at integrated solutions from cloud providers and traditional vendors. One example is SAP on AWS: AWS recently announced a 4TB scale-up instance for HANA with full support.
5. Data locality
A final consideration is regulatory. Engineers need to understand the regulations that govern their data before making the move to the cloud. Most cloud providers have had their physical infrastructure and processes certified against common compliance regimes, such as the Payment Card Industry (PCI) standards.
Still, data locality is sometimes a challenge. Some governments require that certain types of data remain within their borders. If data location is a consideration, don't stop at the validation of a primary workload's location. Failover is a consideration as well. If a provider has only a single region within your country, you've created a single point of failure due to regulatory requirements.
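As an illustration of that failover check, here is a minimal sketch; the region names and country codes are invented placeholders, not a real provider catalog:

```python
# Check whether a provider offers enough in-country regions for a
# compliant failover. The catalog below is a hypothetical example.
REGIONS = {
    "region-a": "DE",
    "region-b": "DE",
    "region-c": "US",
}

def compliant_failover(regions, country):
    """Return the in-country regions and whether failover is possible
    without data leaving the country (i.e., at least two regions)."""
    local = [name for name, cc in regions.items() if cc == country]
    return local, len(local) >= 2

print(compliant_failover(REGIONS, "DE"))  # two German regions: OK
print(compliant_failover(REGIONS, "US"))  # one US region: a single
                                          # point of failure
```

In the single-region case, the workload may be compliant on day one yet have nowhere in-country to fail over to, which is exactly the trap described above.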
So, which applications are candidates for the cloud, and which should remain within the walls of your data center? As a management consultant, my answer is: "it depends." From a pure technology perspective, most applications that run in a virtualized environment will run in the cloud, but you must also weigh uptime, licensing, support, and regulatory requirements.
Keith Townsend is a technology management consultant with more than 15 years of experience designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University.