Moving applications to the cloud creates latency for applications accessing the data via a WAN link. Keith Townsend explores best practices to help you combat the impact of data gravity.
One of the most challenging technical hurdles in moving a core application to the cloud is the latency added by moving the data away from users or your data center. The challenge is commonly referred to as data gravity, a concept coined by Dave McCrory.
Data gravity describes the friction caused when transmitting data: the more data that needs to travel over distance, the more friction. In the example of moving a core application to the cloud, data gravity is created when non-cloud-hosted applications must access data hosted in a remote cloud data center. Here are some options to help you combat data gravity.
Data gravity is an apt term: the mass of data pulls compute toward it. At a high level, there's not much complexity in solving the data gravity problem. The simplest approach is to move the compute resources that need the data closer to the data itself.
In multi-data center designs, data center managers place workloads closest to the data that is commonly accessed, minimizing the impact of latency. An application hosted in the cloud has the same considerations. The simplest technical solution is to host workloads requiring cloud-based data in the same cloud service.
For various reasons, this isn't always an option. An example hurdle is legacy operating systems such as Solaris or HP-UX. These OS options are not available on typical cloud services. If a batch process such as payroll pulls batch data across a WAN link and performance suffers, it can mean the death of a cloud option.
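The batch-over-WAN scenario above can be sketched numerically. A rough back-of-the-envelope model (all figures below are hypothetical, not from the article) shows why per-request latency, rather than raw bandwidth, usually dominates for chatty batch workloads:

```python
# Illustrative sketch: estimating the "friction" of data gravity.
# All numbers are hypothetical assumptions for comparison only.

def transfer_time_seconds(round_trips, rtt_ms, data_mb, bandwidth_mbps):
    """Total time = per-request latency cost + raw transfer cost."""
    latency_cost = round_trips * (rtt_ms / 1000.0)   # seconds spent waiting on round trips
    transfer_cost = (data_mb * 8) / bandwidth_mbps   # seconds spent moving the bits
    return latency_cost + transfer_cost

# A hypothetical batch job issuing 10,000 queries over 500 MB of data,
# on a 1 Gbps link in both cases -- only the round-trip time differs.
lan = transfer_time_seconds(10_000, rtt_ms=0.5, data_mb=500, bandwidth_mbps=1000)
wan = transfer_time_seconds(10_000, rtt_ms=40, data_mb=500, bandwidth_mbps=1000)

print(f"LAN: {lan:.0f} s, WAN: {wan:.0f} s")  # prints "LAN: 9 s, WAN: 404 s"
```

Under these assumed numbers, the same bandwidth yields a roughly 45x slowdown once a 40 ms WAN round trip is multiplied across thousands of requests, which is why moving compute next to the data, rather than buying a faster circuit, is the usual fix.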
Another simple solution is to co-locate your non-cloud workloads in a Cloud Exchange. I had Switch executive vice president of development and evangelism, Mark Thiele, on a recent podcast, where he described the service.
Switch's Cloud Exchange is a value-add Switch offers to cloud providers and enterprise customers hosting equipment in its data centers. Switch provides the capability of running cross connects from customer equipment to cloud providers. The close proximity eliminates the need for dedicated circuits between a cloud provider and a customer.
With connectivity provided via a simple cross connect, customers are not locked into a multi-year or multi-month WAN service contract. Thiele explained how customers can switch cloud providers month-to-month in the search for the right provider.
On the Datanauts podcast, Thiele said that customers leveraging Switch's WAN services experience many of the same benefits as co-location. Customers establish a point of presence (POP) in the Switch data center and can take advantage of cross connects to Switch's cloud partners, with lower latency than direct cloud WAN connections, according to Thiele.
Another option is to purchase on-premises cloud services, such as EMC's Virtustream. While the resources aren't pooled, Virtustream offers to manage instances of enterprise applications such as SAP. Enterprises realize the benefits of outsourcing the management of both the infrastructure and the application with a Virtustream managed private cloud. Since the data stays local to the customer's data center, data gravity doesn't factor into application performance.
On-premises options do limit flexibility, though. Data center managers also have to account for facility maintenance as part of the agreement with on-premises private cloud options.
Will we see more Cloud Exchange solutions as demand for hybrid-cloud grows? Share your opinion in the comments section below.