Some cloud migrations are undertaken with less than prudent planning. Using a public cloud provider simply because current workloads exceed the capacity of on-premises or colocated hardware (dumping the workload on AWS because "CapEx is difficult") is roughly the enterprise equivalent of putting a bandage over a tumor.

Pushing data back and forth between public cloud and on-premises compute resources, or in certain circumstances between regions, can easily result in hefty transfer bills. Even cloud-native applications are subject to these issues, as the amount budgeted for cloud is often substantially less than what the cloud-hosted application actually costs to operate, according to Ben Niernberg, senior vice president of sales and services at MNJ Technologies.
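As a back-of-the-envelope illustration of how those transfer bills accumulate, the short Python sketch below estimates monthly egress charges. The per-gigabyte rate and the daily traffic volume are assumptions chosen for illustration, not any provider's published pricing.

```python
# Back-of-the-envelope egress estimate. The rate below is an assumed
# placeholder, not a quoted price; real tiered pricing varies by
# provider, region, and monthly volume.
EGRESS_RATE_PER_GB = 0.09  # assumed USD per GB transferred out

def monthly_egress_cost(gb_per_day: float, days: int = 30) -> float:
    """Estimate one month's data-transfer-out charges."""
    return gb_per_day * days * EGRESS_RATE_PER_GB

# Example: pulling 500 GB of analytics data out of the cloud each day.
print(f"${monthly_egress_cost(500):,.2f} per month")  # -> $1,350.00 per month
```

Even a modest per-gigabyte rate compounds quickly at realistic daily volumes, which is how pulling data back out of the cloud quietly blows the budget Niernberg describes.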

SEE: IT budgeting: How to do it right (free PDF) (TechRepublic)

“Every company we work with says that, because what you have is anywhere from 20- to 28-year-old developers building Layer 7 applications in the cloud who have no idea of the cost,” Niernberg said. “Companies are looking to pull massive amounts of data analytics out of all of these apps and workloads; they don’t think about what it takes to take all of that back out of the cloud, so that they can process it.”

Among AWS, Azure, and Google Cloud Platform, no one provider does a better or worse job with billing structures than the others, according to Niernberg. “I don’t think anyone’s purposely doing it that way. [Nobody] is calling you and saying, ‘You’ve got too many servers turned up’; that’s just not the way they function. It’s how you work with it, your ability to understand where your costs are.”
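Understanding where those costs are is something teams can script for themselves. The sketch below, using boto3 (the AWS SDK for Python), flags running EC2 instances whose recent average CPU suggests they were turned up and forgotten. The 10% threshold and 14-day window are arbitrary assumptions, and CloudWatch CPU alone is only a rough idleness signal.

```python
"""Flag EC2 instances whose average CPU suggests they may be idle.
A rough sketch; the threshold and lookback window are assumptions."""
import boto3
from datetime import datetime, timedelta, timezone

CPU_THRESHOLD = 10.0           # percent -- assumed cutoff for "possibly idle"
LOOKBACK = timedelta(days=14)  # assumed observation window

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=now - LOOKBACK,
                EndTime=now,
                Period=86400,  # one datapoint per day
                Statistics=["Average"],
            )
            points = stats["Datapoints"]
            if not points:
                continue
            avg_cpu = sum(p["Average"] for p in points) / len(points)
            if avg_cpu < CPU_THRESHOLD:
                print(f"{instance_id}: avg CPU {avg_cpu:.1f}% -- candidate for review")
```

Nobody from the provider will run this for you; the point is that the billing data and utilization metrics are already there for teams willing to look.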

Determining which applications should run on on-premises hardware, and which should run on public cloud platforms, is the first step in fixing your cloud. “We always start with ‘What’s the purpose of the workload?’,” Niernberg said.

Working with software-as-a-service (SaaS) vendors is a rather different prospect from simply running VMs in the cloud. “We have a client that is moving to Salesforce, and they have an inordinate amount of legacy information. The biggest issue they had was the cost of moving and housing all of that data,” Niernberg said. “They needed Salesforce to have a direct connect into a data center or colocation facility where they housed all their information. Then, it was how quickly those devices talk to each other, to get the pulls for the salespeople who needed the information.”
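When hybrid latency like that matters, it is worth measuring before committing to an architecture. Below is a minimal Python probe that times TCP handshakes to an endpoint; the hostname is a hypothetical placeholder, and a real assessment would use sustained throughput tests, not single connections.

```python
"""Rough TCP round-trip probe for gauging latency between sites.
A sketch only; the host is a hypothetical placeholder endpoint."""
import socket
import time

HOST = "example-colo-endpoint.internal"  # hypothetical colocation endpoint
PORT = 443
SAMPLES = 5

for i in range(SAMPLES):
    start = time.perf_counter()
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"sample {i + 1}: TCP connect in {elapsed_ms:.1f} ms")
    except OSError as exc:
        print(f"sample {i + 1}: failed ({exc})")
```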


