
Cloud lessons: Five tips for firms moving to AWS

Companies that rely on computing infrastructure provided by Amazon Web Services share how to get the most from it.


Firms are familiar with the upside of harnessing public cloud infrastructure: the ability to spin servers up and down without long-term costs.

But what strategies can companies adopt to get the most out of public cloud platforms such as Amazon Web Services (AWS)?

At the recent AWS Enterprise Summit in London, customers of the platform shared their tips on how they are changing the way they work to better exploit the cloud.

Think about new opportunities for sharing data

Companies across the world make processors based on ARM's chip designs, and the UK-based company regularly needs to share huge amounts of data with a large number of external firms.

One major shared dataset is 5TB in size and growing, according to Olly Stephens, lead architect for the engineering platform at ARM. That dataset is so large that ARM repeatedly experienced "multiple days of headache trying to get that data from our environment to our customer".

To work around this network bottleneck, ARM uploaded that dataset to Amazon's Simple Storage Service (S3). Customers can then download that data from AWS, with ARM controlling access using Amazon CloudFront with signed URLs.

"We can put a big fat pipe in between us and Amazon easily enough, so we can get the data into there," said Stephens.

"If the customer is still struggling to get that amount of data, he just has to bump his connection with Amazon up, which is much more compelling than trying to bump up a point-to-point connection with us.

"Basically [we're] piggybacking on someone else's backbone because it's better than ours."

Exploit variable pricing to save money

Testing designs for new chips requires considerable computing power. To meet this demand, ARM previously relied exclusively on its internal datacenters.

While running predictable workloads is generally cheaper to do in-house than via a cloud service provider, ARM has been able to lower its costs using AWS Spot Instance pricing.

Spot Instances let firms bid for spare compute capacity on Amazon's EC2 service. Rather than being fixed, the price of Spot Instances rises and falls with current demand. The one caveat is that Spot Instances are only suited to computing workloads that can tolerate periodic interruptions in service.
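As a rough illustration of the bidding model described here, a one-time Spot request might be submitted with boto3 as in the sketch below; the AMI ID, instance type, and maximum price are hypothetical.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Bid for a single one-time Spot Instance; the AMI ID, instance type,
# and maximum price here are all placeholders.
response = ec2.request_spot_instances(
    SpotPrice='0.50',                  # maximum bid, USD per hour
    InstanceCount=1,
    Type='one-time',                   # released once the job finishes
    LaunchSpecification={
        'ImageId': 'ami-12345678',     # image with the test harness installed
        'InstanceType': 'c4.8xlarge',
    },
)
print(response['SpotInstanceRequests'][0]['SpotInstanceRequestId'])
```

If the market price climbs above the bid, AWS can reclaim the instance, which is why a restartable overnight job like the endurance test Stephens describes suits the model.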

Stephens gives the example of using AWS Spot Instances to run an overnight endurance test of a CPU design, due to complete by 8am the next day.

"I may have been able to do it for less money [than running it internally] because I was able to exploit Spot pricing."

Those savings free up money that could be used to complete the test earlier, to return to the kitty for later use, or to run additional tests, according to Stephens.

"This conversation is the one where the lightbulb moment occurs and they finally grasp the potential of the dynamic model," he said, adding that "the folk who run are internal estate are so invested in making it fit for purpose" that they can initially be "somewhat blind to the opportunities of refactoring some of this work" to run in the cloud.

Try out different technologies with less risk

One of the oft-repeated selling points of public cloud infrastructure is access to compute and storage for testing out new tech, without the long-term commitment of buying servers.

Currently experimenting with this flexibility is the UK Met Office, a weather and climate forecasting service that collects more than 300 million observations from around the globe and uses them to generate over four million forecasts.

Met Office developers have been testing web apps using different storage technologies running on top of AWS.

"The biggest benefit in our use of AWS has been the speed of delivery at almost every stage of development," said James Tomkins, chief enterprise architect at the Met Office.

"From the outset, our developers were able to access environments and infrastructure that enabled them to fail fast and fail cheaply and innovate services in a way that wouldn't have been possible on premise.

"During development we were able to stand up like-for-like services, only with different storage backends.

"We were able to replay the same web code against both of those services, and profile the performance and cost characteristics of those services in order to make the most informed choice possible about what the correct storage backend was."

But make sure your testing teams are suitably equipped

However, to get the most out of such experiments, the team testing technologies should be relatively nimble: small enough not to get bogged down, but with enough power to take decisions.

That was the experience of the UK-based newspaper the Financial Times (FT) when building a data platform to analyse information from across the business—eventually settling on Amazon Redshift and a range of other AWS services.

The group that identified which technologies to use was composed of about eight data engineers from across the organization.

"The diversity of the team and the fact that people understood the FT and were part of the FT for a significant period meant they were able to run and run fast," said John Kundert, CTO for the FT.

"We decided that we were going to trust this team to make some big decisions for us. That letting go, that no RFP [Request for Proposal], that no central control function, was a big thing.

"The fact they could decide what technologies to use. The fact they could decide what part of the business they were going to support first. All that lay within the team. The only thing we insisted on was that their experiments were continuous, small and that when they learnt things they fed that back."

Be prepared for integration to be more complex than you expect

When the FT built its data platform on top of AWS it gradually realised that integrating it with the paper's systems handling data on subscribers and content was more complex than originally thought.

The primary reason was that these membership and publishing platforms were also being transformed, at the same time as the data platform was being built.

"It's worth recognising that within this effort, in parallel, we were transforming the whole of our technology across the digital business," said Kundert.

"When we started this project, we were looking at current integrations but when we came back and tried to integrate, we recognised that all of that world was changing.

"So we had legacy and we had new...and that interchange became more complex."


About Nick Heath

Nick Heath is chief reporter for TechRepublic. He writes about the technology that IT decision makers need to know about, and the latest happenings in the European tech scene.
