The year in cloud computing: Key points in 2012

Thoran Rodrigues looks at some of the developments in cloud computing in 2012.

As a new year begins, it is worth taking a moment to look back at how remarkable 2012 was for cloud computing. I had originally predicted that it would be a year marked by experimentation: companies would begin experimenting with the cloud in earnest, and this experimentation would drive evolution in several areas of the market, such as security and reliability.

The evolution of the cloud, however, was much greater than I expected. There were major changes in the competitive landscape; several new products and services were announced by major (and minor) cloud players; and there were troubles that highlighted the key concerns companies and people have with the cloud. Since much of what happened will directly affect everyone who works with the cloud over the course of 2013, it is worth going over the key points.

Increased competition

While by most reckonings Amazon has maintained its dominance in the Infrastructure-as-a-Service tier, some of the big names in technology have tried to shake up the market by introducing new offerings in this space. Microsoft finally launched Windows Azure Virtual Machine instances; HP launched its cloud services, including virtual machines; Google came out with its Compute Engine service offering Linux virtual machines; other large technology firms, such as IBM, improved their offerings to come closer to Amazon's self-service provisioning model; and many smaller players entered the market.

For customers, the increase in competition is excellent news. Not only do they get a broader range of service providers to choose from, but they also benefit from competitive pressures that often lead companies to reduce prices or improve quality of service. We saw a similar trend over the course of 2011, when some cloud providers abolished inbound data transfer fees and the rest of the market soon followed suit. This year, the benefits came in interoperability: as some companies began offering clients the ability to upload, download, and run their own virtual machine images, many of the market leaders added the same capability to their existing services.

The good, the bad and the ugly

Throughout the year, cloud services experienced several significant reliability issues, and Amazon, the market leader, took center stage in many of them. With significant service outages caused by storms, power failures, and other problems, it provided plenty of fodder for cloud naysayers. No failure was more spectacular, however, than the widely reported Netflix service outage on Christmas Eve. It wasn't spectacular for any technical reason, but because the timing could hardly have been worse, and it led to a wave of news stories on "the dangers of the cloud" and the "unreliability of the cloud".

These stories are less than useless. Yes, most cloud services still have reliability issues. Almost everyone who is serious about adopting the cloud, however, takes these issues into account when designing their systems and is largely unaffected by them. Where were all the stories about NASA's Curiosity landing mission, which relied heavily on AWS to stream photos and video to millions of people worldwide without a hitch? While I understand that bad news sells better than rosy stories, the time for fear, uncertainty, and doubt regarding cloud computing has already come and gone.

Moving upmarket

As the benefits of the Infrastructure-as-a-Service tier of the cloud have become better understood and more widely accepted, providers focused on this tier have started offering more value-added services that are better classified as platform services than as infrastructure ones. Examples are the myriad cloud database services now offered by Rackspace, Amazon, and everyone else. Amazon's launch of Redshift, a "data warehouse as a service", is simply the evolution of this strategy.

From a business perspective, this makes perfect sense for providers, since they can charge higher prices for these more complex services. From a customer perspective, these services are also much harder to evaluate, which makes comparing them on price alone almost impossible. Finally, they generate much stronger customer lock-in than infrastructure-as-a-service does. Providers still need, however, to convince customers of the benefits of these services before adoption will rise.

These are some of the developments that made 2012 such an exciting year for the cloud market. Next time we'll explore how some of them will impact 2013, and look at other interesting trends for the new year.


After working for a database company for 8 years, Thoran Rodrigues took the opportunity to open a cloud services company. For two years his company has been providing services for several of the largest e-commerce companies in Brazil, and over this t...