Amazon's S3 recently experienced widespread downtime, impacting multiple customers in its US-East region. Is this an indicator that you should rethink your cloud plans?
On Tuesday, Amazon Web Services (AWS) experienced outage-like issues with its S3 cloud storage, taking some business customers offline and causing slowdowns for others.
AWS has been around longer than many of us realize: S3, its oldest service, launched in 2006. Downtime is rare in the public cloud, and any interruption can feel like the end of the internet as we know it.
One look at Twitter and you'll find countless people who are locked out of essential services: IFTTT was completely knocked offline, Slack was decidedly less chatty, and other East Coast businesses were suffering severe slowdowns and lag times.
Amazon hasn't called this incident an outage, saying instead that it was an elevated error rate causing massive slowdowns. If all of this is bringing back memories of the 2015 AWS outage, you might be rethinking business in the public cloud, but don't start backing out yet.
Amazon's stated S3 uptime goal is 99.99%, also known as "four nines," which equates to just under an hour of downtime per year, according to Dave Bartoletti, public cloud analyst at Forrester Research. Instead of downtime, though, Bartoletti said we need to think about S3's actual uptime.
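That "four nines" figure is simple arithmetic on the uptime percentage. A minimal sketch (the function name and the three-nines/five-nines comparison points are illustrative, not from the article):

```python
# Downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of allowed downtime per year for a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {downtime_minutes_per_year(pct):.1f} minutes/year")
# four nines works out to about 52.6 minutes per year, i.e. just under an hour
```

Each extra nine cuts the downtime budget by a factor of ten, which is why matching even four nines with self-hosted infrastructure is so difficult.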
"S3 has consistently outperformed the four nines they shoot for, year over year," Bartoletti said. He added that the 2015 AWS outage didn't even involve S3.
AWS, Bartoletti said, is the perfect example of cloud done right. "This isn't a normal incident, nor do we see any indication that the public cloud is becoming unreliable," Bartoletti said. "It's simply a hiccup."
Should you still reconsider?
Outages like this one may be short, but they still cost revenue. E-commerce sites and other companies that depend on visitor traffic simply can't make money if no one can reliably reach their site.
Does that mean the public cloud is immature, unstable, or simply not a good idea? Not at all, Bartoletti argued. "No data has been lost due to S3's incredible redundancy, which is a key feature of the public cloud. It's backed up around the world."
So, how should a company approach a move to the public cloud? Bartoletti said there are two things to consider.
First, check with potential hosts to see what their uptime has been for the past two years. It's unlikely, he said, that local hosting or a private data center can match it.
Second is the issue of what to do instead of the public cloud. Matching its level of redundancy would mean building private data centers all over the country to serve the site and act as backups. The budget needed to build that many data centers alone would be enough to bankrupt many companies, to say nothing of the ongoing maintenance costs.
Public cloud outages can seem alarming, but when four nines or better is your average uptime, there's not much to worry about. According to Bartoletti, the alternatives are simply too expensive or impractical to be realistic for all but the largest organizations.
- 3 public cloud myths highlighted by the Amazon S3 outage (TechRepublic)
- Amazon Web Services: The smart person's guide (TechRepublic)
- 5 steps for a successful large-scale cloud migration to AWS (TechRepublic)
- AWS investigating S3 problem at major data center location (ZDNet)
- How Amazon is planning for the second decade of the cloud revolution (TechRepublic)
- Snap commits to spending $1B with AWS (ZDNet)
- AWS isn't the cheapskate's cloud, and Amazon doesn't care (TechRepublic)