If you want to get Gartner analyst Lydia Leong riled up, just tell her that embracing multicloud to gain infrastructure resilience is a good idea. "Multicloud failover is almost always a terrible idea," she'll respond, for reasons not too dissimilar from those Honeycomb co-founder Charity Majors recently offered. While both offer sound reasons for eschewing the multicloud path to application resilience, it's Leong's voice that CIOs are more likely to heed, given the trust they place in Gartner's recommendations.
SEE: Research: Managing multicloud in the enterprise; benefits, barriers, and most popular cloud platforms (TechRepublic Premium)
And that voice is emphatic: “Most people—and notably, almost all regulators—are entirely wrong about addressing cloud resilience through the belief that they should do multicloud failover …”.
Getting the IT strategy all wrong
But does it really matter? After all, organizations make all sorts of strategic IT bets, many of which won’t work out in the short- or long-term. Why is Leong so incensed about this particular IT strategy?
Actually, the word she used is "aghast," and it's because government regulators, in particular, are marching toward cloud mandates (for resilience standards and testing, among other things, not to mention the European Union planning its own pan-European cloud) that make little sense in the real world. On Twitter, she stressed that "Multicloud failover is complex and costly to the point of nearly always being impractical, and it's not an especially effective way to address cloud resilience risks." So why do we keep raising it as a cure-all to mitigate dependence on the cloud providers? Because it's easy to find bogeymen in those cloud providers: "We talk about concentration risk because big scary improbable things catch people's attention," she said.
Such talk, however, evidences an anemic understanding of how the clouds actually operate, she continued on her blog:
Regulators, risk managers and plenty of IT management largely think of AWS, Azure, etc., as monolithic entities, where “the cloud” can just break for them, and then kaboom, everything is dead everywhere worldwide. They imagine one gargantuan, amorphous data center, subject to all the problems that can afflict single data centers or single systems. But that’s not how it works, that’s not the most effective way to address risk, and testing the “resilience of the provider” (as a generic whole) is both impossible and meaningless.
Instead, clouds are made up of components that talk to each other. When a cloud fails, it’s usually because those components can no longer converse (due, for example, to a network failure). But even here, global outages “have generally been short enough that—given typical enterprise recovery-time objectives for disaster recovery, which are often lengthy—customers typically don’t activate a traditional DR plan,” Leong noted. Sure, it would be better to never go down, but the risk these regulators are over-engineering/over-legislating to avoid is comparatively small.
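Leong's point is that the probable failures are transient ones between components, not whole-cloud collapses. A minimal sketch of handling that class of failure in application code is a retry with exponential backoff and jitter; the `call_with_backoff` helper and the operation it wraps are illustrative, not from any article or library discussed here:

```python
import random
import time


def call_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Retry a flaky operation with exponential backoff and jitter.

    Transient inter-component failures (e.g., brief network blips)
    are far more common than global cloud outages, so tolerating
    them in application code typically buys more uptime than a
    multicloud failover architecture would.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the failure
            # Exponential backoff with jitter to avoid synchronized
            # retry storms against a recovering dependency.
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```

The jitter matters: if every client retries on the same schedule, a dependency that is just coming back can be knocked over again by the synchronized wave of requests.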
SEE: AWS Lambda, a serverless computing framework: A cheat sheet (free PDF) (TechRepublic)
Ultimately, Leong stressed, “[T]he huge cost and complexity of a multicloud implementation is effectively a negative distraction from what you should actually be doing that would improve your uptime and reduce your risks, which is making your applications resilient to the types of failure that are actually probable.” Embrace the clouds’ differentiation, in other words, while architecting and testing for application resiliency (e.g., through chaos engineering).
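Chaos engineering, which Leong points to as the productive alternative, means deliberately injecting failures to verify an application tolerates them. A toy sketch of the idea is a proxy that randomly fails calls to a dependency; real tools such as Netflix's Chaos Monkey operate at the infrastructure level, and every name below is illustrative:

```python
import random


class ChaosProxy:
    """Wrap a dependency and randomly inject failures.

    A toy chaos-engineering probe: run your test suite (or a staging
    workload) through the proxy to confirm the application survives
    the dependency timing out at some rate.
    """

    def __init__(self, target, failure_rate=0.2, rng=None):
        self._target = target
        self._failure_rate = failure_rate
        self._rng = rng or random.Random()

    def call(self, *args, **kwargs):
        # Flip a weighted coin before each call; sometimes fail on
        # purpose instead of invoking the real dependency.
        if self._rng.random() < self._failure_rate:
            raise TimeoutError("chaos: injected dependency timeout")
        return self._target(*args, **kwargs)
```

The design choice worth noting is that the injected fault is an ordinary exception type the application should already handle; if the system only stays up when `failure_rate` is zero, that is the finding the exercise exists to surface.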
Leong seemed to be responding to European regulators in particular, but those beating the "resilience through multicloud" drum come from all over. Wherever the origin, in Leong's expert opinion they are wrong. Given that she is one of Gartner's foremost cloud analysts, it just might pay to heed her advice. There are good reasons for multicloud; resilience simply isn't one of them.
Disclosure: I work for MongoDB, but the views expressed herein are mine alone.