Developers aren't turning to serverless architectures because they're cheaper, but rather because they boost productivity.
There are many good reasons to embrace serverless, the hot new trend in cloud computing. Cost isn't one of them. At least, it's not the best reason.
Oh, sure, serverless architectures promise to cut costs by enabling enterprises to pay solely for the compute time a service like AWS Lambda burns. For example, Coca-Cola is using AWS Step Functions to simplify business logic and reduce costs related to a loyalty program at vending machines, while also expecting to cut video presentation costs by 90% elsewhere using AWS Lambda. Those cost savings are real.
They're also not the point. Not really. These same arguments were trotted out in the early cloud days ("Pay only for what you use!"). Serverless simply takes those early arguments and extends them. The real push for the cloud, however, hasn't been driven by cost: It's a matter of convenience. Serverless may push costs down, but it also pushes convenience way up, and this is why it will win.
Putting a price on serverless
Not that analyst firms aren't rushing to help developers calculate just how much they can save with serverless. 451 Research analyst Owen Rogers, for example, has a great post elucidating the benefits of running serverless.
As Rogers wrote, there are two primary cost benefits to serverless. The first is all about people. This benefit is analogous to benefits developers derive from the cloud, but goes further: "The benefit of IaaS is that employees don't have to spend time procuring and installing a physical server; the benefit of serverless is that employees don't have to spend time procuring and installing any server, physical or virtual." In other words, the first big benefit for developers embracing serverless is that their operational burden goes way down.
The second benefit stems from developers paying only for the compute cycles they consume. This sounds suspiciously similar to the benefits called out for infrastructure-as-a-service (IaaS), but the difference comes from the underlying technology used to deliver serverless, as Rogers wrote:
With serverless, users are charged only for the time they are actively using the platform. With IaaS, a developer needs to have a VM up and running to ensure that code can be executed quickly; thus, there will be times when the VM is idle, and this is sunk cost - wasted expenditure. Even with auto-scaling of servers, there is usually a buffer of over-provisioned resources that exist to provide capacity while additional VMs are spun up. And even as the VMs scale, unused capacity continues to ramp up in large steps. This is not an issue with serverless technology.
Will serverless always be cheaper than a VM-driven public cloud? By Rogers' estimate, this is true "only where the number of times the code is executed is under about 500,000 executions per month." Otherwise, VMs may be the way to go.
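Rogers' break-even claim can be sanity-checked with a back-of-envelope calculation. The sketch below uses assumed, illustrative rates (not current AWS list prices) for a small always-on VM and for serverless compute; the exact crossover point depends heavily on those rates and on each function's duration and memory allocation, which is why break-even estimates like Rogers' 500,000-executions figure vary so much by workload.

```python
# Back-of-envelope comparison of an always-on VM vs. pay-per-execution
# serverless pricing. All rates below are illustrative assumptions,
# not current AWS list prices.

VM_MONTHLY_COST = 20.00             # assumed cost of one small VM, always on
PRICE_PER_GB_SECOND = 0.0000166667  # assumed serverless compute rate
PRICE_PER_REQUEST = 0.0000002       # assumed per-invocation charge


def serverless_monthly_cost(executions, duration_s=1.0, memory_gb=1.0):
    """Estimated monthly serverless bill for a given execution volume."""
    compute = executions * duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = executions * PRICE_PER_REQUEST
    return compute + requests


def break_even_executions(duration_s=1.0, memory_gb=1.0):
    """Executions/month at which the serverless bill matches the VM's."""
    per_execution = (duration_s * memory_gb * PRICE_PER_GB_SECOND
                     + PRICE_PER_REQUEST)
    return VM_MONTHLY_COST / per_execution


if __name__ == "__main__":
    for n in (100_000, 500_000, 2_000_000):
        print(f"{n:>9,} executions/month -> ${serverless_monthly_cost(n):,.2f}")
    print(f"break-even near {break_even_executions():,.0f} executions/month")
```

The precise number matters less than the shape of the curve: below some volume the VM's idle time is pure sunk cost, and above it the per-execution charges dominate.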
And yet they won't be. The reason is convenience.
Putting a price on freedom
This isn't to say that cost considerations play no part in the serverless decision. They do. As Dan Martin, head of APIs at Early Warning, put it to me: "As an architect making serverless decisions for a large company, price is definitely a big part of the equation. It's material enough to allow me to get projects funded that might not otherwise see the light of day."
When I pressed Martin on whether his team could accurately project serverless costs, his response was both "yes" and "no":
We try to scope our serverless footprint and expected volume and make a guess based on [AWS] Lambda pricing. Too early to tell you if we're accurate, but we could be off by 50x and still be happy.
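Martin's scoping exercise lends itself to the same kind of sketch. The figures below are hypothetical (they are not Early Warning's actual volumes, rates, or budget); the point is that when the best-guess bill sits far below the ceiling, even a 50x estimation error leaves the project comfortably funded.

```python
# Rough sketch of the scoping exercise Martin describes: guess a monthly
# execution volume, price it at an assumed blended per-invocation rate,
# and check how wrong the guess could be before the bill is a problem.
# All figures are hypothetical.

ASSUMED_COST_PER_EXECUTION = 0.0000035  # blended compute + request rate


def monthly_estimate(expected_executions, error_factor=1.0):
    """Estimated bill, optionally inflated by an estimation-error factor."""
    return expected_executions * error_factor * ASSUMED_COST_PER_EXECUTION


expected = 2_000_000  # best-guess execution volume per month
budget = 500.00       # hypothetical monthly spending ceiling

best_guess = monthly_estimate(expected)      # roughly $7/month
worst_case = monthly_estimate(expected, 50)  # 50x off, still under budget
```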
It's that second point that suggests cost arguments may simply be a way for developers to justify what they already intend to do, and that intention is driven by 451 Research's first cost factor: Developer time. If Amazon, Microsoft, and Google can remove the need to fiddle with servers (whether physical or virtual) and instead allow developers to focus completely on writing their applications, that's a huge boost to productivity, one driven by developer convenience.
This same convenience/productivity calculus also explains 451 Research's further analysis, which compares the cheapest serverless options among AWS, Microsoft, Google, and IBM, and concludes, "When users' memory requirements match predefined size allocations, we find IBM is cheapest for scripts of 0.1 seconds in duration and Azure is cheapest for scripts of 10 seconds' duration." If anyone thinks developers are therefore going to flock to IBM's cloud when they've hitherto ignored it (IBM is a rounding error and losing share in Gartner's latest numbers), they're misreading the cost calculation yet again.
Developers tend to standardize on a chosen provider, with AWS the most common but Microsoft Azure and Google Cloud also starting to fill key roles in areas like machine learning/AI. A few pennies saved here or there simply isn't the developer's primary focus; shipping a winning application is, and that will have more to do with the core characteristics of the cloud, and the developer's familiarity with them, than with marginal price differences.
By all means, developers should continue to point to superior cost profiles for serverless as a way to justify adoption. That's fair. But the real reason they're embracing serverless has less to do with the money it will save the company, and more to do with the productivity it will foster in the developer.