Oracle lacks something that the big-three public cloud providers have in spades, something money can't buy: experience running cloud services at scale.
There are many reasons that Oracle's share of the IaaS and PaaS markets remains negligible, but there's perhaps one thread running through them all: Oracle doesn't run any public cloud services at scale. No company on earth knows more about managing data within the traditional enterprise than Oracle, which is why its database scales vertically better than virtually anything else on the market.
Such deep know-how in scaling up traditional enterprise databases, however, may be precisely the reason Oracle will struggle to compete in next-generation cloud databases. As businesses become (cloud) software businesses, they're turning to the vendors with deep experience scaling databases horizontally: Amazon, Microsoft, and Google.
In other words, as the world moves online, the winning cloud vendors will be those forced to innovate databases and other infrastructure necessary for running their own businesses. Oracle simply doesn't have the DNA to keep up.
The genesis of a Dynamo
Intriguingly, Amazon's own experience exemplifies this. Amazon was a major Oracle customer that couldn't get Oracle's database to keep pace with its demands. As Amazon CTO Werner Vogels has detailed: "We were pushing the limits of [Oracle] and were unable to sustain the availability, scalability and performance needs that our growing Amazon business demanded." It wasn't that Oracle's database was bad technology. Quite the opposite. It simply didn't fit the web.
In time, this led to some soul searching: "Our straining database infrastructure on Oracle led us to evaluate if we could develop a purpose-built database that would support our business needs for the long term."
Importantly for Amazon, much of its data didn't require the relational model that has printed money for Oracle over the years: "About 70 percent of operations were of the key-value kind, where only a primary key was used and a single row would be returned. About 20 percent would return a set of rows, but still operate on only a single table."
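The two access patterns Vogels describes can be sketched in plain Python. This is a toy illustration of the patterns themselves, not Amazon's implementation; the table and field names here are hypothetical. A primary-key lookup returns a single item, while the second pattern returns a set of rows but still reads only one table.

```python
# Toy illustration of the two dominant access patterns Vogels describes.
# No real database is involved; all names below are hypothetical.

orders = {  # key-value shape: primary key -> single item
    "order-1001": {"customer": "alice", "total": 42.50},
    "order-1002": {"customer": "bob", "total": 17.25},
}

order_rows = [  # a single table, as a list of rows
    {"order_id": "order-1001", "customer": "alice", "total": 42.50},
    {"order_id": "order-1002", "customer": "bob", "total": 17.25},
    {"order_id": "order-1003", "customer": "alice", "total": 8.00},
]

def get_item(key):
    """The ~70% case: fetch one item by its primary key."""
    return orders[key]

def query_by_customer(customer):
    """The ~20% case: return a set of rows, still from only one table."""
    return [row for row in order_rows if row["customer"] == customer]

print(get_item("order-1001")["total"])   # a single value, by key
print(len(query_by_customer("alice")))   # multiple rows, one table
```

Neither operation needs joins, foreign keys, or the rest of the relational machinery, which is why a purpose-built key-value store could serve the bulk of Amazon's workload.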
Unshackled from the constraints Oracle's RDBMS model imposed, Amazon went to work on what initially launched as an internal database (Dynamo) and eventually became DynamoDB, an externally facing database service designed for "extreme scale," with the performance, security, and other guarantees that companies like Lyft and Nordstrom demand.
The key to DynamoDB's success is that it grew out of Amazon's own database needs. Amazon didn't read about customer demand for massive, horizontal scale; it needed such a database itself, and it earned years of experience building and operating a service to fill that need.
Dog food for dogs
This concept of "dogfooding" one's own services is critical in the cloud and, really, to any great product company. Server Density CEO David Mytton, in fact, uses this concept to predict how serverless functions will play out across the different public cloud providers:
While Google is relying on machine learning and Kubernetes (as Borg) internally, how are they leveraging Cloud Functions? In contrast, AWS is heavily using Lambda for higher-level services such as their voice platforms. The maturity of their product and rapid feature development is evidence.
There are two ways to read Mytton's argument. The first is to look for product maturity within the platform that depends most heavily upon that technology. AWS is particularly strong in serverless today, Mytton told me, "because AWS is productizing their own usage of it," likely through Amazon Echo. AWS is selling what it has become really good at. In a similar way, Google has released the amazing Kubernetes as open source, built on its years of experience running Borg internally.
The kissing cousin to this argument, however, is that we should expect the companies with the most experience running a certain kind of service at scale to deliver products based on that experience. The commonality among the dominant cloud providers (including Baidu in China) is that each has years of experience running consumer services at massive scale, and each benefits from a product perspective as a result.
Which brings us back to Oracle.
Feeding dog food to cats
No one knows relational data better than Oracle, and no one profits more from it. But as Amazon's experience shows, we're sprinting into a future that doesn't depend on relational data, even if SQL remains welcome, as Pivotal's Dan Baskette has called out. In that future of voluminous, varied, streaming data, cloud databases like DynamoDB are taking center stage, and legacy databases are starting to see slower growth.
Yes, a database is hugely sticky, making the shift to cloud databases a multi-decade endeavor. But the pace of migration from Oracle to cloud alternatives is "quickening," AWS chief Andy Jassy assures us.
Oracle, for its part, is trying to shift its business to the cloud as fast as possible, and has shown significant progress. What Oracle will lack, even as it makes this transition, is the DNA (the dog food) necessary to ensure it builds the right product and tests it adequately. Oracle will likely dominate the market for companies that want an Oracle database, just one running in the cloud. But for those that move on to new-school, cloud-first databases, Oracle simply doesn't have the DNA to deliver.