Derrick Harris offers a great analysis of Google’s AI-driven approach to cloud computing, but the subtext is perhaps even more interesting: The cloud is on lock-in overdrive.

We’ve spent the last decade unhooking data infrastructure from proprietary systems. As Cloudera co-founder Mike Olson rightly stated: “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.”

Well, except for the big-three cloud providers, each of which is wholly proprietary, and becoming more so with each passing day. As Harris wrote, “All three providers will offer Nvidia’s best GPUs for rent in their clouds, just like they all offer managed versions of popular technologies such as Spark or MySQL, but they hope to make their marks with homemade technologies you can’t get anywhere else.”

“Homemade” is another way of saying “you can check out any time you like, but you can never leave.”

A one-way red carpet

This Hotel California of cloud computing is not a new argument. Heck, I first dubbed it as such back in 2009 in a CNET article, and have written extensively about the cloud and lock-in (see here, here, and here, just for starters). On the one hand, cloud consummates the bargain that open source licensing first struck with developers: big productivity gains through easy access to software. Open source makes software as easy as a download, but cloud goes one step further, removing the need to download (and even the bother of licensing: access the API and away you go).

SEE: Open source risks being devoured by the very cloud to which it gave birth (TechRepublic)

The major cloud providers have taken developer convenience as gospel and are now determined to offer more and more software as a service. Indeed, as AWS AI/machine learning chief Matt Wood told me in an interview, “AWS customers get to think about building applications with services, not servers. With S3, DynamoDB, and Lambda, you can build apps without thinking about the underlying infrastructure.”

The less developers have to think about infrastructure, the better, right? Except the less they bother with the substrate for their applications, the more they build on "homemade technologies you can't get anywhere else," hard-wiring their apps to that particular vendor. Choice has a cost, but convenience, it turns out, also has a price.

Can you be boring and agile?

The open question for each of the cloud vendors, and one that Harris calls out, is whether they can pair convenient services with trust. That is, can they give developers the convenience they desire while also offering CIOs (and developers) the predictability of boring, less convenient enterprise software, plus an exit ramp?

This is where companies like Red Hat aim to make their mark, as I’ve written. Red Hat has positioned its OpenShift service as a mediator of sorts for the fast-moving (and sometimes breaking) cloud services, giving developers a consistent app development layer to which they can write. This consistency is what CIOs demand, and it’s the primary value Red Hat has long offered in non-cloud deployments.
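The kind of insulation a consistency layer offers can be sketched as a simple adapter pattern: the application codes against a stable, portable interface, and only the adapter underneath knows which cloud it is talking to. A minimal Python sketch, with entirely hypothetical names (nothing here is the actual OpenShift API or any vendor SDK):

```python
# Illustrative only: class and method names are hypothetical, not a real
# cloud SDK. The point is the shape of the coupling, not the specifics.

class ObjectStore:
    """Portable interface the app codes against, instead of a vendor SDK."""

    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError

    def get(self, key: str) -> bytes:
        raise NotImplementedError


class InMemoryStore(ObjectStore):
    """Stand-in backend; in practice an adapter like this would wrap
    a vendor-specific service (S3, Google Cloud Storage, etc.)."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def save_report(store: ObjectStore, name: str, body: bytes) -> None:
    # The application only sees the portable interface; moving clouds
    # means swapping the adapter, not rewriting the application.
    store.put(f"reports/{name}", body)


store = InMemoryStore()
save_report(store, "q3.txt", b"numbers")
print(store.get("reports/q3.txt"))
```

The trade-off is the one the article describes: an app written this way keeps its exit ramp, but it also forgoes the vendor-specific conveniences that make "homemade" services so attractive in the first place.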

SEE: Red Hat: The cloud needs an OS, and OpenShift is just the thing (TechRepublic)

It’s not that the major cloud providers can’t individually demonstrate their enterprise bona fides. While Google has been called out as having the most to learn in this area, it’s making steady progress. Microsoft, meanwhile, is a C-suite darling, and AWS is many CIOs’ go-to cloud vendor. Each is becoming “boring” in its own right.

However, the real mark of an enterprise cloud strategy is to deliver this predictability across clouds (as well as on-prem resources). Given how much each of the individual clouds is investing in services that isolate workloads, it’s unlikely that any of them can, or would want to, provide this cross-cloud consistency. This leaves an opening for Red Hat, Cloud Foundry, and others. Whether they’ll capitalize on it, however, is another thing altogether.