Containers are where the momentum is in enterprise computing. Even VMware, the last stalwart of traditional virtual machines, is embracing containerization with its acquisition of Pivotal. Revenue for the container software market is anticipated to grow 30% annually from 2018 to 2023, surpassing $1.6 billion, according to a recently published IHS Markit report.
From a deployment standpoint, containers are still a different enough paradigm that adoption can become complicated at scale. While Docker itself is straightforward, managing update lifecycles across dozens or hundreds of deployed containers quickly becomes impractical to handle by hand.
SEE: Multicloud: A cheat sheet (free PDF) (TechRepublic)
Kubernetes was developed to address these needs, though Kubernetes itself introduces an additional layer of difficulty, requiring manual work for configuration, software dependencies, networking, logging, tracing, and other debugging. Red Hat's proposed solution to this is Knative, an orchestration platform for an orchestration platform, built specifically for serverless applications: workloads that would otherwise run on AWS Lambda, for example.
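As a rough sketch of what that looks like in practice, a Knative Service collapses the Deployment, Service, and autoscaling configuration that raw Kubernetes would require into a single resource. The service name and container image below are hypothetical, and the API version may vary by Knative release:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                 # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/hello:latest   # hypothetical image
          env:
            - name: TARGET
              value: "World"
```

Applying this one manifest gives Knative everything it needs to route traffic, version revisions, and scale the workload up and down on demand.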
Knative allows organizations to run their own serverless architecture on their own servers; in practice, this is more common than it sounds. “There are many, many reasons… the most common one we hear from our customers is around security,” William Markito Oliveira, senior manager of product management at Red Hat, told TechRepublic. “Financial institutions, healthcare, they would prefer to [be] running on their own data centers, or they can’t move all the data that they have to the cloud.”
“The other is around portability, and the vendor lock-in story, where they want to do serverless but on their own terms,” Oliveira continued. “They want to be able to run the same kind of workload… without having to rewrite all their applications for that specific implementation of serverless.”
“One of the key benefits that you get out of Kubernetes,” according to Oliveira, is consistency. “For every Kubernetes cluster, that application is going to behave exactly the same way regardless of which Kubernetes distribution you are using, or regardless of where that particular Kubernetes [cluster] is running. [Moving] from Cloud provider A to Cloud provider B, there is always some rework that you have to do at the application level.”
OpenShift can also be used to run Knative, according to Oliveira. “With OpenShift 4 it’s fairly straightforward for you to get an OpenShift cluster running on pretty much any cloud provider nowadays. Once you have that… you have a Knative operator there. Click install, wait a couple minutes, and it’s done. The whole platform is set up for you.”
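On the command line, the "click install" Oliveira describes corresponds roughly to creating an Operator Lifecycle Manager Subscription for Red Hat's Knative packaging. The channel name below is illustrative and differs between OpenShift releases:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-operators
spec:
  channel: stable                        # channel name varies by release
  name: serverless-operator              # Red Hat's Knative-based operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Once the subscription is created, the Operator Lifecycle Manager installs and keeps the operator updated, matching the "wait a couple minutes, and it's done" experience from the console.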
Running serverless on your own hardware might seem counterintuitive, as one of the principal benefits of serverless platforms is on-demand billing: compared to a traditional VM, if an application only actively runs for a few minutes each day, the cost savings of using Lambda over EC2 are immense. OpenShift metering does provide insight into how often applications run, and can manage the Kubernetes cluster autoscaler accordingly.
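Knative's scale-to-zero behavior is part of what makes that model workable on owned hardware: an idle service consumes no pods at all. The scaling bounds are set with annotations on the revision template; the service name and image below are hypothetical, and annotation spellings may vary between Knative versions:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: batch-report                   # hypothetical service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"  # cap pods under load
    spec:
      containers:
        - image: quay.io/example/report:latest  # hypothetical image
```

With minScale set to zero, an application that runs only a few minutes a day frees its compute for other workloads the rest of the time, which is the on-premises analogue of Lambda's pay-per-invocation pricing.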
For more, check out “VMware’s Pivotal purchase looks toward a containerized, not virtualized, future” and “Multicloud deployments are twice as likely to fall victim to security breaches” on TechRepublic.