Deploying containers: Six critical concepts (free PDF)
Containers are designed to simplify deployment and maintenance of apps and services—but the benefits could be lost if you don’t keep certain factors in mind. This ebook offers an overview of key considerations to help make your deployments successful.
From the ebook:
Containers are powerful, offering a breadth of capabilities and an easy way to deliver applications and services. But ironically, while the goal of containers is to reduce moving parts for the sake of simplicity and efficiency, there are several complex considerations behind the scenes that must be addressed before your deployments pay off.
I spoke with Scott McCarty, principal product manager of containers at Red Hat, to discuss the topic further. He said that in the enterprise space, it’s important to consider factors including (but certainly not limited to) the six concepts below.
Developers don’t generally think about potential problems from a performance perspective, but just because you can access an application with your web browser doesn’t mean it will handle a huge number of concurrent transactions. You won’t know how well it handles load until it is truly put to the test. Your application may “work on my box,” but will it perform at 1.5M transactions per second in production?
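To make the gap between “works in a browser” and “works under concurrent load” concrete, here is a minimal load-test sketch (not from the ebook). It spins up a throwaway local HTTP server and hammers it with concurrent requests; the request count and worker count are illustrative placeholders, and a real test would target your actual service with a proper tool.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local server standing in for the application under test.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def hit(_):
    # One transaction; True on HTTP 200.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
        return resp.status == 200

N = 200           # total requests (illustrative)
start = time.time()
with ThreadPoolExecutor(max_workers=20) as pool:  # 20 concurrent clients
    results = list(pool.map(hit, range(N)))
elapsed = time.time() - start

ok = sum(results)
print(f"{ok}/{N} succeeded, {ok / elapsed:.0f} req/s")
server.shutdown()
```

A single browser check exercises one request at a time; even a toy harness like this surfaces the throughput number you'd otherwise only discover in production.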
Kubernetes can scale up, but it also consumes a great deal of resources doing so. Containers help solve architectural problems and ensure that all necessary dependencies are present, but they don’t automatically deliver performance once rolled out.
The quality of the underlying language runtimes, web servers, and libraries such as OpenSSL all affect performance. Make sure your Linux distribution has a proactive group of performance engineers testing for regressions and, more importantly, tuning the entire stack for performance.