Containers need a number of things in place to work properly. Focus on these six.
Containers are powerful thanks to their breadth of capabilities and the ease with which they can deliver applications and services. Ironically, while the goal of containers is to reduce moving parts for the sake of simplicity and efficiency, there are multiple complex considerations behind the scenes that must be attended to in order to benefit from container deployments.
I spoke with Scott McCarty, Principal Product Manager of Containers at Red Hat, to discuss the topic further.
SEE: Vendor comparison: Microsoft Azure, Amazon AWS, and Google Cloud (Tech Pro Research)
McCarty shared that in the enterprise space, it's important to consider factors including (but certainly not limited to) the six concepts below.
1. Performance

Developers don't generally think about potential problems from a performance perspective, but just because you can access an application with your web browser doesn't mean it will handle a huge number of concurrent transactions. You won't know how well it performs until it is truly put to the test. Your application may "work on my box," but will it handle 1.5 million transactions per second in production?
Kubernetes can scale up, but it also eats a ton of resources doing so. Containers help with architectural problems and ensure all necessary dependencies are present, but they don't automatically deliver performance once an application has been rolled out.
The quality of the underlying language runtimes, web servers, and libraries such as OpenSSL all affect performance. Make sure your Linux distribution has a proactive group of performance engineers testing for regressions and, more importantly, tuning the entire stack for performance.
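Before trusting production traffic, put the containerized app under real concurrency. Below is a minimal sketch assuming a service already listening at an illustrative URL; dedicated load-testing tools such as wrk or hey are better for sustained measurement.

```shell
# Crude concurrency smoke test: fire 200 requests, 50 at a time, and
# tally the HTTP status codes seen. URL is an assumption -- point it
# at your own deployed application.
URL="${URL:-http://localhost:8080/}"
codes=$(seq 1 200 \
  | xargs -P 50 -I{} curl -s -o /dev/null -w '%{http_code}\n' "$URL" \
  | sort | uniq -c)
echo "$codes"
```

Anything other than a clean column of 200s here is worth investigating before the app sees real traffic.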
2. Compatibility

In the world of Linux, programs run on top of a kernel. Most programs use the syscall layer, an API that interfaces with the kernel. When you stick to the syscall layer in Linux, forward compatibility works fairly well.
Linus Torvalds, the creator of Linux, has a strict rule that the kernel should not break backward compatibility. Containers will always be forward compatible because the syscall layer is careful not to break that functionality.
But what happens when you run a newer container image on an older kernel? Or what happens when you tiptoe outside the syscall layer and into APIs like ioctl, /dev, /proc, etc.?
There are temporal and spatial limits to compatibility. Good design and testing can help here. The Linux distribution for the Container Image and Host needs to consider these problems deeply, or users will get caught in a broken state. This is true at the kernel layer, the compiler layer (gcc), and the library layer (glibc) as well as the APIs that are outside the syscall interface.
Another issue: if you only use the syscall functions associated with the C library, you'll probably be fine. However, it's more likely that your app will drag in ancillary pieces of software that aren't part of the app itself, such as troubleshooting or monitoring tools, which use other kernel APIs like ioctl, /proc, or /dev; this can cause issues.
If you upgrade the container host, it might not run the container anymore. In the virtual machine world you usually don't have to worry, but in the physical server world some architectures or chipsets can cause issues (and it's important to note that even with virtualization they occasionally do).
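A quick way to see the split that matters here: the userspace (glibc and everything above it) comes from the container image, while the kernel always comes from the host. A sketch, with an illustrative UBI image name in the comments:

```shell
# The host supplies the kernel; the image supplies everything above it.
echo "Host kernel: $(uname -r)"
echo "Host glibc:  $(ldd --version | head -n 1)"
# Inside a container the kernel is still the host's -- only userspace changes.
# These commands need a container runtime, so they are shown commented:
# podman run --rm registry.access.redhat.com/ubi9/ubi uname -r    # prints the HOST kernel
# podman run --rm registry.access.redhat.com/ubi9/ubi rpm -q glibc  # prints the IMAGE's glibc
```

A newer image glibc paired with an older host kernel is exactly the mismatch the text warns about.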
SEE: System update policy template download (Tech Pro Research)
3. Integration with existing infrastructure
The ecosystem of hardware and software that is supportable maps to the underlying Linux distro. If you need ARM support, the distribution has to have it. Think about supportability: this applies to the container host for hardware and the container image for software.
This is an often forgotten "buying criterion" when selecting container images and container hosts. But remember, the ecosystem for your Linux distribution (hardware and software) is the ecosystem that will be available for your container hosts (hardware) and container images (software). If your Linux distribution supports a particular piece of hardware or cloud provider, then your container hosts will be able to run without a problem. Outside that tested ecosystem, weird bugs involving the Linux kernel have happened.
If your applications are designed and built for a specific distribution of Linux, it will be much easier to put those applications into container images based on that distribution of Linux.
SEE: Hybrid cloud: A cheat sheet (TechRepublic)
4. Security

Similar to performance, security is not something that can be proven by "it works on my laptop." Once a container image is put into production, it will expose your application and all of its dependencies to all of the dangers of the internet. This includes denial of service attacks, data breaches, trojan images, and hacking. All of these things need to be considered when selecting container images and container hosts for your container environment.
Perhaps you didn't download that container image from the Red Hat container catalog but chose to do so from some suspect site. This is a very bad idea. If you don't start with a known good container, someone can inject malicious code into it, and you will not know.
The container world can learn from the XcodeGhost hack, in which someone inserted a trojan into applications that made it into Apple's App Store. There was a similar episode involving Docker Hub a while back, where bad container images were downloaded five or 10 million times.
Knowing the quality of your components is crucial from a security perspective; always use a source you trust, and determine why you trust it: is it the code quality, the patching record, or something else? Keep in mind that a container may be good on the day you download it, but what about in three years?
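One concrete hedge is to pin images by digest and inspect them before running anything. The registry and repository below are illustrative, and the inspection commands are left commented because they require network access:

```shell
# Pin by digest so the image you vetted is byte-for-byte the image you run.
IMAGE="registry.access.redhat.com/ubi9/ubi"   # illustrative trusted source
# skopeo inspect "docker://$IMAGE" | jq -r .Digest   # record the digest you reviewed
# podman pull "$IMAGE@sha256:<digest>"               # pull by digest, not a moving tag
echo "Would pin and pull: $IMAGE"
```

Pulling by digest rather than a tag means a later, tampered-with push to the same tag cannot silently replace what you tested.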
5. Size

One common trend with containers is to recompile the app every time you rebuild the container (common with C and Golang programs): you statically compile everything you need into a single binary and ship it in a scratch container.
This makes for the smallest possible image, but it is not that convenient. The developer is now responsible for everything in that image. One year from now, when something in one of the libraries breaks backward compatibility and the container fails, who fixes it? Whoever does needs a developer skillset; it can't just be operations running "yum update." You might need to recompile the app, which is a much more technical change.
The other way is to build on a base image that has packages, such as a web server dynamically linked against OpenSSL; problems can then be fixed via "yum update" to pull new packages. That's much easier than making code changes, but you end up with a bigger image. And as soon as you add your own software, it doesn't matter how small the base image was; it's now 400 or 500 MB.
There are two main styles of containerized applications being developed: those built on Linux base images, and those built from scratch.
In both of these application styles, users are often sensitive to the size of their container images because it affects how long it takes to pull container images to container hosts. When deploying statically compiled binaries (common with Golang), it's important to build from scratch or select a small base image.
When building an entire ecosystem of software for use within an enterprise, it's more important to think about the size of the entire supply chain (all of the RPM packages and their dependencies), because base layers can often be shared and cached. Reducing the attack surface means reducing the footprint of the entire environment by eliminating duplicate copies of libraries and language runtimes.

SEE: Google Cloud Platform: An insider's guide (TechRepublic download)
6. Support

Support comes in two major forms: lifecycle support and white-glove support.
Lifecycle support governs the length of time during which patches will be available for any given packages (RPMs or debs) within a container image.
White-glove support is what allows you to file tickets, get hotfixes, and advocate for upstream changes.
Both are extremely important depending on the amount of time you will be supporting your containerized applications (hint: longer than you think).
The lifecycle support context is significant because your app is going to run longer than you think: maybe three to five years, or longer. There are plenty of apps and systems that have sat and run for five years. You must consider how long that base image will be supported, so that "yum update" still delivers fixes. After that, you're back to the first model: making code changes, moving to different versions of libraries, and putting the work back in developers' hands, which can be costly.
Ask yourself: Do my container images receive updates? Can I call someone if something is broken and get a patch to fix it? Can I drive the patch if I have a unique problem? Being able to file a ticket and drive the fix is a different level of support than just running "yum update."
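The "yum update" path McCarty describes can be sketched as a periodic rebuild against a supported base image (the image name is illustrative): as long as the base image's lifecycle is active, this pulls patched packages without touching application code.

```shell
# Rebuild regularly so lifecycle patches flow into the image.
cat > Dockerfile.patched <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi
# Pull the latest fixes from the distribution's supported package stream.
RUN yum -y update && yum clean all
EOF
# Requires a container engine, so shown commented:
# podman build --no-cache -f Dockerfile.patched -t myapp:patched .
```

Once the base image's lifecycle ends, this rebuild stops delivering fixes, and the costlier developer-driven path is all that's left.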
Also see

- Why today's containers and microservices will be tomorrow's legacy sooner than you think (TechRepublic)
- Azure Container Instances simplify serverless Linux and Windows containers in the cloud (TechRepublic)
- Open, Container-Based Development Will Power Tomorrow's Business-Critical Apps (TechRepublic)
- X-ray your containers with Google's new Kubernetes monitoring tool (TechRepublic)
- What is cloud computing? Everything you need to know about the cloud, explained (ZDNet)
- Best cloud services for small businesses (CNET)
- Microsoft Office vs Google Docs Suite vs LibreOffice (Download.com)
- Cloud computing: More must-read coverage (TechRepublic on Flipboard)
- What does Linux have to do with containers? Everything (TechRepublic)