
Most often, when you deploy a pod to a Kubernetes cluster, it’ll contain a single container. But there are instances when you might need to deploy a pod with multiple containers. Two of the most useful reasons to deploy a multi-container pod are:

  • Sidecar containers: Utility containers that help or enhance how an application functions (examples of sidecar containers are log shippers/watchers and monitoring agents)

  • Proxies/bridges/adapters: Connect the main container to the outside world

The main reason you’d deploy a multi-container pod would be when a single container was incapable of taking care of every aspect of the application. For example, say you deploy a pod for NGINX, but need something to monitor the logs for that container. To do that, you could deploy a multi-container pod.
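To make the sidecar idea concrete before we build our own example below, here’s a rough sketch of the pattern: an application container writes a log file into a shared volume, and a sidecar container tails that file. The busybox image, the container names and the log path here are illustrative assumptions only, not something we’ll deploy in this tutorial:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
  - name: app-logs
    emptyDir: {}
  containers:
  # Hypothetical application container that appends to a log file in the shared volume
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  # Sidecar container that streams the same log file to its own stdout
  - name: log-watcher
    image: busybox
    command: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app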

Believe it or not, the process isn’t terribly challenging.

I’m going to walk you through the process of deploying a multi-container pod to a Kubernetes cluster. Specifically, we’re going to create a pod with two containers: one running the NGINX web server, which shares a volume with a second container. The second container will then write data to that shared volume for NGINX to serve, to show how containers in a multi-container pod can interact.

SEE: Implementing DevOps: A guide for IT pros (free PDF) (TechRepublic)

What you’ll need

The only thing you’ll need to make this work is a running Kubernetes cluster. If you have yet to spin up your cluster, check out: How to deploy a Kubernetes cluster on Ubuntu server. Once you have your cluster up and running, you’re ready to deploy a multi-container pod.
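A quick way to confirm the cluster is ready is to make sure your nodes report a Ready status:

kubectl get nodes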

How to define a multi-container pod

As with everything in the realm of Kubernetes, we define our multi-container pod in a YAML file. So create the new file with the command:

nano multi-pod.yml

In that file, paste the following contents:

apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: ubuntu-container
    image: ubuntu
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello, TechRepublic > /pod-data/index.html"]

Take a look through the YAML file. You’ll see that the first container, named nginx-container, is based on the NGINX image and serves as our web server. The second container, named ubuntu-container, is based on the Ubuntu image and writes the text “Hello, TechRepublic” to the index.html file served up by the first container by way of the shared volume.

Save and close that file.
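If you’d like kubectl to validate the manifest without actually creating anything, you can do a client-side dry run first (the --dry-run=client option is available in reasonably recent kubectl releases):

kubectl apply -f multi-pod.yml --dry-run=client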

How to deploy the multi-container pod

To deploy this multi-container pod, issue the command:

kubectl apply -f multi-pod.yml
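While you wait for it to start, you can keep an eye on the pod with the standard status command:

kubectl get pod multi-pod

Don’t be surprised if the READY column settles at 1/2: the ubuntu-container runs a single command and then exits, so only the NGINX container stays up.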

Once the pod is deployed, give the containers a bit to actually reach the running state (although only the nginx-container will continue running) and then access the nginx-container shell with the command:

kubectl exec -it multi-pod -c nginx-container -- /bin/bash

You should now find yourself at the bash prompt of the nginx container. To make sure our second container did its job, issue the command:

curl localhost

You should see the text “Hello, TechRepublic” printed out (Figure A).

Figure A

Our ubuntu-container successfully wrote the required text to the NGINX index.html file.
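If curl happens not to be included in the NGINX image you pulled, you can read the shared file directly from the same shell instead:

cat /usr/share/nginx/html/index.html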

Huzzah!

And that’s how you can deploy a multi-container pod into your Kubernetes cluster. Although this is a very basic example, it shows you how containers can interact within a single pod.
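When you’re finished experimenting, exit the container shell (type exit) and remove the pod with:

kubectl delete -f multi-pod.yml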
