Within the realm of Kubernetes (K8s) security, Portshift is an industry leader; the company focuses on identity-based workload protection for cloud-native applications. Portshift offers solutions for Kubernetes, Zero Trust security, DevOps, and compliance.
Anyone who has followed Kubernetes and containers over the past few years knows that security has become a central concern for this technology. Security issues can arise from nearly any point: container images, runtime engines, poorly secured networks, and more. So for any business looking to adopt container technology, the importance of security cannot be overstated.
Portshift recently released a best practices list for tackling the security issues surrounding the K8s platform. Let’s look at these security tips.
SEE: Kubernetes security guide (free PDF) (TechRepublic)
1. RBAC authorization
Authorization is probably one of the most overlooked issues with Kubernetes. Why? It’s not always a simple hurdle to overcome, because you have to deal with authorization on multiple levels. Consider this: You have authorization from within images, configuration files, third-party applications and services, various developers and/or users… the list of possible authorizations goes on and on. That is why permissions in Kubernetes are handled by role-based access control (RBAC).
This mechanism gives you powerful, fine-grained control over authorization and access. The RBAC API declares four top-level types:
- Role can only be used to grant access to resources within a single namespace;
- ClusterRole can grant the same permissions as a Role, plus cluster-scoped resources, non-resource endpoints, and namespaced resources across all namespaces;
- RoleBinding grants permissions defined in a role to a user or set of users; and
- ClusterRoleBinding is the same as RoleBinding, but across a cluster.
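To make those four types concrete, here is a minimal sketch of a Role paired with a RoleBinding; the namespace, role name, and user are hypothetical placeholders, not anything Portshift prescribes:

```yaml
# Hypothetical Role: read-only access to pods, limited to the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# Hypothetical RoleBinding: grants the pod-reader Role to user "jane" in "dev"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole/ClusterRoleBinding pair follows the same shape, minus the namespace fields, and applies across the whole cluster.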
It is imperative that every K8s admin understand RBAC authorization. For more information, make sure to read the official RBAC documentation.
The Portshift best practices list also includes the ABAC authorization method, but warns that it does include a few operational constraints.
2. Pod Security Policies
The next best practice is pod security. A pod is an object that contains a set of one or more containers. According to Portshift’s best practices, “it is essential to control their deployment configurations. Kubernetes Pod Security Policies are cluster-level resources that allow users to deploy their pods securely by controlling their privileges, volumes access, and classical Linux security options such as seccomp and SELinux profiles.”
Note: A Pod Security Policy controls sensitive aspects of your pod specification. The PodSecurityPolicy object defines a set of conditions a pod must meet in order to be accepted into the system; if a pod does not meet those conditions, it will not be accepted. Pod Security Policies allow an admin to control such things as:
- Running of privileged containers
- Usage of host namespaces
- Volume type usage
- Host filesystem usage
- Requiring usage of a read-only root file system
- User and group container IDs
- Restriction of escalation to root privileges
These are fairly sensitive aspects of your pods, and you need to pay close attention not only to how you set your Pod Security Policies, but also to who has access to them.
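As a sketch of how several of the controls above come together, a restrictive policy might look like the following; the policy name and allowed volume list are illustrative assumptions, not a recommendation for any particular cluster:

```yaml
# Hypothetical restrictive PodSecurityPolicy (policy/v1beta1 API)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                # forbid privileged containers
  allowPrivilegeEscalation: false  # restrict escalation to root privileges
  hostNetwork: false               # forbid usage of host namespaces
  hostPID: false
  hostIPC: false
  readOnlyRootFilesystem: true     # require a read-only root file system
  runAsUser:
    rule: MustRunAsNonRoot         # control user IDs
  fsGroup:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:                         # restrict volume type usage
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```

Remember that a policy like this only takes effect when the PodSecurityPolicy admission controller is enabled and the deploying user or service account is authorized (via RBAC) to use it.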
SEE: Mastermind con man behind Catch Me If You Can talks cybersecurity (TechRepublic download)
3. Secure the production environment
The security of your Kubernetes deployment is only as sound as the production environment it is deployed from and to; this should go without saying, but it does get overlooked. Portshift says this about the issue:
“As companies move more deployments into production, that migration increases the volume of vulnerable workloads at runtime. This issue can be overcome by applying the solutions described above, as well as making sure that your organization maintains a healthy DevOps/DevSecOps culture.”
Your production environment must be secure, from your networks to your development environment, including developer desktops, servers, and, as Portshift pointed out, your DevOps culture. If your developers aren’t working in a secure environment, the chances increase that your Kubernetes deployments can be compromised.
4. Securing CI/CD pipelines
Continuous Integration/Continuous Delivery (CI/CD) allows you to do pre-deployment build-outs, testing, and deployment of workloads; it also enables the automation of many deployment tasks. To make this work, you’ll use a number of third-party tools such as Helm and Flagger.
In order for your Kubernetes deployments to enjoy even a modicum of security, you must lock down everything within your CI/CD pipelines. You absolutely must build tight security practices into this pipeline and every piece of software or service that touches it; otherwise, according to Portshift, “attackers can gain access when these images are deployed and exploit these vulnerabilities in K8 production environments. Inspecting the code of images and deployment configurations at the CI/CD stage can achieve this purpose.”
This particular portion of Kubernetes is where a lot of security can break down. If you’re unsure of how a particular tool accesses your CI/CD pipeline and how it handles things like authorization, learn everything you can about it. A single point of failure in your CI/CD pipeline could be catastrophic to the security of your Kubernetes deployments as a whole.
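One common way to inspect images at the CI/CD stage is to run a vulnerability scanner as a pipeline gate. The sketch below assumes a GitLab CI pipeline and the open-source Trivy scanner; the job name, registry path, and image tag are hypothetical placeholders:

```yaml
# Hypothetical GitLab CI job: fail the pipeline if the built image
# contains HIGH or CRITICAL vulnerabilities
scan-image:
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/myapp:$CI_COMMIT_SHA
```

Because the scan job exits non-zero on findings, a vulnerable image never reaches the deployment stage; the same gating pattern applies in Jenkins, GitHub Actions, or any other CI system.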
5. Add service mesh to the network security layer
Network security is crucial to your Kubernetes deployments and should not be overlooked. A service mesh boosts network security by adding a dedicated infrastructure layer that facilitates service-to-service communication between microservices and balances inter-service traffic based on specific policies.
To this issue, Portshift says:
“It [service mesh] also offers a number of security, reliability, and observability benefits that can help manage cluster traffic and increase network stability that is enhanced by a ‘zero-trust’ security model.”
Istio is currently your best bet for service mesh. Istio helps you to intelligently control the flow of traffic and API calls between services, automatically secure your services through managed authorization, apply policies, and observe what is happening with automatic tracing, monitoring, and logging.
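As one concrete illustration of that managed security, once Istio is installed, a single PeerAuthentication resource can require mutual TLS for all service-to-service traffic; this is a minimal sketch, assuming Istio was installed into the default istio-system root namespace:

```yaml
# Require mTLS for all service-to-service traffic, mesh-wide
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applied to the root namespace, so it covers the mesh
spec:
  mtls:
    mode: STRICT
```

With STRICT mode, workloads in the mesh only accept mutually authenticated, encrypted traffic; you can later narrow the same resource to a single namespace or workload if a mesh-wide policy is too aggressive.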
This is yet another layer you are adding to an already complicated fabric of layers, so before you employ the likes of Istio, be sure you understand it.