The massive growth of public cloud has shaken up enterprise security. But there are steps you can take to better protect your organization from threats.
On Wednesday, CloudCheckr CTO and founder Aaron Newman presented a breakout session at AWS re:Invent 2015 detailing some of the ways AWS users can secure what they run on the platform using native AWS capabilities.
Moving to the cloud requires users to rethink perimeter security, Newman said, and how they perform common tasks such as network-based IPS/IDS, network scanning, penetration tests, and vulnerability assessments.
With AWS, your physical assets are already secured. But you still need to focus on guarding the AWS API. As Newman put it, you must learn to see AWS Identity and Access Management (IAM) as your new physical security.
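Treating IAM as physical security means scoping every policy as tightly as you would a badge reader. As a hedged sketch, the hypothetical policy below (the bucket name `example-reports` is a placeholder) grants read-only access to a single S3 bucket and nothing else:

```python
import json

# A minimal, hypothetical least-privilege IAM policy: read-only access
# to one S3 bucket. "example-reports" is an illustrative bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",
                "arn:aws:s3:::example-reports/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Anything not explicitly allowed here is denied by default, which is the behavior you want from a perimeter.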
If you use the AWS platform then, by definition, you share responsibility for security with AWS. As a customer, you are in charge of security for your applications and content, network security, inventory and configuration, data security, and access control. AWS is responsible for securing its core products and infrastructure.
While the security landscape changes with the cloud, some security principles remain. For example, it’s still imperative that you reduce your surface area, or attack surface. Work to limit the number of ways “in” to your organization.
Another principle that remains is defense in depth. Even when using AWS, you must assume a hacker will be able to get past your first layer of defense, so plan for multiple layers of defense.
There are also some attack vectors that don’t change, even with the cloud. Application-level attacks against the web server, and OS and database vulnerabilities should still be accounted for.
Some attack vectors do change, though. Despite its plethora of services and moving parts, AWS is a fairly homogeneous environment, Newman said. This increases security, but it also increases the risk if someone does gain access. Polymorphic targets, changes to network mapping, and reduced network sniffing are also shifts your security team should be accounting for.
So, how do you assess your perimeter security in this new landscape? Leverage the AWS API.
Because you are limited in the types of testing you can perform, use the API to see what you have running, which ports are open, and your security groups and routing tables, among other things.
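One concrete way to do this is to pull your security groups through the API and flag anything open to the entire internet. The sketch below assumes boto3-style data; the helper itself is pure Python, shown here against canned sample data with a hypothetical group ID:

```python
# With boto3 installed and credentials configured, the live call would be:
#   import boto3
#   groups = boto3.client("ec2").describe_security_groups()["SecurityGroups"]
# The helper below works on that response shape.

def open_ingress_rules(groups):
    """Return (group id, from port, to port) for rules open to 0.0.0.0/0."""
    findings = []
    for group in groups:
        for perm in group.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append(
                        (group["GroupId"],
                         perm.get("FromPort"), perm.get("ToPort"))
                    )
    return findings

# Canned example: a group with SSH (port 22) open to the world.
sample = [{
    "GroupId": "sg-12345678",
    "IpPermissions": [{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
}]
print(open_ingress_rules(sample))
```

Run against your real account, an empty result is the goal; every hit is part of your attack surface.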
There are some tests you can run, but AWS has its own rules. If you want to run penetration tests, you must ask for permission to run those tests and scans, and you need to follow the rules and guidelines that AWS has set. For example, AWS prohibits testing certain image types. Keep this in mind as you set out to comb over your current security measures.
One of the most difficult things to account for is the sheer number of AWS product offerings. Currently, Newman said, AWS has more than 40 unique services, many with their own access controls. And, some companies have many different AWS accounts. You need a complete inventory of AWS services and accounts in use within your organization if you want to be able to properly approach security.
Once you have an understanding of your AWS use, it is important to understand the limitations and parameters of the individual products.
For starters, AWS Virtual Private Clouds (VPCs) are wide open by default. VPCs are composed of many moving parts, including network ACLs, security groups, subnets, and routing tables, so make sure that all of your bases are covered.
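Route tables are an easy part to overlook. As a hedged sketch over the data shape EC2's `describe_route_tables` returns (the table ID below is a made-up example), you can flag any table that sends all traffic to an internet gateway:

```python
def public_route_tables(route_tables):
    """Flag route tables that send 0.0.0.0/0 to an internet gateway."""
    flagged = []
    for table in route_tables:
        for route in table.get("Routes", []):
            gateway = route.get("GatewayId", "")
            if (route.get("DestinationCidrBlock") == "0.0.0.0/0"
                    and gateway.startswith("igw-")):
                flagged.append(table["RouteTableId"])
    return flagged

# Canned example: one table routes everything to an internet gateway.
sample_tables = [{
    "RouteTableId": "rtb-0a1b2c3d",  # hypothetical ID
    "Routes": [
        {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
        {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-11aa22bb"},
    ],
}]
print(public_route_tables(sample_tables))
```

A flagged table isn't automatically wrong (public subnets need one), but every one should be deliberate, not a default you forgot about.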
Storage should be a key consideration as you seek to hack-proof your cloud. The AWS Simple Storage Service (S3) allows up to 1000 buckets in an account. Newman said users should begin by taking inventory of their sensitive data and making a point to never grant full permissions to anyone, ever. Familiarize yourself with the S3 access controls and take an aggressive security stance, even if you don’t think it’s critical.
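Auditing those S3 access controls can start with the bucket ACL. The sketch below works on the response shape of S3's `get_bucket_acl` and flags grants to the predefined `AllUsers` and `AuthenticatedUsers` groups, which is exactly the kind of blanket permission Newman warns against:

```python
# Predefined S3 group URIs that make a grant effectively public.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the permissions a bucket ACL grants to public groups."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("URI") in PUBLIC_GRANTEES:
            findings.append(grant.get("Permission"))
    return findings

# Canned example: a bucket readable by everyone on the internet.
sample_acl = {
    "Grants": [{
        "Grantee": {
            "Type": "Group",
            "URI": "http://acs.amazonaws.com/groups/global/AllUsers",
        },
        "Permission": "READ",
    }],
}
print(public_grants(sample_acl))
```

Looping this over every bucket in the account is a quick way to inventory where your sensitive data might be exposed.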
The Relational Database Service (RDS) is another popular AWS product that can exist inside or outside a VPC. If your RDS is not in a VPC, Newman said, you should use database security groups to secure it. Also, if you make it publicly accessible, which Newman didn’t recommend, make sure you restrict source IP access and have the latest patches applied. Additionally, make sure you secure your database snapshots.
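Checking for publicly accessible databases is straightforward from the API, since RDS reports a `PubliclyAccessible` flag per instance. A minimal sketch over `describe_db_instances`-shaped data (the instance names are placeholders):

```python
def publicly_accessible_dbs(instances):
    """Return identifiers of RDS instances flagged as publicly accessible."""
    return [
        db["DBInstanceIdentifier"]
        for db in instances
        if db.get("PubliclyAccessible")
    ]

# Canned example: one internal database, one exposed to the internet.
sample_instances = [
    {"DBInstanceIdentifier": "internal-db", "PubliclyAccessible": False},
    {"DBInstanceIdentifier": "reporting-db", "PubliclyAccessible": True},
]
print(publicly_accessible_dbs(sample_instances))
```

Each hit deserves the follow-up checks Newman describes: restricted source IPs, current patches, and secured snapshots.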
As you continue to implement AWS products, it’s good to understand how certain things are secured. For example, the Simple Queue Service (SQS) is always publicly available and its security is based on policy documents, so you need to be aware of where to find your permissions in those documents. Simple Notification Service (SNS), on the other hand, has permissions based on topic policies.
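Because SQS security lives in policy documents, it pays to know what a tight one looks like. As an illustrative sketch (the account ID and queue name are placeholders), this policy allows only one account's principals to send messages to a queue:

```python
import json

# Hypothetical SQS queue policy: only account 111122223333 may send
# messages to example-queue. All values here are illustrative.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:example-queue",
    }],
}

print(json.dumps(queue_policy, indent=2))
```

Reviewing these documents, rather than assuming the queue is private, is what "knowing where to find your permissions" means in practice; SNS topic policies follow the same pattern.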
Since we are building out security on the AWS API, it’s a good idea to monitor the API itself. AWS CloudTrail records every API call made in your account and supports most AWS services. Newman said it’s “like the video camera in your data center.” The problem is, most people don’t turn it on in the beginning. Newman recommends turning it on in every region and setting alerts for any time it could be disabled.
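The every-region check can itself be automated, since CloudTrail's `describe_trails` response includes an `IsMultiRegionTrail` flag per trail. A sketch over that shape (trail names are placeholders):

```python
def single_region_trails(trails):
    """Return names of trails that do not capture all regions."""
    return [
        trail["Name"]
        for trail in trails
        if not trail.get("IsMultiRegionTrail")
    ]

# Canned example: one global trail, one covering only its home region.
sample_trails = [
    {"Name": "org-wide-trail", "IsMultiRegionTrail": True},
    {"Name": "regional-trail", "IsMultiRegionTrail": False},
]
print(single_region_trails(sample_trails))
```

An empty result (plus an alert on `StopLogging`/`DeleteTrail` events) keeps the "video camera" rolling everywhere.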
Another good monitoring tool is the VPC flow logs, which record each time packets enter or leave a VPC. It’s the “metadata about who’s talking to who,” Newman said.
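Flow log records are space-separated lines in a documented field order, so the "who's talking to who" metadata is easy to parse. A minimal sketch using the default (version 2) record format, with the sample record drawn from AWS's own documentation:

```python
# Field order of the default VPC Flow Logs (version 2) record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Map one flow log line onto its named fields."""
    return dict(zip(FIELDS, line.split()))

# Sample record from the AWS docs: an accepted SSH connection.
record = parse_flow_record(
    "2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
print(record["srcaddr"], "->", record["dstaddr"], record["action"])
```

Filtering parsed records for `REJECT` actions, or for accepted traffic on ports you didn't expect, turns the raw logs into something your security team can act on.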
Securing AWS against hackers requires a deep understanding of AWS and context around its many facets. If you are interested in using additional tools, Newman said, generic tools won’t cut it. Look for purpose-built tools that were made to work in the cloud.