Public and private organizations have determined that moving data and software platforms to the cloud is not an all-or-nothing proposition. IT departments are learning to run a mix of on-premises private-cloud and third-party public-cloud services. Creating a hybrid-cloud platform allows workloads to move between private and public clouds as computing needs and costs change, giving businesses greater flexibility and more data-deployment options.
SEE: Quick glossary: Hybrid cloud (Tech Pro Research)
There are pluses and minuses to hybrid clouds. The convenience and adaptability afforded to those who use hybrid-cloud technology come with a cost: Security teams must protect company data and, in many cases, proprietary processes across multiple environments. Dave Shackleford, principal consultant of Voodoo Security and a SANS analyst, decided to address these concerns in the SANS white paper Securing the Hybrid Cloud: Traditional vs. New Tools and Strategies.
“As more organizations adopt a hybrid-cloud model, they’ll need to adapt their internal security controls and processes to public-cloud service-provider environments,” writes Shackleford. “To begin, risk assessment and analysis practices should be updated to continually review the items listed in Figure 1.” Those items are listed below.
- Cloud-provider security controls, capabilities, and compliance status
- Internal development and orchestration tools and platforms
- Operations management and monitoring tools
- Security tools and controls both in-house and in the cloud
The jury is still out on who is ultimately responsible for security in the cloud. Shackleford champions the need for cloud-service providers and their clients to share the responsibility. As for the client, Shackleford believes its security team must have:
- A good understanding of the security controls currently in use; and
- An even better understanding of what security controls they will have to modify to successfully operate within a hybrid-cloud environment.
As to why, Shackleford explains, “It’s almost guaranteed that some security controls won’t operate the way they did in-house or won’t be available in cloud-service provider environments.”
In-house processes IT pros should check
Shackleford suggests examining the following in-house processes.
Configuration assessment: Shackleford says the following configurations are especially important when it comes to security:
- Operating system version and patch level
- Local users and groups
- Permissions on key files
- Hardened network services that are running
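The four checks above can be expressed as an automated compliance test. The sketch below is a minimal, hypothetical example of that idea; the baseline values, key names, and thresholds are invented for illustration and are not from Shackleford's paper.

```python
# Hypothetical sketch: evaluate an instance's reported configuration
# against an approved baseline covering the four items above.
# All baseline values here are illustrative assumptions.

APPROVED_BASELINE = {
    "os_release": "Ubuntu 22.04",
    "min_patch_level": 5,
    "allowed_users": {"root", "deploy"},
    "max_file_mode": {"/etc/shadow": 0o640},   # key files and their loosest allowed mode
    "allowed_services": {"sshd", "chronyd"},   # hardened services expected to run
}

def assess_configuration(snapshot: dict, baseline: dict = APPROVED_BASELINE) -> list[str]:
    """Return human-readable findings; an empty list means compliant."""
    findings = []
    if snapshot["os_release"] != baseline["os_release"]:
        findings.append(f"unexpected OS release: {snapshot['os_release']}")
    if snapshot["patch_level"] < baseline["min_patch_level"]:
        findings.append(f"patch level {snapshot['patch_level']} below minimum")
    for user in set(snapshot["local_users"]) - baseline["allowed_users"]:
        findings.append(f"unapproved local user: {user}")
    for path, mode in snapshot["file_modes"].items():
        limit = baseline["max_file_mode"].get(path)
        if limit is not None and mode & ~limit:   # any permission bit beyond the limit
            findings.append(f"{path} permissions {oct(mode)} exceed {oct(limit)}")
    for svc in set(snapshot["running_services"]) - baseline["allowed_services"]:
        findings.append(f"unexpected network service running: {svc}")
    return findings
```

Running the check against a non-compliant snapshot, such as one with a stray `guest` account and world-readable `/etc/shadow`, yields one finding per deviation, which an agent could report up to a central console.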
Vulnerability scanning: Shackleford advises that systems be scanned on a continuing basis, with any vulnerabilities reported throughout the life cycle of the instance. For scanning and assessing findings, Shackleford notes that one of the following methods is typically used in hybrid-cloud situations.
- Some vendors of traditional vulnerability scanners have adapted their products to work within cloud-provider environments, often relying on provider APIs so that more intrusive scans can be performed on a scheduled or ad hoc basis without manual requests.
- Relying on host-based agents that can scan their respective virtual machines continually.
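The second, agent-based method boils down to continually comparing what is installed on each virtual machine against a vulnerability feed. The sketch below illustrates that comparison; the feed format, package names, and advisory IDs are invented for the example and do not refer to real advisories.

```python
# Hypothetical sketch of the host-agent approach: flag any installed
# package older than the first version that fixes a known issue.
# Feed entries and advisory IDs below are illustrative, not real.

from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    installed: tuple    # installed version, e.g. (9, 1)
    fixed_in: tuple     # first fixed version
    advisory: str       # advisory identifier

VULN_FEED = {
    # package: (first fixed version, advisory id) -- assumed feed format
    "opensshd": ((9, 3), "ADV-0001"),
    "libxml":   ((2, 11), "ADV-0002"),
}

def scan_instance(installed: dict) -> list[Finding]:
    """Compare installed package versions against the feed."""
    findings = []
    for pkg, version in installed.items():
        entry = VULN_FEED.get(pkg)
        if entry and version < entry[0]:    # tuple comparison handles versions
            findings.append(Finding(pkg, version, entry[0], entry[1]))
    return findings
```

An agent running this on a schedule gives the continual, per-instance coverage the bullet describes, without needing network access from an external scanner.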
Security monitoring: Hybrid-cloud environments almost always exist on virtualized multitenant servers, making them difficult to monitor for attacks on a per-customer basis. “Monitoring virtual infrastructure happens at one of several places: the VM/container, the virtual switch, the hypervisor or the physical network,” writes Shackleford. “In almost all cloud environments, the only place we can truly tap into is the VM/container or software-defined network offered by the cloud provider.”
“Considerations on how to architect monitoring tools include network bandwidth, dedicated connection(s) in place, and data aggregation/analysis methods,” continues Shackleford. “Logs and events generated by services, applications, and operating systems within cloud instances should be automatically collected and sent to a central collection platform.”
SEE: Special report: The art of the hybrid cloud (free PDF) (TechRepublic)
With reference to automated remote logging, Shackleford feels most security teams are already knowledgeable about collecting the appropriate logs, sending them to secure central logging services or cloud-based event-management platforms, and monitoring them closely using SIEM and/or analytics tools.
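The collect-and-forward pattern Shackleford describes can be sketched with Python's standard logging module: a custom handler that buffers structured events and flushes them in batches toward a central platform. The "sink" here is a plain list standing in for a SIEM or logging-service client, which is an assumption of this example.

```python
# Minimal sketch of automated remote logging: a handler that batches
# events from a cloud instance and ships them to a central collector.
# The sink is a stand-in for a real SIEM/logging-service client.

import json
import logging

class CentralCollectorHandler(logging.Handler):
    """Buffer structured events and flush them to a central sink in batches."""

    def __init__(self, sink, batch_size=2):
        super().__init__()
        self.sink = sink
        self.batch_size = batch_size
        self.buffer = []

    def emit(self, record):
        # Normalize each event so the central platform can index it.
        self.buffer.append(json.dumps({
            "source": record.name,
            "level": record.levelname,
            "event": record.getMessage(),
        }))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        self.sink.extend(self.buffer)
        self.buffer.clear()

sink = []                                   # stand-in for the central platform
logger = logging.getLogger("cloud-instance")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(CentralCollectorHandler(sink))

logger.info("user login from new location")
logger.warning("login failure for admin")
```

In production the sink would be replaced by an HTTPS or syslog client, and batching would also be bounded by time so events are not held indefinitely.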
According to Shackleford, there is almost no limit to what can be monitored. He believes the following should have priority:
- Unusual user logins or login failures
- Large data imports or exports to and from the cloud environment
- Privileged user activities
- Changes to approved system images
- Access and changes to encryption keys
- Changes to privileges and identity configurations
- Changes to logging and monitoring configurations
- Cloud provider and third-party threat intelligence
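Several of the priorities above translate naturally into detection rules over a normalized event stream. The sketch below shows a few of them as simple rules; the event schema, thresholds, and rule names are invented for illustration and are not from the white paper.

```python
# Hypothetical sketch: a few of the monitoring priorities above expressed
# as simple detection rules over normalized event records.
# Thresholds and the event schema are illustrative assumptions.

FAILED_LOGIN_THRESHOLD = 3
EXPORT_BYTES_THRESHOLD = 10 * 1024**3     # flag exports larger than ~10 GB

SENSITIVE_CHANGES = {"key_access", "logging_config_change", "image_change"}

def triage(events: list[dict]) -> list[str]:
    """Return one alert string per rule match."""
    alerts = []
    failures = {}
    for e in events:
        if e["type"] == "login_failure":                      # unusual logins
            failures[e["user"]] = failures.get(e["user"], 0) + 1
            if failures[e["user"]] == FAILED_LOGIN_THRESHOLD:
                alerts.append(f"repeated login failures for {e['user']}")
        elif e["type"] == "data_export" and e["bytes"] > EXPORT_BYTES_THRESHOLD:
            alerts.append(f"large export ({e['bytes']} bytes) by {e['user']}")
        elif e["type"] in SENSITIVE_CHANGES:                  # keys, images, logging config
            alerts.append(f"sensitive change: {e['type']} by {e['user']}")
    return alerts
```

A real deployment would run such rules inside a SIEM or analytics platform rather than ad hoc code, but the mapping from priority list to rule is the same.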
Silos and point solutions are a concern
We have all, at one time or another, boxed ourselves into a corner with a service or product. For that very reason, Shackleford strongly advises avoiding, at all costs, single-vendor or cloud-native options that do not offer flexibility across different providers and environments.
“Some vendor products will work only in specific environments, and most cloud providers’ built-in services will work only on their own platforms,” he explains. “Such siloing can lead to major headaches when business needs drive organizations to a multi-cloud strategy, necessitating the re-visitation of security controls that meet requirements.”
Shackleford is a strong proponent of shift-left security, a simple concept that is difficult to implement; the idea is to move security considerations closer to the product’s development stage. “In other words, security is truly embedded with development and operations practices and infrastructure (a practice sometimes called SecDevOps or DevSecOps),” writes Shackleford. “Security and DevOps teams should define and publish IT organizational standards for a number of areas, including application libraries and OS configurations that are approved for use.”
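One concrete way to publish and enforce such a standard is a build-time gate that checks an application's dependencies against the approved-library list before anything ships. The sketch below assumes a simple manifest of library-to-version pairs; the library names and versions are illustrative, not an actual approved list.

```python
# Hypothetical shift-left check: fail the build if any dependency in an
# app manifest is missing from the security team's approved-library list.
# Library names and versions below are illustrative assumptions.

APPROVED_LIBRARIES = {
    "requests": {"2.31.0", "2.32.3"},
    "cryptography": {"42.0.5"},
}

def check_manifest(manifest: dict) -> list[str]:
    """Return violations; an empty list means the build may proceed."""
    violations = []
    for lib, version in manifest.items():
        approved = APPROVED_LIBRARIES.get(lib)
        if approved is None:
            violations.append(f"{lib} is not on the approved list")
        elif version not in approved:
            violations.append(f"{lib} {version} is not an approved version")
    return violations
```

Wiring a check like this into CI is what "security embedded with development practices" looks like in the small: the standard is published as data, and every pipeline run enforces it automatically.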
SEE: DevSecOps teams securing cloud-based assets: Why collaboration is key (TechRepublic)
A final caution
Besides the normal due diligence, Shackleford suggests forming a baseline by completing a thorough review of all existing controls and processes before moving data and/or processes to the public cloud. “This will give them the opportunity to adequately protect the data involved, as well as look for equivalent security capabilities in public cloud environments,” advises Shackleford. “Look for tools that can help you manage both in-house and cloud assets in one place, because security and operations teams are usually spread too thin to manage multiple management and monitoring tools across one or more cloud provider environments.”
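That baseline review reduces to a gap analysis: record each in-house control, then diff the set against what a candidate provider offers, so missing capabilities are known before any data moves. The sketch below illustrates the idea; the control names are invented for the example.

```python
# Hypothetical sketch of the pre-migration baseline review: diff the
# in-house control set against a provider's capabilities.
# Control names are illustrative assumptions.

IN_HOUSE_CONTROLS = {"disk_encryption", "network_ids", "dlp", "hsm_key_storage"}

def gap_analysis(provider_capabilities: set) -> dict:
    """Report which controls carry over and which have no equivalent."""
    return {
        "covered": sorted(IN_HOUSE_CONTROLS & provider_capabilities),
        "gaps": sorted(IN_HOUSE_CONTROLS - provider_capabilities),
    }
```

Each entry under "gaps" is a decision to make before migration: find a compensating control, accept the risk explicitly, or keep that workload in-house.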