Many IT and security professionals evaluating cybersecurity solutions get tripped up by the definitions of “autonomous” and “automated.” Contrary to popular belief, these terms are not synonymous; each carries a distinct meaning worth establishing when weighing security strategies.
I spoke with Scott Totman, vice president of engineering at DivvyCloud, to discuss the differences between “autonomous” and “automated” solutions and to learn more about the best use cases of artificial intelligence/machine learning (AI/ML) in cybersecurity.
SEE: Windows 10 security: A guide for business leaders (Tech Pro Research)
Autonomous and automated: Defined
Scott Matteson: Can you define “autonomous” versus “automated?”
Scott Totman: The easiest way to distinguish between “autonomous” and “automated” is by the amount of adaptation, learning and decision making that is integrated into the system.
Automated systems typically run within a well-defined set of parameters and are very restricted in what tasks they can perform. The decisions made or actions taken by an automated system are based on predefined heuristics.
An autonomous system, on the other hand, learns and adapts to dynamic environments, and evolves as the environment around it changes. The data it learns and adapts to may be outside what was contemplated when the system was deployed. Such systems will ingest and learn from increasing data sets faster, and eventually more reliably, than what would be reasonable for a human.
It’s reasonable to view both automated and autonomous systems on a continuum. Systems that were originally automated with a well-defined set of inputs and outputs may need to become ‘smarter’ over time as their usage and the environment in which they operate change. Therefore, one could take an automated system and build in some autonomous capabilities, extending the useful life of the system and its overall applicability.
Looking at this another way, an automated system is one that’s instructed to perform a set of specific tasks with well-understood parameters that are known ahead of time. It is built to perform a specific function repeatedly and efficiently. An autonomous system, by contrast, advises and helps define the right decision or action in an evolving, non-deterministic environment.
SEE: Ebook: 16 must-read books on the impact of AI, robotics, and automation (Tech Pro Research)
Scott Matteson: Which approach is superior?
Scott Totman: It completely depends on the problem being addressed. An autonomous system is often considered ‘superior’ simply due to the increased complexity in its processing capabilities.
However, if you build a system that is highly predictable and performs the same function repeatedly, then an automated system will provide superior value because it is simpler, easier to maintain, and requires fewer resources to keep working. Leveraging autonomous systems for these types of problems could result in the systems ‘learning’ incorrectly and therefore performing the wrong action. Autonomous systems will be truly superior in environments where all conditions cannot be exhaustively tested ahead of time and the system needs to adapt and learn as the environment and other inputs evolve over time.
Real world examples
Scott Matteson: What are some real world examples of each?
Scott Totman: An example of an automated system is infrastructure- and application-level compliance checks within a corporation’s environment. These systems monitor against a well-defined set of compliance standards and inform the organization when systems fall out of compliance. These systems can also take well-defined actions to correct the issue, but this does not imply that they are autonomous.
They are explicitly configured to take a specific action, thereby allowing the organization to have confidence in exactly what is happening to their environments. More often than not, these systems simply flag an issue so that a user or administrator can go in and correct it. This is an assistive technology: it helps a human perform their job rather than replacing one.
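As a rough illustration of this kind of assistive automation, a compliance check like the one Totman describes can be sketched as a fixed set of predefined rules applied to resource configurations. The resource fields, rule names, and flagging behavior below are hypothetical assumptions for the sketch, not DivvyCloud’s implementation.

```python
# Minimal sketch of an automated compliance check: predefined rules only,
# no learning or adaptation. Resource fields and rules are hypothetical.

RULES = {
    "storage_not_public": lambda r: not r.get("public_access", False),
    "encryption_enabled": lambda r: r.get("encrypted", False),
    "mfa_required": lambda r: r.get("mfa_enabled", False),
}

def check_compliance(resource: dict) -> list[str]:
    """Return the names of any rules this resource violates."""
    return [name for name, rule in RULES.items() if not rule(resource)]

def flag_issues(resources: list[dict]) -> None:
    """Assistive behavior: flag violations for a human to review."""
    for resource in resources:
        violations = check_compliance(resource)
        if violations:
            print(f"{resource['id']}: out of compliance -> {violations}")

if __name__ == "__main__":
    flag_issues([
        {"id": "bucket-1", "public_access": True, "encrypted": True, "mfa_enabled": True},
        {"id": "db-7", "public_access": False, "encrypted": False, "mfa_enabled": True},
    ])
```

The key property is that every decision the system can make is enumerated ahead of time in the rule set, which is what gives the organization confidence in exactly what the automation will do.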
An example of an autonomous system is network intrusion detection: looking for anomalies in otherwise normal network traffic. This includes detecting credential stuffing attacks, wherein hackers leverage valid credentials purchased off the dark web to authenticate and compromise a user’s private information on a given system or, worse, move money out of an account in the case of financial institutions.
Autonomous systems use ML to distinguish legitimate customer traffic from credential stuffing logins and block the attempted attack. Autonomous systems are also becoming able to find zero-day exploits before they execute. Most zero-day exploits have some form of heartbeat or other behavior as they wait for instruction. ML-based systems can detect low-frequency, low-volume signatures to identify these exploits and, in some cases, disarm the attack before it occurs.
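To make that concrete, here is a hedged sketch of how ML-based anomaly detection might separate credential-stuffing traffic from normal logins, using scikit-learn’s IsolationForest over simple per-login features. The features, synthetic data, and thresholds are illustrative assumptions, not a description of any specific product.

```python
# Illustrative anomaly detection over login events using an unsupervised model.
# The feature set (attempts per minute, distinct usernames per IP, failure
# ratio) is an assumption for this sketch, not a real product's feature set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" logins: low rate, few usernames per IP, low failure ratio.
normal = np.column_stack([
    rng.normal(2, 1, 500),        # login attempts per minute from the source IP
    rng.normal(1.2, 0.3, 500),    # distinct usernames tried per IP
    rng.normal(0.05, 0.02, 500),  # share of failed attempts
])

# Train on normal traffic, then score new events.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspect = np.array([[120.0, 80.0, 0.9]])  # high-rate, many usernames, mostly failing
benign = np.array([[3.0, 1.0, 0.04]])

print(model.predict(suspect))  # -1 flags an anomaly (candidate credential stuffing)
print(model.predict(benign))   # 1 means it looks like normal traffic
```

The contrast with the earlier rule-based sketch is that the model learns what “normal” looks like from the traffic itself, rather than from rules enumerated in advance.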
For simpler analogies, I like the Roomba as an example of a rudimentary autonomous system. Its function is to clean the floor; however, it decides where to clean based on feedback from its environment. As it runs into objects, it learns to avoid them over time and builds out a map of the space it cleans. It needs to continually learn as furniture, objects, and pets change the environment in which it operates. I refer to it as rudimentary only because it can get stuck, so it still has more learning to do as the model evolves.
SEE: IT leader’s guide to the automated enterprise (Tech Pro Research)
Impact of AI/ML
Scott Matteson: How does Artificial Intelligence/Machine Learning factor in?
Scott Totman: ML can factor into both automated and autonomous systems. For automated systems, ML can be leveraged to handle more complex environments and scenarios while still performing the same function and not introducing uncertainty into the automation.
Specific to compliance systems, ML can be leveraged to increase the intelligence of the automated system, allowing it to minimize the number of false positives as well as anticipate systems that are about to go out of compliance. For example, it can identify precipitating events that frequently result in a system going out of compliance and other ‘upstream’ activities that put a dependent system at risk. The affected systems could be placed on a watch list so the organization can more quickly identify the problem and take action if/when the system actually falls out of compliance.
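One way to read that idea in code is to train a simple classifier on historical precipitating events so resources with high predicted risk can be placed on a watch list before they actually drift out of compliance. This is a toy sketch with fabricated features and labels, not a description of DivvyCloud’s approach.

```python
# Toy sketch: predict which resources are likely to drift out of compliance
# so they can be added to a watch list. Features and labels are fabricated
# assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per resource: [recent config changes, upstream policy changes,
# days since last compliance review]. Label: 1 if it later fell out of compliance.
X = np.array([
    [0, 0, 5], [1, 0, 10], [5, 2, 40], [4, 3, 60],
    [0, 1, 7], [6, 1, 55], [2, 0, 15], [7, 4, 90],
])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

candidates = {"vm-12": [5, 1, 30], "bucket-3": [0, 0, 3]}
for name, features in candidates.items():
    risk = model.predict_proba([features])[0, 1]
    if risk > 0.5:  # arbitrary watch-list threshold for this sketch
        print(f"{name}: predicted compliance risk {risk:.2f} -> add to watch list")
```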
AI and ML are an integral part of an autonomous system. For cybersecurity, an autonomous function can’t operate in an ever-changing environment with an increasing number of attack vectors without some form of built-in intelligence. As adversaries change their attacks, these systems learn to identify them, either through additional training data, improved learning algorithms, or more advanced techniques.
AI and ML will have an increasing role in automated and autonomous systems going forward. The dramatic increase in available data at ever-decreasing costs, coupled with increases in scaled processing power made possible by the cloud, has made the barriers to entry for ML and AI technologies lower than ever. This trend will accelerate over time. For automated systems, ML will enable them to be more resilient and efficient. For autonomous systems, AI will bring about more reliable and sophisticated decision making.
SEE: Special report: A guide to data center automation (free PDF) (TechRepublic)
Scott Matteson: Where does this work best?
Scott Totman: Automated systems work best in well-defined environments with clear functions to perform. These systems can be built efficiently and operate much faster than a human. One area, specific to security, that comes to mind is validating an infrastructure template. As infrastructure increasingly becomes software defined, a CI/CD-like process is needed to validate the configurations. This can be viewed as a pre-deployment compliance check to make sure the infrastructure is provisioned correctly and that human errors are caught.
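A minimal sketch of that kind of pre-deployment gate might look like the following: parse an infrastructure template and fail the pipeline if it contains a risky configuration. The template structure and rule here are hypothetical; real tooling would target a specific format such as Terraform or CloudFormation.

```python
# Minimal pre-deployment compliance gate for an infrastructure template.
# The template layout and the single rule are hypothetical, for illustration only.
import json
import sys

TEMPLATE = """
{
  "security_groups": [
    {"name": "web", "ingress": [{"port": 443, "cidr": "0.0.0.0/0"}]},
    {"name": "db",  "ingress": [{"port": 22,  "cidr": "0.0.0.0/0"}]}
  ]
}
"""

def validate(template: dict) -> list[str]:
    """Return human-readable errors; an empty list means the template passes."""
    errors = []
    for group in template.get("security_groups", []):
        for rule in group.get("ingress", []):
            # Flag SSH open to the whole internet as a policy violation.
            if rule["port"] == 22 and rule["cidr"] == "0.0.0.0/0":
                errors.append(f"{group['name']}: port 22 open to 0.0.0.0/0")
    return errors

if __name__ == "__main__":
    problems = validate(json.loads(TEMPLATE))
    for p in problems:
        print("FAIL:", p)
    sys.exit(1 if problems else 0)  # a non-zero exit fails the CI/CD stage
```

Run as a build step, the non-zero exit code stops the deployment before a misconfigured template ever reaches production, which is exactly the kind of repeatable, well-understood task automation handles well.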
Autonomous systems are most effective in an ever-evolving landscape of new attack vectors and expanding attack surfaces. These systems need access to datasets to learn from, and new algorithms to analyze the data in different ways as the AI space matures.
These systems come at a cost, however, as many are heavily focused on R&D with increasing investments made over time. Due to the increased cost and complexity, these systems are overkill for problems that are just as easily addressed by automated systems. Over time, autonomous systems will require less training data, and the complexity is already being reduced by a combination of open source projects and cloud provider offerings, but they will continue to be more complex and expensive relative to automated systems. In many cases the value they deliver will be worth the investment, so it is a matter of choosing the right technology for a given problem.
In the future
Scott Matteson: Where is the trend headed?
Scott Totman: Whether working on autonomous or automated systems, the trend is headed toward building more intelligence into systems. For autonomous systems, this implies better decision making and the ability to handle more complexity. These systems will ingest a wider array of datasets to inform their operations and will be granted increasing autonomy over the decisions they can make. For automated systems, becoming more intelligent means being more resilient to changes in upstream and downstream systems and being a more reliable component in the larger environment in which they operate.