IT risk management depends upon solid intelligence-gathering capabilities. Faulty data can lead to devastating results.
Avoiding security disasters often means knowing what your risks are to begin with, a process commonly referred to as risk management. However, risk management doesn't come without some risks of its own, chief among them bad data, or more specifically, data resulting from incorrect intelligence. Incorrect or false data can lead to very poor decisions, and the consequences of those decisions can be devastating to business operations.
So how exactly does one avoid misleading information and false data? It comes down to the age-old process of vetting data sources. In the security and IT risk management arenas, that vetting process starts with identifying sources and progresses to building realistic baselines based upon verifiable facts. Simply put, creating risk management plans and planning for remediation is nearly impossible without the proper tools.
Building a baseline
Building a baseline starts with a process called network discovery, in which the network, all connected devices, the software in use, and internet connectivity are scanned to build an inventory. Additional scans are then performed to identify user accounts, rights, access policies, and so on. Administrators cannot determine risk without that information, and that information is all but impossible to gather unless automation becomes part of the discovery process.
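The inventory side of discovery can be sketched as follows. This is a minimal illustration, not a real scanner: the row format, the Device fields, and the build_inventory helper are all assumptions standing in for whatever a discovery tool actually emits.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """One discovered device; these fields are illustrative, not a standard schema."""
    ip: str
    hostname: str
    software: set = field(default_factory=set)

def build_inventory(scan_rows):
    """Merge raw scan rows of (ip, hostname, package) into a per-device inventory."""
    inventory = {}
    for ip, hostname, package in scan_rows:
        # setdefault keeps one Device record per IP across repeated scans
        device = inventory.setdefault(ip, Device(ip, hostname))
        device.software.add(package)
    return inventory

# Sample rows standing in for real scanner output.
rows = [
    ("10.0.0.5", "web01", "nginx/1.24"),
    ("10.0.0.5", "web01", "openssl/3.0"),
    ("10.0.0.9", "db01", "postgres/15"),
]
inventory = build_inventory(rows)
```

Keying the inventory by IP and merging repeated rows is what lets the same structure absorb the multiple scans discussed next.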
However, network discovery is not without risks of its own: it is a process that must be done correctly, with care, and, more importantly, multiple times. Why? Because building a baseline means ensuring that a proper sample of activity is measured over a standard working period. What's more, networks constantly evolve, with devices coming and going and user access changing, especially since many organizations have introduced BYOD (bring your own device) strategies. That near-constant flux makes automated discovery a must-have.
With the proper tools, auditing capabilities and traffic monitoring systems, the risk of bad data is vastly reduced, allowing administrators to create baselines that are based upon reality and not assumptions. With a baseline created, the next step is to interpret the data and then weigh the risks presented.
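One simple way to turn repeated measurements into a baseline is to summarize each monitored metric statistically. The sketch below assumes the metric names and sample values are hypothetical; real tooling would collect far more dimensions.

```python
import statistics

def build_baseline(samples):
    """Reduce repeated measurements per metric to (mean, population stdev)."""
    return {metric: (statistics.mean(values), statistics.pstdev(values))
            for metric, values in samples.items()}

# Hypothetical measurements gathered over a standard working period.
samples = {
    "logins_per_hour": [40, 44, 38, 42],
    "inbound_mbps": [120, 130, 125, 117],
}
baseline = build_baseline(samples)
```

The mean captures what "normal" looks like for each metric, and the standard deviation captures how much day-to-day variation is still normal, which is exactly the comparison point anomaly detection needs.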
Using tools to reduce risk
Tools are an important part of the risk interpretation process, and those tools must be easy to use and offer cues that highlight risk. Those cues may come in the form of visual analytics or integrated reporting. Regardless, administrators must be comfortable with the tool and be able to leverage the data collected in such a fashion that analysis can be accomplished; otherwise, good data goes bad quickly when it can't be understood.
Although remediation is an important part of risk management, remediation cannot be fully effective if risk is not accounted for properly. Here, risk vs. benefit must be weighed, ideally to produce an access policy that enables productivity without exposing the organization to unnecessary risk. The real trick is identifying what is considered unnecessary risk, a definition that can change from organization to organization. One powerful ally comes in the form of leveraging baseline information to detect anomalies. With an accurate baseline, built upon reliable data, anomalies become much easier to detect. After all, if you know what normal looks like, it becomes easy to spot abnormalities.
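Spotting abnormalities against a baseline can be as simple as a standard-deviation check. This is a minimal sketch of that idea; the three-sigma threshold is an assumption, and production systems typically use far richer models.

```python
def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag a measurement more than `threshold` standard deviations from the mean."""
    if stdev == 0:
        # A metric that never varied in the baseline: any change is notable.
        return value != mean
    return abs(value - mean) / stdev > threshold
```

Given a baseline of roughly 41 logins per hour with a standard deviation near 2, a reading of 43 sits inside normal variation, while a reading of 900 is flagged immediately.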
Of course, there is a financial argument to all of this as well. Risk can be expensive to contain; however, failure to acknowledge risk can be much more costly, especially when one considers the ramifications of a security breach. Building repositories of connection data, paired with monitoring baselines and detecting anomalies, can help organizations determine what is safe and what is not.