
When doing 'something' isn't better than 'nothing': Risk assessment steps

IBM security pro Jack Danahy warns that some incomplete measures may be more dangerous than doing nothing about security threats. Here are his recommendations for real risk assessment.

By Jack Danahy, Director for Advanced Security, IBM Security Systems

Security is, at best, an inexact science. Rhetorical chestnuts, like "Defense in Depth" and "No Silver Bullet", are used with numbing frequency because organizational vulnerability is broad, has diverse elements, and is a constantly moving target. Veterans know that perfect security is a phantom, and that the trick is to put multiple elements of awareness, process, and technology together in order to create a manageable patchwork that will approximate reasonable protection.

As new challenges arise, or as old ones are resuscitated, organizations look for a new capability to add in order to do "something" about the new threat. Most times, these solutions are not really up to solving the entire problem, and there are whole sectors of our industry where the prevailing view is pretty myopic. But, at the end of the day, doing "something" is "better than nothing". Isn't it?

The sentiment feels right. It has the vibe of "I did my best", and it is usually more than sufficient to satisfy the weak tea of industrial and regulatory prescriptions. But, seriously, does the reality match the intent? Are organizations better off when, institutionally or experientially unable to address the real or complete issue, they choose to "do what they can do"?

I don't think so. Unless there is an uncommon (all right, never-before-seen) transparency about the security conditions that are not met by a chosen solution, there are two immediate results of selecting an incomplete solution. The first is that the pressure and organizational commitment to resolve the issue are diluted and effectively removed. The second is that there is seldom an analysis of what percentage of the original risk is actually being mitigated. The combined impact is that the organization has taken its institutional eye off the ball, and it feels good about doing something. Does that sound like it has become more secure? What's the disconnect?

This is a concern because the purpose of the solution is being defined, inaccurately, as defending against new threats, as opposed to understanding and providing security. By that logic, defending against 20% of the threats is better than defending against 0%. Unfortunately, eliminating 20% of the threats is not helpful if the other 80% are still more than sufficient to crater your organization.
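To put hypothetical numbers on that point (a back-of-the-envelope sketch with invented figures, not a real risk model): if an organization faces many independent threat vectors, eliminating 20% of them barely moves the overall probability of a breach.

```python
# Illustrative model only: assume N independent threat vectors, each with
# annual exploitation probability p. Both numbers are invented.
N = 50          # hypothetical number of relevant threat vectors
p = 0.05        # hypothetical per-vector annual probability of exploitation

def breach_probability(vectors, per_vector_p):
    """Probability that at least one unmitigated vector is exploited."""
    return 1 - (1 - per_vector_p) ** vectors

before = breach_probability(N, p)              # nothing mitigated
after = breach_probability(int(N * 0.8), p)    # 20% of vectors eliminated

print(f"Breach probability, no mitigation: {before:.1%}")   # ~92.3%
print(f"Breach probability, 20% mitigated: {after:.1%}")    # ~87.1%
```

Under these made-up assumptions, "doing something" about a fifth of the threats drops the chance of a breach by only about five points. The remaining exposure is still more than enough to hurt.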

Here's what I mean:

Let's use assessment techniques as an example, and use our own health as an analog for enterprise security. If I feel achy and hot, and it is getting hard to swallow, my first move may be to grab a thermometer and take my temperature. I do this to determine whether or not I have a fever. Similarly, if I am looking to check an application for vulnerability, I may run a simple scan to look for some easily identified vulnerabilities. Both the thermometer and the scan are effective tools for giving me a relatively simple view of "health".
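To make the analogy concrete, here is a minimal sketch of what that thermometer-level scan might look like: a pattern match over source files for a few notorious constructs. The patterns, the file type, and the `src` directory are all invented for illustration; a real scanner is far more sophisticated.

```python
import re
from pathlib import Path

# Illustrative "thermometer" scan: a few notoriously risky patterns.
# A real assessment tool does far more; this shows only the simplest level.
SUSPECT_PATTERNS = {
    "possible SQL injection": re.compile(r"execute\(.*%s.*\)"),
    "hard-coded secret": re.compile(r"(password|secret)\s*=\s*['\"]"),
    "unsafe eval": re.compile(r"\beval\("),
}

def quick_scan(root):
    """Flag lines under `root` that match any suspect pattern."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SUSPECT_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

for path, lineno, label in quick_scan("src"):
    print(f"{path}:{lineno}: {label}")
```

As with the thermometer, a clean run means only that none of these few patterns appeared, not that the application is healthy.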

In the case of my fever, though, I will probably also start thinking about what that fever means: Have I eaten something I shouldn't have (food poisoning)? Do I have any recent injuries (infection)? Does my throat look irritated and raw (virus)? In order for that thermometer reading to have real diagnostic value, I need to move on from that simple assessment to a more detailed set of assessments, like a doctor's visit, a blood test and culture, or maybe an MRI. I don't just pop a couple of aspirin to reduce my fever and forget about it. And I definitely don't do that if I keep taking my temperature and I keep finding that I have a fever.

If we look at our other assessment example, though, the application, what typically happens? In my experience, when vulnerabilities are found by a tool or service, the organization focuses immediately on remediation. There tends not to be a consistent follow-up process of more detailed assessment to look for other types of flaws, or root cause analysis to find out where the vulnerability was introduced. Some of these assessments are done as standalone service engagements, without appropriate consideration or usage models for comprehensive remediation. Because the simple assessment was viewed as "better than nothing", there is a sense that the system has been assessed, vulnerabilities have been found and fixed, and progress has been made.

Adding another reason to stop after a simple assessment: it is probably more than sufficient to satisfy Requirement 6 of the PCI DSS or any general requirement for application security. There has been progress. In the absence of an appetite for introspection, this hurdle has been cleared, and it is time to move on.

It is this model that leads us to the current state, where real leaders in technology are still vulnerable to 20-year-old techniques like SQL injection and to accidental data leakage through unencrypted data transmission and storage. The simplicity and clarity of the assessment solution belie the complexity of the actual organizational mission, which is to become more secure. Assessing software, systems, or practices for a subset of the known issues, without making it unmistakably clear that this is a subset, leads to the continuing advancement and spread of the underlying risk and vulnerability. If I keep popping aspirin to lower my fever, and the problem is strep, or appendicitis, eventually I'm going to be in serious trouble. So what is the answer?

I can hear the complaints now: I've conceded that security is, by its nature, a constant battle to improve because it is impossible to completely succeed. Then, having dragged you in, I've told you that these half measures do more harm than good. What's the point?

The secret here is embracing the real capabilities and outputs of these partial solutions. We have a responsibility to understand what they can find and what they can't. We have to maintain transparency with our organizations and ourselves about what we are looking for as well as what we are finding. By understanding the breadth of issues that we are choosing not to investigate, we do a far better job of understanding how much risk we are actually reducing.

To go back to our simple assessment example, here is a 5-step recommendation for actually making that effort valuable. Not just "Better than Nothing", but actually valuable.

Step 1

Identify the type of assessment you are trying to do. Is this a health check, a "badness-ometer" as Gary McGraw used to call it? Or are you actually trying to assess whether the application is secure enough? If this is a health check, be sure to communicate that. If this is intended to improve security, then move on to Step 2.

Step 2

Define the security characteristics that are required in the software or system. Are you interested in enforcing the use of enablers (encryption, authentication, auditable logging), validating the architecture (use of common secured libraries, secure coding conventions), or identifying errors (buffer overflows, bad input validation)? As you conceive of your approach, specify exactly what will be looked for and, just as specifically, describe what will go undiscovered.
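One way to make this step concrete is to write the scope down as data, so that what is and is not being examined is explicit and reviewable. This is only a sketch; the class, the category names, and the rationales are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentScope:
    """Explicit record of what an assessment will and will not examine."""
    name: str
    in_scope: list = field(default_factory=list)
    out_of_scope: dict = field(default_factory=dict)  # category -> rationale

scope = AssessmentScope(
    name="Q3 web application scan",
    in_scope=[
        "input validation errors",
        "buffer overflows",
        "use of approved crypto libraries",
    ],
    out_of_scope={
        "authentication architecture review": "requires design docs; deferred",
        "auditable logging coverage": "no log access in test environment",
        "business logic abuse": "needs manual review; no budget this quarter",
    },
)
```

Writing the out-of-scope rationale down is what turns "better than nothing" into a defensible, revisitable decision.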

Step 3

Communicate the trade-offs that will be made in deciding on the approach. These will include the set of insecure conditions for which there will be no assessment, and the rationale for each. There should be an attempt to produce reporting that integrates the identification and trending of sought vulnerabilities with a consistent view of the unexamined waterfront. The reasons for not looking for problems are many, and often justifiable: expertise gaps, time crunches, budget pressure, or any of a dozen others. The important thing is to get the limitations of the approach on the table, and to revisit them whenever broad discussions of security are held.
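Continuing the hypothetical AssessmentScope sketch from Step 2, a report might pair each examined category's findings with the standing list of unexamined categories, so the blind spots resurface in every review. The finding counts below are invented.

```python
def coverage_report(scope, findings_by_category):
    """Pair what was found with what was deliberately not looked for."""
    lines = [f"Assessment: {scope.name}", "", "Examined categories:"]
    for category in scope.in_scope:
        count = findings_by_category.get(category, 0)
        lines.append(f"  {category}: {count} finding(s)")
    lines += ["", "Unexamined categories (standing risk):"]
    for category, rationale in scope.out_of_scope.items():
        lines.append(f"  {category}: not assessed ({rationale})")
    return "\n".join(lines)

print(coverage_report(scope, {"input validation errors": 7, "buffer overflows": 2}))
```

The point of the second half of the report is exactly that it never goes away: the unexamined waterfront stays visible until someone consciously decides to fund or retire each item.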

Step 4

Force transparency from your own partners. What will that tool identify? What kinds of issues will the services firm be looking for? How will the internal team leverage the processes or solutions they are provided? From any contributor, demand the same clarity that you intend to provide: specifically, what they can't see and where their blind spots are. When this transparency happens, it is easier to discuss value, price, and expected returns.

Step 5

Embrace the aspirational nature of the finer-grained elements of security. We know that security, itself, is a complex puzzle that is addressed over a long period of time on multiple fronts. In much the same way that fractals repeat consistently at every scale, develop a tolerance and an enthusiasm for understanding and improving each of the elements of your security efforts. Break down the big topics, like Application Security, Network Security, or ID management, into components that are themselves measurable under this new and transparent methodology.
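As a sketch of that decomposition (the component lists are invented examples, not a canonical taxonomy), each big topic can be broken into pieces small enough to carry their own explicit scope and their own coverage measurement:

```python
# Illustrative decomposition: break broad security topics into components
# that can each be assessed, scoped, and trended on their own.
security_program = {
    "Application Security": [
        "static analysis of first-party code",
        "third-party library inventory",
        "secure coding standards adoption",
    ],
    "Network Security": [
        "perimeter firewall rule review",
        "internal segmentation testing",
        "encrypted transport coverage",
    ],
    "ID Management": [
        "privileged account inventory",
        "access recertification cadence",
    ],
}

for area, components in security_program.items():
    print(f"{area}: {len(components)} measurable components")
    for component in components:
        print(f"  - {component} (assess with its own explicit scope)")
```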

Root of our problems

As I was working on this piece, I was struck by how central this issue is to the continuing growth of security as a real concern. With the billions currently spent on security in all of its forms, it is hard to find any single area where risk has been eliminated. While we have focused on what we can eliminate with good tools and practices, those unexamined areas have grown and festered. Take responsibility for what has been accomplished, and what remains, and make sure you can defend your choices as "best for our organization", not "better than nothing."

About Jack Danahy:

Jack is responsible for integrating developing security risks and trends into IBM strategy, outreach, and product management. He is responsible for IBM's Institute for Advanced Security, and for increasing security awareness and traction within critical IBM industry sectors. Prior to joining IBM in 2009, Jack was CEO and founder of two successful security software firms: Qiave Technologies, sold to WatchGuard in 2000, and Ounce Labs, acquired by IBM in 2009. He holds five patents in a variety of security technologies, and is a frequent and featured speaker and writer on topics of security, secure systems development, and the strategic balance between business needs and security controls.
