Security

Security talking point: Disconnecting threat from risk

Solutions--and more open communication--slowly changed the interactions between the security and user communities.

Over the last few years I have seen and participated in an elaborate balancing act between the twinned concepts of threat (the emotion) and risk (the analysis technique) as they apply to the relationship between IT and the internal user community. When the emotional responses to threat begin to outweigh, on one or both sides, the rational analysis of risk, the lines of communication between the groups break down.

A few years ago, I encountered an extreme example of communication breakdown when a client asked me, as a favor, to review a generic assessment produced for his security team by another firm. He hoped a fresh set of eyes would reveal something new.

After reading through the report and the accompanying data, I felt something was off. When I compiled the data from the security-team and user surveys myself, the numbers I arrived at did not match those in the report.

In order to resolve the discrepancy, I asked for, and received, permission to contact the survey participants directly. As I called the participants, I noticed a disturbing trend. The users consistently used words and phrases like "over the top," "restrictive," "crazy," and "arrogant" to describe the security procedures. One user went so far as to say, "Prison guards treat people in solitary better than you treat us."

Problem profile: Threat and risk disconnect

In order to organize my survey results, I drew a table with two rows and two columns on a whiteboard. The rows were labeled "Security" and "Users"; the columns, "Threats" and "Risks." The first column would hold responses expressing feelings of threat. The second would hold responses that showed some thought about either probability of occurrence or impact severity. I then used a technique from anthropological qualitative analysis to break the survey results into their component phrases. These phrases became the data points populating the matrix.
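
To make the mechanics concrete, here is a minimal sketch of the tallying step in Python. The phrase lists, the RISK_MARKERS keywords, and the code_phrase test are hypothetical stand-ins, not the actual survey data or coding rules; in the real exercise, deciding whether a phrase showed risk thinking was a manual judgment call. The sketch only shows how coded phrases roll up into the four cells of the matrix.

```python
from collections import defaultdict

# Component phrases extracted from survey responses, keyed by
# community. These examples are hypothetical, not the client's data.
phrases = {
    "Security": ["users will sabotage the system",
                 "a hacker will break in and do damage"],
    "Users":    ["hackers and viruses are waiting out there",
                 "there is some probability we break things by accident"],
}

# A phrase counts as a "risk" only if it reflects thought about
# probability of occurrence or impact severity; everything else is
# an unexamined "threat." A crude keyword test stands in here for
# the manual qualitative coding.
RISK_MARKERS = ("probability", "likelihood", "impact", "severity")

def code_phrase(phrase: str) -> str:
    return "Risks" if any(m in phrase for m in RISK_MARKERS) else "Threats"

# The two-by-two matrix: (row, column) -> mention count.
matrix = defaultdict(int)
for group, group_phrases in phrases.items():
    for phrase in group_phrases:
        matrix[(group, code_phrase(phrase))] += 1

for row in ("Security", "Users"):
    print(row, {col: matrix[(row, col)] for col in ("Threats", "Risks")})
```

Because every phrase lands in exactly one cell, the threat-to-risk ratio for each community falls out of simple counts.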

The security team's threat box entries coalesced into two groups: internal threats and external threats. The first group revolved around the core concept that the users were actively and maliciously attempting to undermine perfectly sensible policies designed to protect them. The second group revolved around the idea that an unidentified hacker would intrude and do unspecified damage to the system.

In contrast, security's limited number of risk entries focused entirely on the possibility of internal sabotage. The word "sabotage" appeared in every single IT interview; not one person associated with IT brought up the idea of an unintentional security breach.

The users' threat entries focused on external threats and an adversarial relationship with the security team. They felt an honest dread of the "hackers and viruses" poised beyond the corporate fortress. At the same time, they felt the security team "preyed on their fears," "did not do any real work," and "lurked like vultures waiting for us to make a mistake so they can get us fired." In other words, they felt just as threatened by the security team as they did by the outside environment.

The users' limited risk entries showed some level of honest self-appraisal. They recognized their own ignorance of the "why" behind security, and they understood that IT tools gave them new and creative ways to strike back at the company when they became dissatisfied.

In both the security team's row and the users' row, threats received five times more mentions than actual risks. Taken at face value, it seemed obvious that rational discourse between the two groups had failed long ago. In its place I found anger, distrust, and emotive arguments posing as rules enforcement on both sides.

What can we do about it?

Addressing this problem would require more than just a shiny new technical toy and some consulting dollars. The two communities were no longer speaking rationally to one another. My client needed a plan to rebuild communications and trust before something went radically wrong. To do that we decided to address two specific problems: the threat assessments on both sides and the users' lack of knowledge.

Digging into the first problem, we discovered it stemmed from a logical and common root. Security teams and internal users do have conflicting goals when it comes to data access and use. This conflict can, and in this case did, spiral into a highly emotional threat reaction. When the feeling of threat took over, the security team stopped treating its customers as allies in the security process. The customers naturally responded in kind, creating a cycle of miscommunication and continually escalating threat.

The second issue, the internal users' lack of detailed security knowledge, could give us a leverage point for solving the first problem. The internal users were not clueless idiots. They were, however, badly informed and taught to do whatever they had to in order to win. Any time security stepped on their toes without explaining how doing so served the users' best interests, the users concluded the security team wanted to sabotage their work.

In order to address these problems, my client implemented two technical and two social engineering solutions:

  • Established and advertised a security hotline manned by the more extroverted IT security personnel. This became part of the on-call duty rotation. People rarely called it, but its presence did a great deal to "prove" the security team wanted to help the users rather than interfere with them.
  • Built a confidential "security counseling" service, where managers who did not understand their organization's security requirements could get help after hours. Since knowledge is power, this gave the managers who took advantage of it a valuable commodity they could trade with other managers to further their own interests.
  • Increased both the depth of e-mail monitoring and the helpfulness of the returned mail messages. The company already used mail filtering to prevent specific keywords from leaving the company servers. We expanded that service and added more detailed error messages (see the sketch after this list), under the assumption that educated users were less likely to cause long-term trouble. We also expanded the monitoring to allow for greater case-by-case tracking.
  • Increased the frequency and visibility of database security audits. Users proved surprisingly adept at finding loopholes in the security schemes implemented by the developers. Rather than punishing them for this, we rewarded people who reported problems with mentions in the company newsletter and gave them limited "friendly" access to the developers. This gave them a small status boost, one they could parlay into whatever endeavors they considered important.
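
To give a flavor of the "helpful rejection" approach from the third item above, here is a minimal sketch in Python. The BLOCKED_TERMS table, the check_outbound function, and the policy wording are all hypothetical illustrations; the client's actual filtering ran on their existing mail infrastructure. The point is simply to pair each block with an explanation the sender can act on.

```python
# Hypothetical keyword filter that pairs each block with an explanation.
# The terms and policy text are illustrative, not the client's rules.
BLOCKED_TERMS = {
    "project falcon": (
        "This term refers to an unannounced project. See the "
        "data-classification policy, or call the security hotline "
        "if you need to send this material externally."
    ),
    "customer account number": (
        "Customer identifiers may not leave the company servers. "
        "The security counseling service can help you arrange an "
        "approved transfer."
    ),
}

def check_outbound(message_body: str) -> tuple[bool, str]:
    """Return (allowed, explanation) for an outbound message."""
    lowered = message_body.lower()
    for term, reason in BLOCKED_TERMS.items():
        if term in lowered:
            # A real filter would also record this hit to support the
            # case-by-case tracking described above.
            return False, (
                f"Your message was not sent because it contains "
                f"'{term}'. {reason}"
            )
    return True, ""

allowed, why = check_outbound("Draft numbers for Project Falcon attached.")
if not allowed:
    print(why)  # In production this text goes back to the sender.
```

The design choice worth noting is that the rejection text teaches the policy rather than just announcing a block, which is what made users treat the filter as help rather than surveillance.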

These solutions slowly changed the interactions between the security and user communities. Each positive interaction reduced the level of threat that the two groups associated with one another. As the security team proved their competence and discretion, the internal user community extended more trust. As the security team learned more about the realities of business life, they began to accept that most security breaches really did come from mistakes and honest misunderstandings rather than insane malice.
